G1.1 | Machine learning for geodesy
Convener: Benedikt Soja | Co-conveners: Maria Kaselimi (ECS), Milad Asgarimehr, Sadegh Modiri (ECS), Lotfi Massarweh (ECS)
Orals | Wed, 06 May, 08:30–10:15 (CEST) | Room K2
Posters on site | Attendance Wed, 06 May, 16:15–18:00 (CEST) | Display Wed, 06 May, 14:00–18:00 | Hall X1
Posters virtual | Thu, 07 May, 14:00–15:45 (CEST) | vPoster Discussion: vPoster spot 3, Thu, 07 May, 16:15–18:00 (CEST)
This session aims to showcase novel applications of methods from the field of artificial intelligence and machine learning in geodesy.

In recent years, the exponential growth of geodetic data from various observation techniques has created challenges and opportunities. Innovative approaches are required to efficiently handle and harness the vast amount of geodetic data available nowadays for scientific purposes, for example when dealing with “big data” from Global Navigation Satellite System (GNSS) and Interferometric Synthetic Aperture Radar (InSAR). Likewise, numerical weather models and other environmental models important for geodesy come with ever-growing resolutions and dimensions. Strategies and methodologies from the fields of artificial intelligence and machine learning have shown great potential not only in this context but also when applied to more limited data sets to solve complex non-linear problems in geodesy.

We invite contributions related to various aspects of applying methods from artificial intelligence and machine learning (including both shallow and deep learning techniques) to geodetic problems and data sets. We welcome investigations related to (but not limited to): more efficient and automated processing of geodetic data; pattern and anomaly detection in geodetic time series, images, or higher-dimensional data sets; improved predictions of geodetic parameters, such as Earth orientation or atmospheric parameters; combination and extraction of information from multiple inhomogeneous data sets (multi-temporal, multi-sensor, multi-modal fusion); feature selection and sensitivity; downscaling of geodetic data; and improvements of large-scale simulations. We strongly encourage contributions that address crucial aspects of uncertainty quantification and the integration of physical relationships into data-driven frameworks. Moreover, addressing the reproducibility, interpretability, and explainability of machine learning outcomes is fundamental for ensuring the scientific rigor of novel AI-based solutions.

By combining the power of artificial intelligence with geodetic science, we aim to open new horizons in our understanding of Earth's dynamic geophysical processes. Join us in this session to explore how the fusion of physics and machine learning promises advantages in generalization, consistency, and extrapolation, ultimately advancing the frontiers of geodesy.

Orals: Wed, 6 May, 08:30–10:15 | Room K2

The oral presentations are given in a hybrid format supported by a Zoom meeting featuring on-site and virtual presentations. The button to access the Zoom meeting appears just before the time block starts.
Chairperson: Benedikt Soja
08:30–08:35
08:35–08:45 | EGU26-3285 | ECS | On-site presentation
Arno Rüegg, Shuyin Mao, and Benedikt Soja

The ionosphere introduces dispersive delays on GNSS signals, with the magnitude of the error determined by the slant total electron content (STEC) along the satellite–receiver path. Standard correction products, such as Global Ionospheric Maps (GIMs), estimate vertical TEC (VTEC) on coarse spatio-temporal grids, relying on thin-shell assumptions and mapping functions to convert between STEC and VTEC. While effective for many applications, these simplifications limit accuracy, particularly during disturbed ionospheric conditions.

In this work, we present a machine learning–based model for direct STEC prediction, avoiding the need for VTEC mapping. The model is implemented as a ResNet-like multi-layer perceptron (MLP) trained with Gaussian negative log likelihood loss, which allows us to provide uncertainties along with the STEC. To ensure global applicability, the dataset spans observations of the IGS network from 2014 until 2025 and thus more than a full solar cycle, covering diurnal, seasonal, and solar variability. Input features include spatial geometry (station and satellite coordinates, azimuth, elevation), temporal information (time of day, day of year), and space weather indices, enabling the network to capture both spatio-temporal dependencies and heliophysical drivers of ionospheric variability.
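As a rough illustration of the loss described above (not the authors' implementation), the Gaussian negative log-likelihood rewards a prediction head whose variance output matches the actual error spread; all numbers below are synthetic:

```python
import numpy as np

def gaussian_nll(y, mu, var):
    """Negative log-likelihood of y under N(mu, var), averaged over samples."""
    return float(np.mean(0.5 * (np.log(2 * np.pi * var) + (y - mu) ** 2 / var)))

rng = np.random.default_rng(0)
y = rng.normal(5.0, 2.0, size=1000)      # synthetic "STEC" observations

# A well-calibrated head (mu=5, var=4) scores a lower NLL than an
# over-confident one (mu=5, var=0.25) on the same data.
nll_good = gaussian_nll(y, mu=5.0, var=4.0)
nll_bad = gaussian_nll(y, mu=5.0, var=0.25)
```

In a real model, `mu` and `var` would be the two outputs of the network head, with `var` kept positive, e.g. via a softplus activation.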

The pretrained model shows strong agreement with observed STEC (r = 0.95, R² = 0.90) and generalizes robustly across years without daily fitting. Errors scale with STEC magnitude but remain unbiased, reflecting physically consistent behavior under varying ionospheric conditions. On temporally held-out data, the mean absolute error is ~7.2 TECU, with improved performance for interpolation (~4.6 TECU) compared to extrapolation (~9.2 TECU). Daily fine-tuning additionally improves performance, particularly at low elevation angles where VTEC-based mapping functions are weakest, while maintaining comparable accuracy at high elevations. Performance on unseen stations is competitive with established VTEC-based models and global ionospheric maps.

By directly modelling STEC from raw GNSS observations across a solar cycle, this approach provides a flexible, observation-driven alternative to mapping function based models, with applications in precise GNSS positioning, space weather monitoring, and multi-technique ionospheric research.

 

How to cite: Rüegg, A., Mao, S., and Soja, B.: Ionospheric Slant TEC Modelling Based on GNSS Data with Machine Learning, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3285, https://doi.org/10.5194/egusphere-egu26-3285, 2026.

08:45–08:55 | EGU26-2814 | ECS | On-site presentation
Zhenyi Zhang and Benedikt Soja

Tropospheric wet delay remains a key error source for space geodesy, including GNSS, VLBI, and InSAR. Empirical models such as GPT3 are widely used, yet they rely on simplified parameterizations and fixed coefficient tables that limit modeling capacity. Frequent updates are difficult because the entire archive must be reprocessed. With the rapid progress of machine learning, it is natural to seek ML-based tropospheric models that improve both accuracy and efficiency. To date, most work has focused on zenith wet delay (ZWD), which is essentially one-dimensional, while fully data-driven slant modeling has been largely unexplored. Slant wet delays (SWD) are inherently anisotropic, which makes the task more challenging.

We propose a hybrid ML framework that embeds a physical layer inside the network to predict SWD end-to-end and yields a consistent ZWD and wet mapping function as internal outputs. Training uses hundreds of millions of ERA5 ray-traced samples from 2018 to 2022 with global coverage. The resulting ML model outperforms GPT3 for SWD, with markedly lower errors over continental regions where most space-geodetic stations operate and with the largest gains at low elevation angles and along coasts. The learned mapping is asymmetric in elevation and azimuth, which removes the need for explicit horizontal gradients. As ancillary products, the framework provides a ZWD that surpasses GPT3 and a wet mapping function that exceeds the symmetric GPT3 variant and is comparable to the asymmetric one. We also develop augmented variants that accept surface temperature and water vapor pressure as inputs and obtain further accuracy gains. To our knowledge, this is the first ML-based model that directly predicts SWD. The model is compact and faster than GPT3 when applied to large sample sets. The hybrid design supports efficient fine-tuning with new observations and provides a practical path to maintainable routine processing and continued advances in space-geodetic troposphere modeling.
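The physical relationship a layer like the one described above encodes, SWD as ZWD scaled by an elevation-dependent wet mapping function, can be sketched with a standard continued-fraction (Herring-form) mapping function. The round-number coefficients below are near typical wet values and purely illustrative; the authors' learned mapping is additionally azimuth-dependent:

```python
import numpy as np

def wet_mapping_function(elev_rad, a=5.8e-4, b=1.4e-3, c=4.4e-2):
    """Continued-fraction (Herring-form) wet mapping function.
    Coefficients are illustrative round numbers, not fitted values."""
    num = 1 + a / (1 + b / (1 + c))
    s = np.sin(elev_rad)
    den = s + a / (s + b / (s + c))
    return num / den

zwd = 0.15                                    # zenith wet delay [m], illustrative
elevs = np.radians([90.0, 30.0, 10.0, 5.0])   # descending elevation angles
swd = zwd * wet_mapping_function(elevs)       # slant wet delay grows toward the horizon
```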

How to cite: Zhang, Z. and Soja, B.: Hybrid Machine-Learning Framework for Slant Wet Delay Modeling, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-2814, https://doi.org/10.5194/egusphere-egu26-2814, 2026.

08:55–09:05 | EGU26-14292 | ECS | Virtual presentation
Nihal Tekin Ünlütürk and Mehmet Bak

Zenith Total Delay (ZTD) derived from Global Navigation Satellite System (GNSS) observations is a critical parameter for precise positioning and atmospheric studies. ZTD is continuously estimated at GNSS stations, forming high-resolution temporal time series that reflect the dynamic behavior of the troposphere. In recent years, deep learning approaches have been increasingly applied to ZTD estimation due to their ability to model nonlinear relationships. However, particularly for time-series reconstruction problems, it remains an open question whether the added architectural complexity of such models is always necessary.

In this study, feature-driven classical machine learning models and a data-driven neural network approach are systematically compared for reconstructing missing ZTD values in GNSS time series. The analysis is based on ZTD observations from six International GNSS Service (IGS) stations covering the period from February 2023 to January 2024. All models are trained using an identical feature set comprising lagged ZTD values, tropospheric gradients, ZTD variances, station coordinates, and temporal attributes. This design ensures a fair and interpretable comparison between different modeling approaches.

Linear Regression serves as the baseline model, Random Forest represents a nonlinear yet interpretable machine learning approach, and a Fully Connected Neural Network (FNN) is employed as the deep learning model. Model performance is evaluated using Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and the coefficient of determination (R²), with a leave-one-station-out validation strategy applied to assess generalization capability.
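The leave-one-station-out strategy can be sketched as follows; the features, data, and the plain least-squares predictor here are synthetic stand-ins for the models compared in the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_stations, n_obs = 6, 200

# Synthetic ZTD-like data: one feature row per observation, grouped by station.
station = np.repeat(np.arange(n_stations), n_obs)
X = rng.normal(size=(n_stations * n_obs, 3))   # e.g. lagged ZTD, gradient, variance
coef = np.array([2.3, -0.7, 0.4])
y = X @ coef + 0.1 * rng.normal(size=len(X))

def leave_one_station_out(X, y, station):
    """Train on all stations but one, test on the held-out station; per-station MAE."""
    maes = []
    for s in np.unique(station):
        train, test = station != s, station == s
        # Ordinary least squares with an intercept column.
        A = np.column_stack([np.ones(train.sum()), X[train]])
        beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.column_stack([np.ones(test.sum()), X[test]]) @ beta
        maes.append(np.mean(np.abs(pred - y[test])))
    return np.array(maes)

maes = leave_one_station_out(X, y, station)
```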

The results indicate that the Random Forest model achieves accuracy comparable to that of the FNN, while exhibiting greater stability and consistency across stations. The results highlight that incorporating physically meaningful features into the input space can be as effective as increasing model complexity for ZTD reconstruction. The study provides methodological and practical insights for selecting appropriate modeling strategies in tropospheric delay estimation.

How to cite: Tekin Ünlütürk, N. and Bak, M.: Comparing Feature-Driven and Data-Driven Models for GNSS-Based ZTD Reconstruction, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14292, https://doi.org/10.5194/egusphere-egu26-14292, 2026.

09:05–09:15 | EGU26-16707 | ECS | On-site presentation
Saeid Haji-Aghajany and Witold Rohm

Accurately representing atmospheric humidity is critical for Artificial Intelligence (AI)-based weather forecasting models, which mostly rely on physics-based weather data such as ERA5 in both training and deployment stages. Unlike physics-based weather forecasting models, which continuously ingest humidity observations from different sources through data assimilation techniques, AI weather models are not equipped with a data assimilation framework to integrate observations into their system. This is one of the most important limitations of these models, restricting their ability to forecast small-scale weather events that are mainly driven by convection and closely tied to humidity. To address this gap, there is a need to develop an AI-based data assimilation framework for integrating reliable observations into current AI-based weather forecasting models as an auxiliary component.

Global Navigation Satellite Systems (GNSS) observations provide reliable humidity measurements with strong sensitivity to the wet refractivity of the atmosphere, which plays an important role in numerical weather prediction, GNSS positioning, and atmospheric monitoring.

In this study, as an initial step, we present a physics-informed deep learning-based framework to assimilate ground-based and space-based GNSS data into ERA5 Three-Dimensional (3D) wet refractivity fields.

The proposed framework assimilates ground-based GNSS Zenith Wet Delays (ZWD), GNSS Radio Occultation (RO) profiles, radiosonde measurements, and voxel mask data that represent the number of signal rays intersecting each voxel, as derived from a ray-tracing technique, to update an initial 3D wet refractivity field from ERA5 data. A 3D Convolutional Neural Network (3D-CNN), which uses residual and convolutional block attention modules, is employed to capture the nonlinear relationships between multi-source observations and 3D wet refractivity distributions. The assimilation procedure is formulated using a hybrid physics-informed loss function that simultaneously constrains (i) GNSS ZWD consistency at station locations, (ii) voxel-wise agreement with RO-derived wet refractivity, (iii) adherence to the ERA5-based initial state, and (iv) bias reduction in ZWD. The updated 3D wet refractivity field is evaluated using ZWD derived from independent GNSS observations and radiosonde measurements.
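The four-term hybrid loss described above can be sketched on toy arrays; the linear `zwd_op` below is an illustrative stand-in for the ray-tracing operator, and the term weights `w` are arbitrary, not the authors' values:

```python
import numpy as np

def hybrid_loss(n_pred, n_init, zwd_obs, zwd_op, ro_obs, ro_mask,
                w=(1.0, 1.0, 0.1, 0.5)):
    """Weighted sum of the four constraint types, on toy arrays.
    n_pred/n_init: predicted and ERA5 initial 3D wet-refractivity fields;
    zwd_op: linear map from the field to station ZWDs (stand-in for ray tracing);
    ro_obs/ro_mask: RO-derived refractivity and voxel mask."""
    zwd_pred = zwd_op @ n_pred.ravel()
    l_zwd = np.mean((zwd_pred - zwd_obs) ** 2)          # (i) ZWD consistency
    l_ro = np.mean(ro_mask * (n_pred - ro_obs) ** 2)    # (ii) RO agreement
    l_bg = np.mean((n_pred - n_init) ** 2)              # (iii) adherence to ERA5 state
    l_bias = np.mean(zwd_pred - zwd_obs) ** 2           # (iv) ZWD bias reduction
    return w[0] * l_zwd + w[1] * l_ro + w[2] * l_bg + w[3] * l_bias

rng = np.random.default_rng(2)
n_init = rng.normal(50.0, 5.0, size=(4, 4, 4))          # toy 3D refractivity field
zwd_op = rng.uniform(size=(10, n_init.size)) / n_init.size
n_true = n_init + 1.0
# A field matching the observations exactly only pays the background penalty.
loss_truth = hybrid_loss(n_true, n_init, zwd_op @ n_true.ravel(), zwd_op,
                         n_true, np.ones_like(n_true))
```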

The obtained results demonstrate that the proposed deep learning-based assimilation framework significantly improves 3D wet refractivity estimation and ZWD accuracy relative to the initial ERA5-driven state, while producing physically consistent structures. The framework provides a scalable pathway for assimilating humidity data from different types of GNSS measurements and other remote sensing techniques into reanalysis datasets, thereby enhancing the meteorological parameters used in AI-based weather forecasting models.

How to cite: Haji-Aghajany, S. and Rohm, W.: AI-Based GNSS Data Assimilation of ERA5 3D Wet Refractivity Fields: Initial Results, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-16707, https://doi.org/10.5194/egusphere-egu26-16707, 2026.

09:15–09:25 | EGU26-21895 | ECS | On-site presentation
Alexandros Matakos, Carlos Peralta, Nico Renaldo, Jaakko Santala, and Kim Kaisti

GNSS Meteorology is an increasingly important source of atmospheric observations, providing near-real-time information on tropospheric water vapor derived from GNSS signal delays. These observations become especially valuable when available at high density, enabling improved characterization of mesoscale moisture gradients and rapidly evolving atmospheric structures. Skyfora’s Telecom GNSS Meteorology enables such dense coverage by repurposing existing telecom infrastructure as a distributed atmospheric sensing network and extracting GNSS-derived delay information at scale. Such observation streams are particularly valuable in regions where conventional radiosonde, radar, or dense surface networks are limited.

In this contribution, we present an AI-enabled data assimilation method for integrating GNSS-derived tropospheric parameters into modern weather modelling systems. Rather than relying on classical variational methods alone (3D-Var, 4D-Var) and their associated linearized observation operators and background-error assumptions, the approach leverages generative, physics-informed machine learning models to produce dynamically consistent atmospheric state estimates while accounting for observational uncertainty and irregular sampling.

We further highlight the practical deployment of GNSS Meteorology through two demonstration case studies: (i) a national-scale network trial in Latvia and (ii) a live demonstration in the Barcelona region. Together, these cases illustrate how GNSS-derived atmospheric observations can be operationalized into scalable atmospheric monitoring capabilities. The results emphasize the potential of combining novel observation networks with AI-based assimilation to enhance atmospheric situational awareness and support future improvements in forecast skill.

How to cite: Matakos, A., Peralta, C., Renaldo, N., Santala, J., and Kaisti, K.: GNSS Meteorology: AI-Enabled Assimilation of GNSS-derived Tropospheric Parameters with Two Demonstrated Case Studies, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21895, https://doi.org/10.5194/egusphere-egu26-21895, 2026.

09:25–09:35 | EGU26-3777 | ECS | On-site presentation
Jinzhao Si, Shuangcheng Zhang, Jinqi Zhao, and Zhong Lu

Tropospheric delay is a primary limitation for surface deformation retrieval using Synthetic Aperture Radar Interferometry (InSAR) and presents challenges due to its spatiotemporal heterogeneity. This study proposes a correction framework integrating geodetic parameter estimation theory with deep learning. It aims to robustly estimate topography-correlated stratified delays while rigorously applying a new deep learning network to suppress turbulent delays. First, a quadtree segmentation method is employed to partition the area of interest. Within each homogeneous segment, the topography-correlated stratified delay phase is robustly estimated using an adaptive-order functional model fitted via weighted least squares. Subsequently, a time-domain differentiation technique is applied to isolate high-frequency turbulent signals, thereby constructing a realistic turbulent sample dataset. Finally, by integrating the strengths of the U-Net architecture and the Convolutional Block Attention Module (CBAM), a Spatio-Temporal Turbulence U-Net (STTU-Net) is designed based on the statistical spatio-temporal characteristics of the real turbulence sample dataset. This network learns the detailed evolution of random turbulent fields, enabling an efficient, data-driven approach to turbulent delay correction. Applied to Sentinel-1 data over Southern California, the method reduces the average interferogram phase standard deviation by 27% and weakens the phase–elevation correlation. After full correction, the RMSE between InSAR and GNSS time series decreases from 4.7 cm to 2.2 cm. The estimated total delays also agree well with GNSS-ZTD (correlation: 0.84; RMSD: 1.94 cm). Results from simulated data confirm that this method effectively suppresses tropospheric delay while fully preserving genuine deformation signals of varying characteristics, thereby providing a systematic and verifiable solution for tropospheric delay correction in InSAR.
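The stratified-delay estimation step can be illustrated with a first-order weighted least-squares fit of phase against elevation on synthetic data; the actual method selects the model order adaptively per quadtree segment, and all numbers here are made up:

```python
import numpy as np

rng = np.random.default_rng(3)
h = rng.uniform(0.0, 2000.0, size=500)                   # pixel elevations [m]
phase = -0.004 * h + 2.0 + 0.3 * rng.normal(size=500)    # stratified delay + noise
w = rng.uniform(0.5, 1.0, size=500)                      # per-pixel weights (e.g. coherence)

# Weighted least squares: scale design-matrix rows and observations by sqrt(w).
A = np.column_stack([h, np.ones_like(h)])
sw = np.sqrt(w)
coeffs, *_ = np.linalg.lstsq(A * sw[:, None], phase * sw, rcond=None)
stratified = A @ coeffs                                  # estimated topography-correlated delay
```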

How to cite: Si, J., Zhang, S., Zhao, J., and Lu, Z.: Geodesy-Informed Deep Learning for InSAR Tropospheric Correction: Adaptive Weighted least squares and STTU-Net, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3777, https://doi.org/10.5194/egusphere-egu26-3777, 2026.

09:35–09:45 | EGU26-11670 | On-site presentation
Qi Zhang and Teng Wang

Interferogram denoising is a critical step in interferometric synthetic aperture radar (InSAR) data processing, as it directly affects the accuracy and reliability of final products, such as surface deformation measurements and digital elevation models. Current state-of-the-art data-driven denoising models primarily rely on training datasets with noise simulated using statistical models, such as the complex Gaussian distribution. While effective in controlled scenarios, these simulated noises often fail to capture the intricate variability of real-world interferometric noise, leading to degraded performance in practical applications. In this study, we propose a semi-supervised self-boosting learning method for InSAR phase denoising, marking the first instance of training a model using real-world noise. The method consists of two phases: (1) Model excitation, where the model is initially trained with simulated noise to develop basic denoising capabilities; (2) Refinement boosting, an unsupervised, iterative process where the model gradually refines itself using real-world noise. Specifically, this phase involves four interconnected steps: noise extraction, where noise is extracted from real interferograms; noise purification, which removes residual signal components; data augmentation, where purified noise is updated into the training dataset; and model enhancement, which iteratively refines the model to improve its generalization to real interferograms. We identified the cost-optimal denoising model by conducting experiments across network architectures of varying complexity, using identical training datasets and experimental settings. Experimental results validate the effectiveness of the proposed method on both synthetic interferograms with varying coherence levels and real Sentinel-1 interferograms. 
On synthetic data, the method demonstrates superior denoising performance, achieving the lowest root mean square errors (RMSE) and highest structural similarity index measures (SSIM) compared to state-of-the-art techniques such as NL-InSAR and InSAR-BM3D, while maintaining inference speeds comparable to simpler methods like BoxCar. On Sentinel-1 interferograms, the approach consistently delivers improved denoising results, as evidenced by the fewest phase residues and the smoothest phase unwrapping. Our findings also reveal that when training data are comprehensive and well-aligned, increasing model complexity does not necessarily lead to substantial improvement; simpler architectures can yield results comparable to those of more sophisticated models. Additionally, by analyzing noise simulated from the coherence-guided statistical model and noise extracted from Sentinel-1 interferograms, we observe a significant discrepancy between simulated and real noise distributions, with the former failing to capture the complexities of real-world noise. This underscores the importance of incorporating real-world noise into training datasets for data-driven InSAR models, e.g., for denoising, unwrapping, and other applications. Overall, this research introduces a robust methodology for interferogram denoising and enhances our understanding of the complexities of real-world interferometric noise, paving the way for further advancements in noise modeling and interferogram restoration.
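The noise-extraction idea, separating a high-frequency residual from a smooth interferometric signal, can be sketched with a simple moving-average filter on synthetic data; the study's extraction and purification steps are considerably more elaborate:

```python
import numpy as np

def boxcar(img, k=5):
    """k x k moving-average filter via 2D cumulative sums (no SciPy needed)."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    c = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))          # prepend zero row/col for box sums
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / k**2

rng = np.random.default_rng(4)
x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 64), np.linspace(0, 4 * np.pi, 64))
signal = np.sin(x) + np.cos(y)               # smooth "deformation" phase
noisy = signal + 0.5 * rng.normal(size=signal.shape)
noise_est = noisy - boxcar(noisy)            # high-frequency residual as a noise sample
```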

How to cite: Zhang, Q. and Wang, T.: A Semi-Supervised Self-Boosting Learning Method for InSAR Phase Denoising, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-11670, https://doi.org/10.5194/egusphere-egu26-11670, 2026.

09:45–09:55 | EGU26-14591 | ECS | On-site presentation
Mehdi Shafiei Joud, Fan Yang, and Shivam Chawla

Hydrological and hydro-geodetic applications increasingly require high-resolution characterization of aquifer-scale water-storage dynamics over large regions, while existing hydrology models and gravimetric observations operate at much coarser spatial resolution. InSAR provides dense surface-deformation observations sensitive to subsurface water-storage changes, but converting these measurements into reliable, vertically resolved hydrological information remains an ill-posed and computationally demanding inversion problem, especially for multi-decadal, multi-mission data sets.

We present a physics-aware machine-learning framework to enable large-scale, high-resolution hydro-geodetic inversion from long-term InSAR time series. Independent InSAR deformation time series from open SAR missions (ERS-1/2, Envisat, ALOS-1/2, Sentinel-1) are processed using reproducible workflows and harmonized across wavelengths and acquisition geometries to form spatio-temporal deformation volumes. These data are inverted using a 3D Swin Transformer U-Net constrained by elastic and poroelastic forward deformation operators and basin-scale mass conservation. Hydrological models and gravimetric observations are used as structured supervision rather than ground truth, ensuring physical consistency and stability of the inversion.

The live demonstration emphasizes scalable execution on high-performance computing platforms, reliable inversion of massive InSAR data batches, and interpretable aquifer-scale hydrological responses, including characteristic lag behaviour. The framework supports high-resolution hydrological model improvement and provides physically consistent inputs for groundwater-related geohazard assessment, such as subsidence and compaction risk.

Key words: Physics-aware machine learning; 3D Swin Transformer U-Net; InSAR time-series inversion; hydro-geodetic analysis; high-performance computing (HPC)

How to cite: Shafiei Joud, M., Yang, F., and Chawla, S.: Physics-Aware Machine Learning for Large-Scale Hydro-Geodetic Inference from InSAR, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14591, https://doi.org/10.5194/egusphere-egu26-14591, 2026.

09:55–10:05 | EGU26-11511 | ECS | On-site presentation
Valentin Kasburg and Nina Kukowski

High-resolution laser strainmeter measurements in an underground gallery at Moxa Geodynamic Observatory (Thuringia, central Germany) provide detailed records of crustal deformation. Beyond Earth tides, deformation induced by pore-pressure fluctuations produces the largest signals. These observations reveal that the relationship between groundwater transport and crustal strain is temporally fluctuating, highlighting the need for a quantitative approach to systematically characterize potential coupling.

Here, we present a physics-guided, data-driven approach for estimating effective groundwater–strain coupling from multivariate time series of groundwater levels and nanometer-scale strain measurements, based on linear Biot poro-elasticity. The approach incorporates physically guided Biot neurons into an autoregressive neural network architecture; these neurons model horizontal poro-elastic responses of fractured rock driven by groundwater variations. It dynamically adjusts to temporal changes in groundwater levels and the resulting pore-pressure-induced strain. Using orientation-specific laser strainmeter measurements and spatially distributed groundwater levels from boreholes, we estimate Biot coupling in two horizontal directions (North–South and East–West) and derive effective coupling parameters from over a decade of observatory records.
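A minimal linear stand-in for the Biot-neuron idea is a first-order autoregressive strain response to head changes, whose memory and coupling parameters can be recovered by least squares. The coupling is expressed here in nanostrain per metre and all values are synthetic, not the observatory's estimates:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2000
head = np.cumsum(rng.normal(size=n)) * 0.01     # groundwater head [m], random walk
a_true, c_true = 0.9, -3.0                      # AR memory; coupling [nanostrain/m]
strain = np.zeros(n)
for t in range(1, n):                           # linear poro-elastic response with memory
    strain[t] = a_true * strain[t - 1] + c_true * (head[t] - head[t - 1])

# Recover (a, c) by least squares on the same autoregressive form.
A = np.column_stack([strain[:-1], np.diff(head)])
(a_est, c_est), *_ = np.linalg.lstsq(A, strain[1:], rcond=None)
```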

Our results provide insights into the dynamic hydro-mechanical behaviour of the shallow crust and highlight the potential of physics-guided neural architectures to support the interpretation of high-resolution deformation and stress–strain responses in geomechanical studies.

How to cite: Kasburg, V. and Kukowski, N.: Physics-Guided Neural Network Parameter Estimation of Groundwater–Strain Coupling at Moxa Geodynamic Observatory, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-11511, https://doi.org/10.5194/egusphere-egu26-11511, 2026.

10:05–10:15 | EGU26-12469 | ECS | On-site presentation
Gilberto Goracci and Ilias Daras

Terrestrial Water Storage Anomalies (TWSA) derived from satellite gravimetry provide a unique, integrated measure of large-scale hydrological conditions and have been widely used to characterize flood-prone states at basin scale. However, their applicability for flood monitoring and early warning is constrained by the coarse spatial resolution of current gravimetric products (approximately 300 km), which limits the detection of localized storage anomalies relevant for flood initiation and timing.

An unsupervised deep-learning approach based on convolutional neural networks is used to downscale TWS grids from Next-Generation Gravity Mission (NGGM) and ESA–NASA MAGIC constellation simulated products from 3° to 1° at a 5-day temporal frequency. With this new constellation, the NASA–DLR GRACE-C pair will ensure continuity and spatial coverage, while the ESA NGGM will provide increased sensitivity, novel products, and pre-operational capabilities, contributing over 90% of the MAGIC mission performance over the hydrologically relevant areas. The joint ESA–NASA MAGIC constellation will provide 5-daily gravity field products on a global scale with a spatial resolution of approximately 200 km, while also reducing latency and uncertainties with respect to present gravimetry missions.

Simulated satellite products corresponding to the GRACE-C, NGGM and MAGIC mission scenarios are obtained from realistic end-to-end (E2E) closed-loop experiments carried out in the context of ESA studies, while the downscaling task is assigned to a U-Net module integrating ERA5-Land climate variables and the ETOPO2022 Digital Elevation Model. The ground truth solution of the gravity field is given by the ESA ESM 2.0, allowing for a-posteriori validation of the downscaling framework. The downscaled maps present high spatial and temporal correlation with the ground truth, reconstructing fine-scale TWS patterns without losing structural coherence at the native resolution of the satellite product. The pipeline is also extended to real GRACE/GRACE-FO data for real-world applications. The obtained results highlight the potential of unsupervised machine learning approaches for regional hydrological monitoring.

For this purpose, the derived downscaled products are used for the identification of critical hydrological states in the context of extreme event detection using classical threshold-based indicators derived from TWS climatological anomalies. The analysis discusses how improved spatial resolution may help preserve sharper anomaly structures that are otherwise smoothed at coarse scales, with possible benefits for the timely identification of critical states. The early-warning pipeline is evaluated in terms of event detection skill and timing based on the true critical states derived from the ground truth TWS in the closed-loop simulations and then extended to real GRACE data.
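The threshold-based indicator idea can be sketched as standardized anomalies against a 5-daily climatology, with an injected extreme flagged when it exceeds a chosen level. All data and the 3-sigma threshold below are illustrative, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(20 * 73)                            # 20 years of 5-daily samples
bin5 = t % 73                                     # position within the annual cycle
tws = 10 * np.sin(2 * np.pi * t / 73) + rng.normal(size=t.size)
tws[1000:1010] += 25.0                            # injected wet extreme

# Climatology per 5-daily bin, then standardized anomalies against it.
clim_mean = np.array([tws[bin5 == b].mean() for b in range(73)])
clim_std = np.array([tws[bin5 == b].std() for b in range(73)])
anom = (tws - clim_mean[bin5]) / clim_std[bin5]
critical = anom > 3.0                             # threshold-based flood-prone flag
```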

How to cite: Goracci, G. and Daras, I.: Towards hydrological extreme event monitoring using deep learning-downscaled NGGM and MAGIC data, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-12469, https://doi.org/10.5194/egusphere-egu26-12469, 2026.

Posters on site: Wed, 6 May, 16:15–18:00 | Hall X1

The posters scheduled for on-site presentation are only visible in the poster hall in Vienna. If authors uploaded their presentation files, these files are linked from the abstracts below.
Display time: Wed, 6 May, 14:00–18:00
Chairperson: Benedikt Soja
X1.66 | EGU26-2392
Yi Zhang, Yubin Liu, Yunlong Wu, and Qipei Pang

Gravity data are a key foundational dataset, crucial for various research applications, including land subsidence monitoring, geological exploration, and navigational positioning. However, collecting gravity data in certain regions is difficult because of environmental, technical, and economic constraints, resulting in a non-uniform distribution of the observational data. Traditionally, interpolation methods such as Kriging have been widely used to deal with data gaps; however, their predictive accuracy in regions with sparse data still needs improvement. In recent years, the rapid development of artificial intelligence has opened up new opportunities for data prediction. In this study, utilizing the EGM2008 satellite gravity model, we conducted a comprehensive analysis of three machine learning algorithms (random forest, support vector machine, and recurrent neural network) and compared their performance against the traditional Kriging interpolation method. The results indicate that machine learning methods exhibit a marked advantage in gravity data prediction, significantly enhancing predictive accuracy.
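The hold-out evaluation pattern behind such comparisons can be sketched as follows; for brevity this uses inverse-distance weighting against a constant baseline on a synthetic field, not the Kriging or ML models of the study:

```python
import numpy as np

rng = np.random.default_rng(7)
pts = rng.uniform(0, 100, size=(400, 2))             # station coordinates [km]
g = np.sin(pts[:, 0] / 15) + np.cos(pts[:, 1] / 20)  # smooth synthetic anomaly field

train, test = pts[:300], pts[300:]                   # hold out 100 "data-missing" points
g_train, g_test = g[:300], g[300:]

def idw(train, g_train, query, power=2.0):
    """Inverse-distance-weighted prediction at each query point."""
    d = np.linalg.norm(train[None, :, :] - query[:, None, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return (w * g_train).sum(axis=1) / w.sum(axis=1)

pred = idw(train, g_train, test)
mae = np.mean(np.abs(pred - g_test))                 # held-out prediction error
```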

How to cite: Zhang, Y., Liu, Y., Wu, Y., and Pang, Q.: Gravity Predictions in Data-Missing Areas Using Machine Learning Methods, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-2392, https://doi.org/10.5194/egusphere-egu26-2392, 2026.

X1.67 | EGU26-6625 | ECS
Machine Learning Based Prediction of Astronomical Seeing Using All-Sky Camera Images and Cloud Sensor Data
(withdrawn)
Slindile Nyide
X1.68 | EGU26-6644 | ECS
Belhadj Attaouia, Kaddour Chouicha, and Kahlouche Salem

 

Monitoring the deformation behavior of embankment dams is essential to ensure their structural integrity and long-term performance. Conventional geodetic methods, such as precision leveling, offer high spatial accuracy but limited temporal coverage, while in-situ geotechnical sensors, such as settlement meters, provide continuous but localized measurements. This study proposes an improved method of geodetic/geometrical deformation analysis based on strain theory, merging displacement data from leveling and settlement meters to estimate settlements at the dam surface. This approach, based on MMC principles, allows for appropriate visualization and interpretation of the settlement occurring and can be used to detect abnormal settlement behavior. Applied to a real case of an Algerian rockfill dam, the proposed method shows good adequacy in identifying settlement behavior after validation and comparison with subsequent research results. The results highlight the reliability and robustness of the geodetic model, improved by multi-sensor data fusion, in estimating deformations in critical geotechnical infrastructures.

Keywords: geodetic model, strain analysis, multi-sensor data fusion, dam settlement, levelling, settlement meter.
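A minimal sketch of the multi-sensor fusion idea, assuming inverse-variance (weighted least-squares) combination of two co-located settlement estimates; the measurement values and variances below are hypothetical, not values from the dam study.

```python
import numpy as np

# Two estimates of settlement at the same point: sparse precise leveling
# (low noise) and a continuous settlement meter (higher noise).
# Illustrative numbers only.
z_level, var_level = -12.4, 0.5 ** 2   # mm, mm^2
z_meter, var_meter = -11.1, 2.0 ** 2   # mm, mm^2

# Weighted least-squares fusion: inverse-variance weighting.
w = np.array([1 / var_level, 1 / var_meter])
z = np.array([z_level, z_meter])
z_fused = np.sum(w * z) / np.sum(w)
var_fused = 1 / np.sum(w)

print(f"fused settlement: {z_fused:.2f} mm (sigma {np.sqrt(var_fused):.2f} mm)")
```

The fused variance is always smaller than either input variance, which is the basic payoff of combining the two sensor types.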

How to cite: Attaouia, B., Chouicha, K., and Salem, K.: Advanced Geodetic Deformation Analysis Based on Multi-Sensor Data Fusion., EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6644, https://doi.org/10.5194/egusphere-egu26-6644, 2026.

X1.69
|
EGU26-6725
|
ECS
Forecasting GPS displacements with a Long Short-Term Memory (LSTM) network
Jakub Rados and Anna Klos

Precise monitoring of Earth's crustal deformation relies heavily on the analysis of Global Positioning System (GPS) displacement time series recorded by networks of ground-based antennas. This monitoring is possible only for the period during which a GPS station is operational. In this presentation, we undertake one of the first attempts to forecast daily GPS displacements for 13 randomly selected stations in Europe, for which displacements were recorded from 1996 to 2023. We use vertical displacements provided by the Nevada Geodetic Laboratory (NGL) and pre-process them thoroughly. We then apply the Long Short-Term Memory (LSTM) network, a Deep Learning approach, and evaluate its efficiency for long-term forecasting of GPS displacements over a two-year horizon. The performance of the data-driven LSTM network is compared against the standard statistical AutoRegressive Integrated Moving Average (ARIMA) prediction method. We also quantify the impact of data pre-processing strategies on forecast accuracy: gap-filling by linear interpolation, the Piecewise Cubic Hermite Interpolating Polynomial (PCHIP), and a seasonality-based Least Squares (LS) reconstruction are assessed alongside processing the raw data without interpolation. The models were evaluated using Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and the coefficient of determination (R2). Statistical significance was assessed using the Friedman test followed by the Nemenyi post-hoc test. Results indicate that the LSTM network significantly outperforms the ARIMA model in long-term forecasting. The hybrid approach combining LSTM with LS-based interpolation yielded the highest accuracy. Furthermore, degradation analysis reveals that the LSTM model maintains stability and lower error accumulation over the forecast horizon.
These findings indicate that LSTM networks, particularly when combined with seasonality-aware interpolation (LS), offer a significant improvement in forecasting accuracy and stability compared to the standard ARIMA model. The results underscore the substantial potential of deep learning methodologies in geodetic time series analysis, encouraging their further exploration as robust alternatives to statistical approaches.
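The seasonality-based LS reconstruction mentioned above can be sketched with NumPy: a least-squares fit of trend plus annual harmonic to the observed epochs typically beats naive linear interpolation across a long gap in a GNSS-like series. Series length, gap position, signal amplitudes, and noise level are invented for illustration, not the NGL data.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 3650)  # ten years of daily epochs
truth = 0.002 * t + 5.0 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 0.5, t.size)

# Knock out a half-year gap, as often happens in real GNSS records.
gap = (t >= 1500) & (t < 1680)
obs = truth.astype(float).copy()
obs[gap] = np.nan

# Seasonality-aware LS reconstruction: fit trend + annual harmonic to the
# observed epochs, then evaluate the fitted model inside the gap.
def design(tt):
    w = 2 * np.pi * tt / 365.25
    return np.column_stack([np.ones_like(tt, dtype=float), tt, np.sin(w), np.cos(w)])

keep = ~gap
coef, *_ = np.linalg.lstsq(design(t[keep]), obs[keep], rcond=None)
ls_fill = design(t[gap]) @ coef

# Naive linear interpolation across the same gap, for comparison.
lin_fill = np.interp(t[gap], t[keep], obs[keep])

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(f"LS fill RMSE:     {rmse(ls_fill, truth[gap]):.2f}")
print(f"Linear fill RMSE: {rmse(lin_fill, truth[gap]):.2f}")
```

Linear interpolation bridges the gap with a straight line and so misses the annual oscillation entirely, which is why a seasonality-aware fill matters before feeding the series to a forecasting model.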

How to cite: Rados, J. and Klos, A.: Forecasting GPS displacements with a Long Short-Term Memory (LSTM) network, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6725, https://doi.org/10.5194/egusphere-egu26-6725, 2026.

X1.70
|
EGU26-6833
|
ECS
Linking Cryo–Hydrological Mass Redistribution to Regional Sea Level Change through Hybrid Physics–AI Modelling
Fenghe Qiu, Thomas Gruber, and Roland Pail

Regional sea level change is driven by multiple physical processes, resulting in complex dynamics and pronounced spatio–temporal heterogeneity. This study proposes a hybrid framework that integrates physical sea level fingerprints with deep learning to achieve both sea level budget closure and temporal prediction of regional sea level variations. Total sea level change is first decomposed into its steric and barystatic components. By further considering the mass redistribution of ice sheets, glaciers, and terrestrial water storage and their associated sea level fingerprints, the cryo–hydrological contribution (CHC) sea level is introduced to replace the traditional barystatic term. This substitutes direct observations of local mass change with the sea level response to mass redistributions occurring elsewhere, thereby enhancing the physical interpretability of the decomposition. Subsequently, a hybrid model combining a convolutional neural network with a bidirectional long short-term memory network is employed to jointly predict the total, steric, barystatic, and CHC sea level components.

We quantify the impacts of mass variations in the cryo–hydrological domain on sea level changes across 20 oceanic regions, achieving a quantitative projection from mass redistribution to sea level response. Results demonstrate excellent budget closure within the analyzed regions, with mean correlation coefficients exceeding 0.9 and root mean square differences of approximately 15 mm. In the temporal domain, the deep learning network effectively reproduces both long-term trends and seasonal oscillations (correlation ≥ 0.8 in most prediction windows). From a physical perspective, the presented study establishes the regional sea level response to cryo–hydrological mass redistribution and demonstrates strong practical relevance.
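A sketch of how budget closure might be scored, assuming closure is measured by correlating the "observed" total with the sum of the steric and CHC components and by their RMS difference; the synthetic monthly series and noise level below are illustrative only, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)
months = np.arange(240)  # 20 years of monthly sea level anomalies, in mm

# Synthetic components: steric plus CHC (cryo-hydrological contribution),
# each with a trend and an annual cycle.
steric = 30 * np.sin(2 * np.pi * months / 12) + 0.10 * months
chc = 0.15 * months + 5 * np.sin(2 * np.pi * months / 12 + 1.0)

# "Observed" total = sum of components plus observational noise.
total = steric + chc + rng.normal(0, 10, months.size)

# Budget-closure metrics: correlation and RMS difference.
closure = steric + chc
corr = np.corrcoef(total, closure)[0, 1]
rmsd = np.sqrt(np.mean((total - closure) ** 2))
print(f"correlation: {corr:.2f}, RMS difference: {rmsd:.1f} mm")
```

With noise comparable to real budget residuals, the metrics land in the same ballpark as the figures quoted in the abstract, which is the point of scoring closure this way.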

How to cite: Qiu, F., Gruber, T., and Pail, R.: Linking Cryo–Hydrological Mass Redistribution to Regional Sea Level Change through Hybrid Physics–AI Modelling, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6833, https://doi.org/10.5194/egusphere-egu26-6833, 2026.

X1.71
|
EGU26-9475
|
ECS
Observation-Level PCA and Machine Learning for Multipath Mitigation in GNSS PPP with Tropospheric Wet Delay Assessment
Lingke Wang and Hansjörg Kutterer

Precise Point Positioning (PPP) provides high-accuracy positioning using a single GNSS receiver, but its performance is often limited by station-specific multipath errors. This study presents a novel Principal Component Analysis (PCA) and machine learning based Multipath Hemispherical Map (PM-MHM) approach to mitigate multipath effects in PPP mode. Network-wide correlated errors (NWCEs), including satellite orbit and clock errors as well as unmodeled tropospheric wet delays, are first isolated and removed using PCA, allowing the remaining station-specific residuals to be interpreted as multipath. The PM-MHM employs a hybrid machine learning framework that integrates a global model with localized, grid-specific correction models to adaptively capture multipath patterns. In this study, we develop an automatic fitting and training scheme that evaluates multiple algorithms, including Random Forest, Least Squares Boosting, and Extreme Gradient Boosting. The model is trained on six consecutive days of PPP residuals and evaluated on independent datasets, demonstrating superior performance compared with the Trend Surface Analysis-based MHM (T-MHM). For pseudorange and carrier phase observations, PM-MHM achieves mean RMSE reductions of 39.8% and 37.3%, respectively, outperforming T-MHM by 10–15%. Furthermore, PCA decomposition of the Up-component residuals reveals that the low-frequency portion of the first principal component (PC1_low) effectively captures tropospheric zenith wet delay (ZWD) variations. Incorporating PC1_low into the GNSS-derived ZWD improves the correlation with radiometer measurements by about 0.08 and reduces RMSE by 6.32%. These results demonstrate that PM-MHM not only offers high-accuracy multipath mitigation but also enables the physical analysis of residual components beyond multipath, highlighting its potential for improved PPP-based atmospheric monitoring and high-precision positioning applications.
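The PCA separation step can be sketched as removing the first principal component of a station-by-epoch residual matrix, leaving station-specific residuals that can be interpreted as multipath. The simulated network size, loading factors, and noise levels are assumptions for illustration, not the PM-MHM implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n_sta, n_ep = 20, 500

# Network-wide correlated error (e.g. orbit/clock, common ZWD variation),
# seen with a similar loading factor at every station...
common = np.sin(2 * np.pi * np.arange(n_ep) / 100)
loading = rng.normal(1.0, 0.1, n_sta)

# ...plus station-specific "multipath" noise.
multipath = rng.normal(0, 0.3, (n_sta, n_ep))
resid = loading[:, None] * common[None, :] + multipath

# PCA via SVD on the row-centered residual matrix; subtracting the first
# principal component removes the network-wide signal.
centered = resid - resid.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
pc1 = S[0] * np.outer(U[:, 0], Vt[0])
station_specific = centered - pc1

var_before = centered.var()
var_after = station_specific.var()
print(f"variance before/after PC1 removal: {var_before:.3f} / {var_after:.3f}")
```

After PC1 removal, the remaining variance is close to that of the injected station-specific noise, which is the property the PM-MHM approach exploits before fitting its hemispherical correction maps.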

How to cite: Wang, L. and Kutterer, H.: Observation-Level PCA and Machine Learning for Multipath Mitigation in GNSS PPP with Tropospheric Wet Delay Assessment, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-9475, https://doi.org/10.5194/egusphere-egu26-9475, 2026.

X1.72
|
EGU26-18677
|
ECS
Fracture mapping in InSAR data using deep learning
Alejandra Barreto, Nathaniel Wire, Áslaug Birgisdóttir, Adriano Nobile, Halldór Geirsson, and Sigurjón Jónsson

Recent studies of the Reykjanes Peninsula in Iceland have shown that Interferometric Synthetic Aperture Radar (InSAR) data can reveal surface movements across new and pre-existing fractures associated with stress changes during volcanic dyke intrusions. These data have revealed many fractures in areas where optical imagery or field observations are obscured by vegetation, infrastructure, or young lava flows. Mapping active faults and fractures in geologically dynamic regions is essential for assessing tectonic and volcanic hazards, as pre-existing fractures and crustal weaknesses can control magma pathways, dyke propagation, and the location of eruptive activity. However, systematic fracture mapping from wrapped interferograms remains a time-consuming manual task. Deep learning approaches have been applied successfully to mapping faults in seismic data and optical images and, similarly, to detecting glacier crevasses in SAR backscatter images. Here, we investigate the feasibility of automatic fracture mapping directly from wrapped interferograms using deep learning, focusing on the current volcanic unrest on the Reykjanes Peninsula. We address the task as a binary classification problem and implement a convolutional neural network with a U-Net architecture trained with a Dice loss to address strong class imbalance. We initially trained our model on a rather small, highly imbalanced dataset consisting of Sentinel-1 and TerraSAR-X interferograms of the area from September 2023 to February 2024. Despite a relatively modest F1-score (~56%), the model successfully identifies all major fracture movements in the test data and is able to detect features absent from the original labels, providing a fairly robust fracture map that can easily be refined.
These results demonstrate that deep learning can be used to extract meaningful structural information from wrapped interferograms, even with limited data and imperfect training labels, and constitute, to the best of our knowledge, the first application of deep learning to fracture mapping in wrapped interferograms. Current work aims to improve model performance by including the latest fracture dataset of the Reykjanes Peninsula, consisting of fractures mapped on TerraSAR-X interferograms from September 2021 to July 2024. Additionally, ongoing efforts are focused on generating physically realistic synthetic interferograms that capture the complexity of fracturing and fracture reactivation due to dyke emplacement and propagation, as well as other sources of deformation. By addressing the current limitations, this approach has the potential to enable transferable fracture-mapping workflows applicable across diverse tectonic settings and InSAR datasets, contributing to more efficient geohazard monitoring.
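The Dice loss used to handle class imbalance can be sketched in a few lines of NumPy. This is a generic soft-Dice variant; the tiny mask and smoothing constant are illustrative, not the authors' training configuration.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for binary masks; pred holds probabilities in [0, 1]."""
    inter = np.sum(pred * target)
    denom = np.sum(pred) + np.sum(target)
    return 1.0 - (2.0 * inter + eps) / (denom + eps)

# Tiny imbalanced "fracture mask": 4 positive pixels out of 10,000.
target = np.zeros((100, 100))
target[50, 48:52] = 1.0

perfect = target.copy()
all_background = np.zeros_like(target)  # trivially right 99.96% of the time

print(f"Dice loss, perfect prediction: {dice_loss(perfect, target):.4f}")
print(f"Dice loss, all-background:     {dice_loss(all_background, target):.4f}")
```

Unlike pixel-wise cross-entropy, predicting all background scores almost the worst possible loss here, which is why Dice-type losses suit sparse fracture labels.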

How to cite: Barreto, A., Wire, N., Birgisdóttir, Á., Nobile, A., Geirsson, H., and Jónsson, S.: Fracture mapping in InSAR data using deep learning, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18677, https://doi.org/10.5194/egusphere-egu26-18677, 2026.

X1.73
|
EGU26-20001
|
ECS
GNSS Reflectometry AI-based Models for Multivariate Earth Surface Monitoring and Hazard Response
Milad Asgarimehr, Daixin Zhao, Tianqi Xiao, Hamed Izadgoshasb, Jens Wickert, and Ridvan Kuzu

With growing concerns about climate change, increasing natural hazards, and extreme weather events, monitoring Earth’s surface parameters has become a critical area of interest for both the scientific community and society. Global Navigation Satellite System Reflectometry (GNSS-R) is an innovative and low-cost technique that exploits existing Global Navigation Satellite System (GNSS) signals after reflection from Earth’s surface. GNSS-R constellations offer unique observations with unprecedented data volume, temporal resolution, and spatial coverage across the entire globe under all-weather conditions. As data volumes continue to accumulate, the application of Artificial Intelligence (AI) is expanding. However, current AI models rely heavily on labelled data, feature engineering, and extensive fine-tuning, leading to high computational and labor costs. To address these issues, we propose the project EcoGEM: Energy-efficient Multimodal GNSS Reflectometry Models for Generalist Earth Surface Monitoring and Hazard Response.

EcoGEM develops cutting-edge Earth observation foundation models using GNSS-R measurements and integrates them with other remote sensing data. It pioneers the first general-purpose GNSS-R foundation models and curated multimodal datasets to support climate science, hazard detection, and environmental monitoring. Unlike task-specific methods, the proposed models adapt across applications such as soil moisture, vegetation water content, and ocean wind speed. Uniquely, EcoGEM emphasizes energy-efficient AI through model pruning, knowledge distillation, and dynamic architectures, enabling deployment on edge devices and small satellite platforms. This collaborative project of GFZ and DLR advances sustainable AI and promotes novel and open-access tools for Earth scientists, environmental policymakers, and global users.

How to cite: Asgarimehr, M., Zhao, D., Xiao, T., Izadgoshasb, H., Wickert, J., and Kuzu, R.: GNSS Reflectometry AI-based Models for Multivariate Earth Surface Monitoring and Hazard Response, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-20001, https://doi.org/10.5194/egusphere-egu26-20001, 2026.

Posters virtual: Thu, 7 May, 14:00–18:00 | vPoster spot 3

The posters scheduled for virtual presentation are given in a hybrid format for on-site presentation, followed by virtual discussions on Zoom. Attendees are asked to meet the authors during the scheduled presentation & discussion time for live video chats; onsite attendees are invited to visit the virtual poster sessions at the vPoster spots (equal to PICO spots). If authors uploaded their presentation files, these files are also linked from the abstracts below. The button to access the Zoom meeting appears just before the time block starts.
Discussion time: Thu, 7 May, 16:15–18:00
Display time: Thu, 7 May, 14:00–18:00
Chairpersons: Roelof Rietbroek, Holly Stein, Laetitia Le Pourhiet

EGU26-7178 | ECS | Posters virtual | VPS25

Comparative Analysis of Machine Learning and Geostatistical Approaches for GNSS-InSAR Integration: A Case Study in Anatolia 

Müfide Elvanlı and Murat Durmaz
Thu, 07 May, 14:00–14:03 (CEST)   vPoster spot 3

The main focus of the study is to calibrate Sentinel-1 InSAR Line-of-Sight (LOS) velocities along a ~700 km North-South transect extending from the Black Sea coast (Kastamonu-Samsun) to the Mediterranean (Mersin-Gaziantep). This transect encompasses diverse tectonic regimes, including the North Anatolian Fault Zone, the Central Anatolian Block, and the junction of the East Anatolian Fault Zone. This complex structure of the transect requires detailed analysis of the GNSS-InSAR calibration procedure including validation. 

Across the study region, processed LiCSAR products are integrated with 3D velocities derived from a continuous local CORS network (21 stations) and an extensive campaign-based GNSS network (200 stations). For calibration, GNSS velocities are first projected into the satellite LOS geometry using LOS vectors derived from coherent InSAR pixels within a 1-km radius. The velocity bias (ΔVlos) is calculated at the continuous GNSS locations, and the resulting correction surface is propagated independently with several conventional and machine learning techniques, including Kriging, Weighted Least Squares (WLS) based quadratic surface fitting, Thin Plate Splines (TPS), and Radial Basis Functions (Gaussian, Multiquadric, and Inverse Multiquadric). To address specific error sources, the contributions of topography-correlated atmospheric delays and local spatial trends are also analyzed using Geographically Weighted Regression (GWR) and Random Forest regression. Cross-validation is applied to assess the quality of each model individually, with spatial random sampling and plate boundaries also taken into account. This study presents preliminary results towards a validated basis for generating up-to-date velocity fields over Türkiye.
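The projection step can be sketched as a dot product of the 3D GNSS velocity with a unit line-of-sight vector, followed by forming the calibration bias against the InSAR estimate. The LOS components, velocities, and the nearby InSAR value below are hypothetical, not LiCSAR or campaign values.

```python
import numpy as np

# Unit line-of-sight vector (east, north, up) pointing from ground to
# satellite; illustrative numbers for a steep-incidence pass, with the sign
# convention that positive LOS velocity means motion toward the satellite.
los = np.array([-0.61, -0.12, 0.78])
los = los / np.linalg.norm(los)

# 3D GNSS velocity (east, north, up) at one station, in mm/yr.
v_gnss = np.array([12.0, 4.0, -1.5])

# Project into LOS and form the calibration bias against the InSAR estimate.
v_gnss_los = v_gnss @ los
v_insar_los = -9.1  # hypothetical mean LOS velocity of nearby coherent pixels
delta_v_los = v_insar_los - v_gnss_los

print(f"GNSS LOS velocity: {v_gnss_los:.2f} mm/yr, bias: {delta_v_los:.2f} mm/yr")
```

Repeating this at every GNSS location yields the ΔVlos samples that the Kriging, TPS, RBF, and regression models then spread across the transect.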

How to cite: Elvanlı, M. and Durmaz, M.: Comparative Analysis of Machine Learning and Geostatistical Approaches for GNSS-InSAR Integration: A Case Study in Anatolia, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7178, https://doi.org/10.5194/egusphere-egu26-7178, 2026.
