VPS22 | GI/ESSI/NP
Co-organized by ESSI/GI/NP
Conveners: Davide Faranda, Pietro Tizzani, Kirsten Elger, Christof Lorenz

Posters virtual | Wed, 06 May, 14:00–15:45 (CEST)
vPoster spot 1b | Wed, 06 May, 16:15–18:00 (CEST)
vPoster Discussion | Wed, 14:00

The posters scheduled for virtual presentation are given in a hybrid format for on-site presentation, followed by virtual discussions on Zoom. Attendees are asked to meet the authors during the scheduled presentation & discussion time for live video chats; onsite attendees are invited to visit the virtual poster sessions at the vPoster spots (equal to PICO spots). If authors uploaded their presentation files, these files are also linked from the abstracts below. The button to access the Zoom meeting appears just before the time block starts.
Discussion time: Wed, 6 May, 16:15–18:00
Display time: Wed, 6 May, 14:00–18:00
14:00–14:03
|
EGU26-3619
|
Origin: ESSI1.11
|
ECS
Rodrigo Uribe-Ventura, Willem Viveen, Ferdinand Pineda-Ancco, and César Beltrán-Castañon

Landslides claim thousands of lives and cause billions in economic losses annually, with impacts disproportionately concentrated in developing regions across Asia, Africa, and Latin America. Paradoxically, the current trajectory of artificial intelligence in geohazard detection—characterized by billion-parameter foundation models requiring substantial computational infrastructure—risks widening, rather than closing, the gap between technological capability and operational deployment where it is needed most. We argue that this paradigm requires fundamental reconsideration, proposing domain adaptation on strategically curated geological datasets as a more equitable and effective path toward globally accessible landslide detection systems.

Foundation models like the Segment Anything Model (SAM), pre-trained on over one billion masks, demand computational resources—312 million parameters, 1,376 GFLOPs per inference, specialized GPU infrastructure—that remain inaccessible to disaster management agencies in resource-constrained regions. Beyond these practical constraints, we contend that the apparent generalization capabilities of such models reflect pattern coverage in training data rather than emergent understanding transferable to geological contexts. The SA-1B dataset, despite its scale, was not curated to systematically represent landslide morphological diversity, creating coverage gaps for rare failure types, unusual triggering mechanisms, and underrepresented terrain configurations precisely where robust detection is operationally critical.

Given these limitations, we propose that effective generalization for geological applications emerges not from architectural scale but from strategic coverage of domain-relevant pattern space. We developed and tested GeoNeXt, a lightweight architecture that exploits the hierarchical transferability of geological features through targeted domain adaptation. Low-level representations (edges, spectral gradients) transfer universally across sensors and terrain; mid-level patterns (drainage networks, slope morphology) require adaptation to local expressions; and high-level configurations (failure geometries, trigger signatures) demand targeted training. Our results showed that this approach outperformed SAM-based methods across three independent benchmarks while requiring 10× fewer parameters (32.2M versus 312.5M) and a 62% reduction in computational cost. Zero-shot transferability to geographically distinct test sites (74–78% F1 score) emerged from the training dataset's systematic morphological diversity rather than parameter count. Inference at 10.6 frames per second on standard hardware, versus 3.0 frames per second for foundation model alternatives, transforms theoretical capability into deployable technology for resource-constrained environments. These findings suggest that strategic domain adaptation, rather than architectural scale, offers the most viable path toward operational landslide detection in vulnerable regions.
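The efficiency figures quoted above can be cross-checked with simple arithmetic (values taken from the abstract; this is a sanity sketch, not the model itself):

```python
# Consistency check of the reported efficiency gains (figures from the abstract).

sam_params_m = 312.5      # SAM parameters, millions
geonext_params_m = 32.2   # GeoNeXt parameters, millions
sam_gflops = 1376.0       # SAM cost per inference, GFLOPs
compute_reduction = 0.62  # reported reduction in computational cost

param_ratio = sam_params_m / geonext_params_m
print(f"parameter ratio: {param_ratio:.1f}x")      # roughly 10x, as stated

geonext_gflops = sam_gflops * (1.0 - compute_reduction)
print(f"implied GeoNeXt cost: {geonext_gflops:.0f} GFLOPs")

speedup = 10.6 / 3.0      # frames per second: GeoNeXt vs. foundation model
print(f"throughput speedup: {speedup:.1f}x")
```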

How to cite: Uribe-Ventura, R., Viveen, W., Pineda-Ancco, F., and Beltrán-Castañon, C.: Democratizing landslide detection for vulnerable regions beyond resource-intensive foundation models, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3619, https://doi.org/10.5194/egusphere-egu26-3619, 2026.

14:03–14:06
|
EGU26-3080
|
Origin: ESSI1.4
|
ECS
Chen Li and Baoyu Du

Hyperspectral image (HSI) classification often struggles with feature interference across different scales and the inherent challenges of data imbalance and sample scarcity. While deep learning models have significantly advanced the field, traditional single-branch architectures often suffer from scale-related noise, where features from different receptive fields interfere with one another. To address this, we propose the Multibranch Adaptive Feature Fusion Network (MBAFFN). Our approach utilizes three parallel branches to independently extract scale-specific features, effectively decoupling the multiscale information to prevent interference. This architecture is enhanced by two specialized modules: Global Detail Attention (GDA) for capturing broad contextual dependencies and Distance Suppression Attention (DSA) for refining local pixel-level discrimination. Furthermore, a pixel-wise adaptive fusion mechanism is introduced to dynamically weigh and integrate these features, prioritizing the most relevant scales for final classification. The performance of MBAFFN was validated on four benchmark datasets: Indian Pines (IP), Pavia University (PU), Longkou (LK), and Hanchuan (HC). Compared to current state-of-the-art methods, our model improved Overall Accuracy (OA) by 0.91%, 1.71%, 0.86%, and 3.16% on the IP, PU, LK, and HC datasets, respectively. The significant improvement on the HC and PU datasets underscores the model’s robustness in scenarios with limited training samples and complex class distributions. These results, supported by detailed ablation studies, demonstrate that adaptive fusion and scale-specific branching are effective strategies for mitigating feature interference in hyperspectral analysis.
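The pixel-wise adaptive fusion idea can be sketched in a few lines. This is a toy scalar version with invented gate scores; in MBAFFN the gate values are learned per pixel and the features are multi-channel:

```python
import math

def adaptive_fuse(branch_feats, gate_logits):
    """Pixel-wise adaptive fusion: a softmax over per-branch gate logits
    yields weights that prioritize the most relevant scale for this pixel.
    branch_feats: per-branch feature values for one pixel (toy scalars).
    gate_logits:  per-branch scores (learned in the real model; fixed
                  hypothetical values here)."""
    m = max(gate_logits)                       # stabilize the softmax
    exps = [math.exp(g - m) for g in gate_logits]
    total = sum(exps)
    weights = [e / total for e in exps]
    fused = sum(w * f for w, f in zip(weights, branch_feats))
    return fused, weights

# Three scale-specific branches; the gate favors the first (fine-scale) branch.
fused, w = adaptive_fuse([0.9, 0.4, 0.1], [2.0, 0.5, -1.0])
print(w)      # weights sum to 1; first branch dominates
print(fused)
```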

How to cite: Li, C. and Du, B.: Multibranch Adaptive Feature Fusion for Hyperspectral Image Classification, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3080, https://doi.org/10.5194/egusphere-egu26-3080, 2026.

14:06–14:09
|
EGU26-11945
|
Origin: ESSI1.18
Yang Chen, Yian Yu, Lulu Zhao, Kathryn Whitman, Ward Manchester, and Tamas Gombosi

Solar phenomena such as flares, coronal mass ejections (CMEs), and solar energetic particles (SEPs) are actively monitored and assessed for space weather hazards. In recent years, machine learning has demonstrated considerable success in solar flare forecasting. Accurate SEP forecasting remains challenging in space weather monitoring due to the complexity of SEP event origins and propagation. We introduce SEPNET, an innovative multi-task neural network that integrates forecasting of solar flares and CME summary statistics into the SEP prediction model, leveraging their shared dependence on space-weather HMI active region patches (SHARP) magnetic field parameters. SEPNET incorporates long short-term memory and transformer architectures to capture contextual dependencies. The performance of SEPNET is evaluated on the state-of-the-art SEPVAL SEP dataset and compared with classical machine learning methods and current state-of-the-art pre-eruptive SEP prediction models. The results show that SEPNET achieves higher detection rates and skill scores while being suitable for real-time space weather alert operations.

How to cite: Chen, Y., Yu, Y., Zhao, L., Whitman, K., Manchester, W., and Gombosi, T.: SEPNET: a multi-task deep learning framework for SEP forecasting, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-11945, https://doi.org/10.5194/egusphere-egu26-11945, 2026.

14:09–14:12
|
EGU26-6232
|
Origin: ESSI2.2
Shaomeng Li, Allison Baker, and Lulin Xue

Many geoscientific datasets, such as those produced by climate and weather models, are stored in the NetCDF file format. These datasets are typically very large and often strain institutional data storage resources. While lossy compression methods for scientific data have become more widely studied and adopted in recent years, most advanced lossy approaches do not work easily and/or transparently with NetCDF files. For example, they may require a file format conversion, or they may not work correctly with “missing values” or “fill values” that are often present in model outputs. While lossy quantization approaches such as BitRound and Granular BitRound are supported natively by NetCDF and are quite easy to use, they are generally not able to reduce the data size as much as more advanced compressors such as SPERR, ZFP, or SZ3 (for a fixed error metric).
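As an illustration of the built-in quantization idea mentioned above, BitRound keeps only a fixed number of mantissa bits so the trailing bits compress well. A minimal scalar sketch, assuming float32 semantics (the real implementation operates on whole arrays inside the NetCDF library):

```python
import struct

def bitround(x: float, keep_bits: int) -> float:
    """Quantize a value by zeroing all but `keep_bits` explicit mantissa
    bits of its float32 representation, with round-to-nearest — the idea
    behind NetCDF's BitRound quantization (simplified, scalar-only sketch)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    drop = 23 - keep_bits                  # float32 has 23 explicit mantissa bits
    if drop <= 0:
        return x
    half = 1 << (drop - 1)                 # add half an ulp for round-to-nearest
    bits = (bits + half) & ~((1 << drop) - 1)
    return struct.unpack(">f", struct.pack(">I", bits))[0]

# Keeping 7 mantissa bits bounds the relative error near 2**-8.
x = 3.14159265
print(bitround(x, 7))
```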

We are particularly interested in reducing the data size of the CONUS404 dataset. CONUS404 is a unique, publicly available high-resolution hydro-climate dataset produced by Weather Research and Forecasting (WRF) Model simulations covering the CONtiguous United States (CONUS) for 40 years at 4-km resolution (a collaboration between the NSF National Center for Atmospheric Research and the U.S. Geological Survey Water Mission Area).

Here, we investigate one advanced lossy compressor, SPERR [1], together with its plugin for NetCDF files, H5Z-SPERR [2], in a Python-based workflow to compress and analyze CONUS404 data. SPERR is attractive due to its support for quality control in terms of both maximum point-wise error (PWE) and peak signal-to-noise ratio (PSNR), which makes it easy to experiment with storage–quality tradeoffs. Further, given a target quality metric, previous work has shown that SPERR typically produces the smallest compressed file size among advanced compressors. It leverages the HDF5 dynamic plugin mechanism, allowing users to stay in the NetCDF ecosystem with minimal to no change to existing analysis workflows wherever a typical NetCDF file can be read. And, importantly for our work, the SPERR plugin supports efficient masking of “missing values,” which are common in climate and weather model output. This support enables compression of many variables that are not naturally handled by other advanced compressors that rely on HDF5 plugins. Further, because H5Z-SPERR handles missing values directly, they can be stored in a much more compact format (and are restored during decompression), further improving compression efficiency. (Note that the built-in NetCDF quantization approaches can also work with missing values.)

Our experimentation demonstrates the benefit of enabling advanced lossy (de)compression in the NetCDF ecosystem: adoption friction is kept at the minimum with little change to workflows, while storage requirements are greatly reduced.

 

[1] https://github.com/NCAR/SPERR

[2] https://github.com/NCAR/H5Z-SPERR

How to cite: Li, S., Baker, A., and Xue, L.: Application of advanced lossy compression in the NetCDF ecosystem for CONUS404 data, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6232, https://doi.org/10.5194/egusphere-egu26-6232, 2026.

14:12–14:15
|
EGU26-6022
|
Origin: ESSI2.7
|
ECS
Harold Buitrago, Juan Contreras, and Florian Neumann

Numerical modeling is a fundamental tool for understanding physically driven processes in geosciences. In multiparametric settings, the Finite Element Method is widely used because it can accommodate irregular geometries and complex boundary conditions. However, this advantage critically depends on the quality of the computational mesh, which must faithfully represent geological features such as faults, stratigraphic interfaces, and wells. In practice, mesh generation remains a major bottleneck, requiring specialized expertise and significant manual effort. We present Geo2Gmsh, an automated, lightweight workflow built on Gmsh (Geuzaine & Remacle, 2009), that generates geological meshes directly from simple text‐based descriptions of topological elements, including surfaces, lines, and points. These elements correspond to geologically meaningful features, allowing users to define faults, horizons, wells, and domain boundaries in a transparent, reproducible, and solver‐independent way. The workflow is demonstrated using two contrasting case studies: (1) Ringvent, an active sill‐driven hydrothermal system in the Guaymas Basin, and (2) the Eastern Llanos Basin, a foreland basin in eastern Colombia. To evaluate solver compatibility, we solved the heat equation in SfePy (https://sfepy.org/doc-devel/index.html) using the Eastern Llanos Basin model as the computational domain. Although the simulation is illustrative and not calibrated to observations, it confirms that meshes produced by Geo2Gmsh can be readily incorporated into numerical solvers. By explicitly embedding wells, faults, and geological interfaces in the mesh, Geo2Gmsh enables boundary conditions to be applied directly to physically meaningful features and allows model outputs to be extracted along them, simplifying both model setup and post‐processing. Meshes can be exported in standard formats (e.g., VTK, MSH, and Exodus via meshio), ensuring broad interoperability. 
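The text-based description idea can be sketched as follows; the input format below is invented for illustration only (Geo2Gmsh's actual syntax may differ), but it shows how named topological elements map onto geologically meaningful features:

```python
# Hypothetical sketch of a text-based geological description: points and
# lines carry user-chosen names, so a well or fault can be referenced by
# name when applying boundary conditions later. Format invented here.

description = """\
point  well_head   1000.0  2000.0  0.0
point  well_bottom 1000.0  2000.0 -1500.0
line   well        well_head well_bottom
"""

def parse(text):
    """Parse points (name -> xyz tuple) and lines (name -> endpoint names)."""
    points, lines = {}, {}
    for raw in text.splitlines():
        kind, name, *rest = raw.split()
        if kind == "point":
            points[name] = tuple(float(v) for v in rest)
        elif kind == "line":
            lines[name] = tuple(rest)
    return points, lines

pts, lns = parse(description)
print(pts["well_bottom"])   # (1000.0, 2000.0, -1500.0)
print(lns["well"])          # ('well_head', 'well_bottom')
```

In a full workflow, each named entity would then be turned into a Gmsh point/line entity and embedded in the surrounding volume so the mesh conforms to it.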
Overall, Geo2Gmsh provides a lightweight, scalable, and reproducible workflow that dramatically lowers the technical barrier to geological mesh generation. This contribution establishes a practical foundation for reproducible, open-source numerical modeling in geosciences, facilitating the integration of geological knowledge into high-fidelity computational simulations.

How to cite: Buitrago, H., Contreras, J., and Neumann, F.: Geo2Gmsh: A Scalable Workflow for Automated Mesh Generation of Geological Models Using Gmsh, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6022, https://doi.org/10.5194/egusphere-egu26-6022, 2026.

14:15–14:18
|
EGU26-7344
|
Origin: ESSI1.1
Helen Buttery

We have investigated the initialization of the Pangu weather model, initializing it both with ERA5 data (on which it was trained) and with the Met Office’s Global UM model data. There are many consistent local biases at ground level between these two sets of initial conditions. These geographically local biases are not dissipated by the Pangu model as it steps forward; instead they remain geographically fixed and gradually decrease with lead time. While the Pangu model initialized with UM initial conditions remains further from the ERA5 truth than the ERA5-initialized Pangu model at all timesteps, it initially moves towards the ERA5 truth, as the geographically static differences in the initial conditions decrease, before moving further away as differences in large-scale systems begin to dominate.

We also investigated the difference between running the Pangu model with 24-hour and with 6-hour timesteps; the 6-hour timesteps were better able to reduce the geographically static initial differences than the 24-hour timesteps.
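The notion of a geographically static difference can be made concrete with a toy diagnostic (invented numbers; the static component at each grid point is the mean over lead times, and the residual is what evolves):

```python
# Split forecast-minus-ERA5 difference fields into a geographically static
# component and an evolving residual. Toy 2x2 grid, three lead times.

diffs = [  # difference fields at successive lead times (invented values)
    [[1.0, -0.5], [0.2, 0.0]],
    [[0.8, -0.4], [0.3, 0.1]],
    [[0.6, -0.3], [0.1, -0.1]],
]

n = len(diffs)
# Static component: per-gridpoint mean over lead times.
static = [[sum(d[i][j] for d in diffs) / n for j in range(2)] for i in range(2)]
# Residual: what remains after removing the fixed local pattern.
residual = [[[d[i][j] - static[i][j] for j in range(2)] for i in range(2)]
            for d in diffs]

print(static)
```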

If time permits, a similar analysis will be made of the FastNet and GraphCast models.

How to cite: Buttery, H.: Investigations into the Reaction of the Pangu ML Weather Model to Different Initial Conditions, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7344, https://doi.org/10.5194/egusphere-egu26-7344, 2026.

14:18–14:21
|
EGU26-16232
|
Origin: ESSI1.1
Vishnu Pm and Balaji Chakravarthy

Accurate high-resolution wind field prediction is essential for wind resource assessment, renewable energy planning, and regional weather analysis. Although Numerical Weather Prediction (NWP) models such as the Weather Research and Forecasting (WRF) model provide physically consistent wind forecasts, their outputs often suffer from systematic biases arising from uncertainties in surface characteristics, simplified physical parameterizations, and resolution limitations. Furthermore, increasing model resolution to the kilometer scale significantly raises computational cost. To address these challenges, this study presents a machine learning–based framework for bias correction of WRF-simulated wind fields over the Southern Tamil Nadu region, with particular focus on the Muppandal wind farm area.

An extensive validation of WRF configurations was first performed using multiple physics scheme combinations and domain setups, evaluated against ERA5 reanalysis data. The optimal configuration was identified and used to generate three years (2023–2025) of wind simulations at 3 km × 3 km resolution. Significant biases were observed in the raw WRF outputs, motivating the application of an Artificial Neural Network (ANN)-based bias correction approach. A Random Forest algorithm was employed for feature selection, followed by Principal Component Analysis (PCA) to reduce dimensionality while retaining 95% of the variance. A feedforward neural network with multiple hidden layers was trained to correct the U10 and V10 wind components, with the hyperbolic tangent activation function yielding the best performance. The bias-corrected wind fields exhibited substantial improvement in both means and extremes, achieving low error metrics and strong correlation with ERA5 data.

The results demonstrate that combining physically based NWP simulations with machine learning–driven bias correction provides an accurate and computationally efficient approach for generating high-resolution wind fields. This hybrid framework offers significant potential for wind energy assessment and localized meteorological applications in data-sparse regions.
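The correction step can be sketched as a plain feedforward pass with tanh hidden units. The weights below are hypothetical toy values, not the trained network; the point is the shape of the computation (features in, corrected U10/V10 out):

```python
import math

def forward(x, layers):
    """Forward pass of a small feedforward net with tanh hidden activations
    and a linear output layer, as in the bias-correction setup above.
    layers: list of (weight_matrix, bias_vector) pairs."""
    for W, b in layers[:-1]:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)
             for row, bi in zip(W, b)]
    W, b = layers[-1]          # linear output: corrected (U10, V10)
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

# Toy: 3 inputs (e.g. raw U10, V10, one PCA component) -> 4 hidden -> 2 outputs.
hidden = ([[0.1] * 3] * 4, [0.0] * 4)
out = ([[0.5, 0.1, -0.2, 0.3], [0.0, 0.4, 0.2, -0.1]], [0.05, -0.02])
corrected = forward([2.0, -1.0, 0.3], [hidden, out])
print(corrected)   # corrected (U10, V10) pair
```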

How to cite: Pm, V. and Chakravarthy, B.: Bias Correction of Numerical Weather Prediction Wind Fields in Southern Tamil Nadu Region Using Machine Learning Techniques, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-16232, https://doi.org/10.5194/egusphere-egu26-16232, 2026.

14:21–14:24
|
EGU26-3363
|
Origin: GI4.5
|
ECS
Odysseas Gkountaras, Chryssoula Georgakis, Thiseas Velissaridis, and Margarita Niki Assimakopoulos

Characterizing the thermal state of urban surfaces is fundamental for mitigating the impacts of the Surface Urban Heat Island (SUHI) effect. This study presents an intensive in-situ thermal infrared monitoring campaign in the high-density urban core of Athens, Greece. Utilizing a calibrated handheld TIR sensor (7.5–14 μm), surface temperatures were recorded across strategic locations in the center of Athens during hot weather conditions. The methodology emphasizes the critical role of material-specific parameterization, where thermographic data were post-processed to account for emissivity (ε) variations and surface temperature, ensuring high-fidelity measurements.

Experimental results reveal extreme thermal stress, with maximum surface temperatures reaching 56.0°C on conventional paving materials, while the mean ambient air temperature was close to 35.0°C during peak solar hours (13:00–18:00 LT). Spatial analysis and visualization of the results were performed using QGIS, correlating thermal signatures with urban geometry, shading conditions, and vegetation density. The aim of this study was to highlight the significant cooling potential of specific urban materials and nature-based solutions.
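The emissivity correction step described above typically inverts the radiative balance L = εB(Ts) + (1−ε)L↓. A single-wavelength sketch follows; treating the broadband 7.5–14 μm measurement at one effective wavelength is an idealization, and the chosen wavelength is an assumption:

```python
import math

C1 = 1.191042e8    # W um^4 m^-2 sr^-1 (first radiation constant, radiance form)
C2 = 1.4387752e4   # um K (second radiation constant)
LAM = 10.5         # assumed effective band wavelength, um

def planck(T):
    """Spectral radiance of a blackbody at temperature T (K) at LAM."""
    return C1 / (LAM**5 * (math.exp(C2 / (LAM * T)) - 1.0))

def inv_planck(L):
    """Invert Planck's law: temperature giving radiance L at LAM."""
    return C2 / (LAM * math.log(C1 / (LAM**5 * L) + 1.0))

def surface_temp(L_meas, emissivity, L_down):
    """Emissivity-corrected surface temperature: remove the reflected
    downwelling term, divide by emissivity, invert Planck."""
    L_surf = (L_meas - (1.0 - emissivity) * L_down) / emissivity
    return inv_planck(L_surf)

# A surface at 329 K (~56 C) with e = 0.95 plus reflected sky radiance:
L = 0.95 * planck(329.0) + 0.05 * planck(280.0)
print(surface_temp(L, 0.95, planck(280.0)))   # recovers ~329 K
```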

How to cite: Gkountaras, O., Georgakis, C., Velissaridis, T., and Assimakopoulos, M. N.: In-situ Thermal Infrared Monitoring in an Urban Area: A Case Study of Micro-scale Thermal Transitions during Hot Weather Conditions in Athens, Greece., EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3363, https://doi.org/10.5194/egusphere-egu26-3363, 2026.

14:24–14:27
|
EGU26-13611
|
Origin: GI4.5
|
ECS
Francesco Rossi, Raffaele Casa, Luca Marrone, Saham Mirzaei, Simone Pascucci, and Stefano Pignatti

Quantifying soil properties such as Soil Organic Carbon (SOC), texture, and Calcium Carbonate (CaCO3) is essential for assessing soil health and ensuring food security. While Visible, Near Infrared, and Short Wave Infrared (VSWIR) remote sensing is a standard operational tool, the Longwave Infrared (LWIR, 8–14 μm) offers complementary information on mineralogy and moisture that is not yet fully explored for this specific application. This study investigates the synergy between VSWIR and LWIR data that will become available with future hyperspectral satellite missions: among them, the European Space Agency's Copernicus Expansion missions, which will add the Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) and the Land Surface Temperature Monitoring (LSTM) mission to the Earth observation capacity, alongside NASA's Surface Biology and Geology (SBG and SBG-TIR) missions.

The research focuses on Jolanda di Savoia (Italy), an agricultural landscape resulting from land reclamation projects in the late 19th century. Ground truth data were collected during a field campaign on June 22, 2023, providing 59 topsoil samples that were analysed for SOC, texture, and CaCO3. The field campaign coincided with an airborne survey carried out with the LWIR Hyperspectral Thermal Emission Spectrometer (HyTES) sensor. HyTES captured data across 256 spectral bands from 7.5 to 11.5 μm, providing a pixel size of approximately 2.3 meters.

To evaluate the multi-frequency potential, we developed a workflow combining a soil composite from PRISMA (VSWIR) satellite time-series with simulated SBG-TIR (LWIR) data. The SBG-TIR simulation chain included as input a surface emissivity map derived from the airborne HyTES survey. To cover the LWIR wide spectral range (up to 12 µm), the emissivity spectrum was extended using an autoencoder neural network procedure trained on the ECOSTRESS Soil Spectral Library. Top-Of-Atmosphere (TOA) radiance was then simulated using the Radiative Transfer for the TIROS Operational Vertical Sounder (RTTOV-14) model, incorporating the optical depth and cloud/aerosol optical properties coefficients specific to SBG-TIR. Furthermore, these simulated data were atmospherically corrected to produce the target satellite emissivity products according to the TES algorithm.

Soil property prediction models were developed using supervised machine learning algorithms. We benchmarked two scenarios: 1) the proposed combined approach using PRISMA and the simulated SBG-TIR L2 emissivity product; and 2) a VSWIR-only approach using PRISMA. A quantitative assessment by 10-fold cross-validation using common literature metrics (R², RMSE, RPD) highlighted the benefits of the multi-sensor approach. For SOC retrieval, the standalone VSWIR (PRISMA) model yielded an R² of 0.55 (RPD = 1.5), while the synergistic integration of PRISMA with simulated SBG-TIR data improved the retrieval accuracy, reaching an R² of 0.65 and increasing the RPD to 1.69. This work indicates that, on the agricultural test site of Jolanda di Savoia, the combined use of the VSWIR and LWIR spectral ranges slightly improves SOC retrieval. Further validation across diverse agricultural scenarios will be essential to test the real advantage of combining next-generation imaging spectroscopy missions.
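The metrics cited above follow standard definitions; a small sketch with toy data (RPD taken here as the standard deviation of the observations over the RMSE, one common convention):

```python
import math

def metrics(obs, pred):
    """Compute R², RMSE, and RPD for observed vs. predicted values.
    RPD = SD(obs) / RMSE (population SD; conventions vary slightly)."""
    n = len(obs)
    mean_o = sum(obs) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    rmse = math.sqrt(ss_res / n)
    sd = math.sqrt(ss_tot / n)
    return 1.0 - ss_res / ss_tot, rmse, sd / rmse

# Toy SOC-like values (invented, not the study's data):
obs = [1.2, 2.0, 2.9, 3.8, 5.1]
pred = [1.0, 2.2, 3.0, 4.0, 4.8]
r2, rmse, rpd = metrics(obs, pred)
print(round(r2, 3), round(rmse, 3), round(rpd, 2))
```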

How to cite: Rossi, F., Casa, R., Marrone, L., Mirzaei, S., Pascucci, S., and Pignatti, S.: Evaluating the combined potential of VSWIR and Thermal Infrared data for soil characterisation., EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-13611, https://doi.org/10.5194/egusphere-egu26-13611, 2026.

Coffee break