ESSI2.8 | Advancing Geoscience Research and Visualization Through Robust and User-Friendly Software: Case Studies, Applications, and Best Practices
EDI PICO
Convener: Kostas Leptokaropoulos | Co-conveners: Stefania Gentili, Angeliki Adamaki, Monika Staszek, Raquel Felix, Tobias Kerzenmacher, Christof Lorenz
PICO
Mon, 04 May, 08:30–12:30 (CEST), 16:15–18:00 (CEST)
 
PICO spot 1b
Mon, 08:30
Across the geosciences, from seismology and geophysics to hydrology, environmental sciences, and beyond, research increasingly depends on sophisticated software for data analysis, modelling, and interpretation. The rapid development and diversification of these tools create exciting opportunities, but also present challenges in maintaining code quality, ensuring ease of use and comprehensive documentation, achieving sustainability and reproducibility, and enabling seamless interoperability across datasets and disciplines. Addressing these challenges is essential for producing reliable, reusable, and trustworthy scientific results.
This PICO session invites contributions that present software tools, workflows, and platforms that have advanced geoscience research. We welcome:
● New or updated toolboxes, software packages, and workflows that enhance data access, analysis, visualization, modelling, or interpretation in geosciences.
● Case studies demonstrating real-world impact of software in research and operations.
● Methodologies for software testing, continuous integration, versioning, upgrades, deployment, and sustainability.
● Tools and best practices for visualizing complex, high-dimensional, and high-frequency data.
● Developments of open-source visualization and exploration techniques for Earth system science data.
● Interoperability solutions that enable tools and datasets to work together across disciplines.
Sharing your resources, reusable workflows, and best practices is strongly encouraged to raise the overall quality, transparency, and reusability of research software. Live demonstrations, videos, and interactive examples are welcome (supplementary materials can be hosted on the EGU26 platform for access after the conference).
We warmly invite geoscientists, software developers, FAIR and Open Science ambassadors and researchers to participate in this session and share their experiences, insights, and solutions. Join us to help build a stronger, more collaborative, and future-ready geoscience software ecosystem!

PICO: Mon, 4 May, 08:30–18:00 | PICO spot 1b

PICO presentations are given in a hybrid format supported by a Zoom meeting featuring on-site and virtual presentations. The button to access the Zoom meeting appears 15 minutes before the time block starts.
Chairpersons: Monika Staszek, Tobias Kerzenmacher, Kostas Leptokaropoulos
08:30–08:35
08:35–08:37
|
PICO1b.1
|
EGU26-16635
|
On-site presentation
Christian Meeßen, Matthias Volk, Nils Brinckmann, Joachim Saul, and Frederik Tilmann

The moment tensor of an earthquake describes the force couples acting at the source location as a symmetric 3×3 matrix, and provides information on orientation and slip direction of the fault failure. Shear failure is represented by a double-couple, for which the trace of the moment tensor is zero and the intermediate eigenvalue is zero, with the corresponding eigenvector termed the neutral axis. In the seismology literature, this tensor is typically visualized in a two-dimensional beachball diagram, a lower-hemisphere projection of a three-dimensional sphere split into four quadrants by two perpendicular great circles oriented according to the eigenvectors of the matrix, such that the neutral axis points to the intersection of the two circles, and the other two eigenvectors point to the centre of each quadrant.
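The eigen-structure described above lends itself to a compact numerical check. The sketch below (NumPy, illustrative only and not code from the tool) recovers the principal axes and tests the double-couple condition:

```python
import numpy as np

def principal_axes(m):
    """Eigen-decomposition of a symmetric 3x3 moment tensor: returns
    eigenvalues in ascending order and the matching unit eigenvectors
    (as columns), i.e., the pressure, neutral, and tension axes."""
    m = np.asarray(m, dtype=float)
    assert np.allclose(m, m.T), "moment tensor must be symmetric"
    return np.linalg.eigh(m)  # eigh: solver specialized for symmetric matrices

def is_double_couple(m, tol=1e-9):
    """Pure shear: zero trace and zero intermediate eigenvalue, whose
    eigenvector is the neutral axis described above."""
    m = np.asarray(m, dtype=float)
    vals, _ = principal_axes(m)
    return abs(float(np.trace(m))) < tol and abs(vals[1]) < tol

# Example: a strike-slip double couple (only Mxy = Myx nonzero)
m_dc = [[0.0, 1.0, 0.0],
        [1.0, 0.0, 0.0],
        [0.0, 0.0, 0.0]]
```

For `m_dc`, the eigenvalues are (-1, 0, 1) and the neutral axis is vertical, which is the configuration the beachball texture encodes.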

In this contribution we present a new browser-based tool to visualize the moment tensors of earthquakes not just as a two-dimensional projection but as three-dimensional objects. Concretely we show an earthquake as a magnitude-scaled sphere, textured according to its moment tensor, and located at its hypocenter. We provide options to visualize only the double-couple part of the moment tensor, to color the spheres by depth only, and to also show earthquakes for which no moment tensor solution has been derived. Additional context is provided by the SLAB2 model of the Earth's subduction zones, as well as raster and vector map layers loaded via OGC-compliant APIs (WMS/WFS).

The tool runs entirely in a user’s web browser and fetches earthquake data from an FDSN Web Services (FDSNWS) event endpoint in QuakeML format, thus using a standardized API that is widely used in the seismology community. While our deployment is currently integrated into the GFZ Earthquake Explorer, using the GEOFON network, it is also compatible with other station networks that provide data via FDSNWS. The visualization is built using the open-source library CesiumJS with custom WebGL shaders that implement the coloring of the spheres as beachballs.
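As a sketch of the data-access pattern described above (not code from the tool; the host in the usage example is a placeholder), an FDSNWS event query can be composed and the resulting QuakeML parsed with only the Python standard library:

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

def build_event_query(base_url, **params):
    """Compose an fdsnws-event query URL. 'base_url' is the service
    root; FDSNWS event parameters are passed as keywords."""
    query = {"format": "xml"}  # request QuakeML
    query.update(params)
    return base_url.rstrip("/") + "/fdsnws/event/1/query?" + urlencode(sorted(query.items()))

def magnitudes(quakeml_text):
    """Pull magnitude values out of a QuakeML document, matching
    elements by local name so the code is namespace-agnostic."""
    root = ET.fromstring(quakeml_text)
    values = []
    for el in root.iter():
        if el.tag.rsplit("}", 1)[-1] == "mag":
            for child in el:
                if child.tag.rsplit("}", 1)[-1] == "value":
                    values.append(float(child.text))
    return values

# Usage (placeholder host): build_event_query("https://example.org", minmagnitude=5.5)
```

In the deployed tool the browser fetches such a URL directly and decodes the QuakeML client-side.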

The fact that no specialized software packages are needed also makes the tool suitable for a more general audience beyond scientists from the seismology community.

How to cite: Meeßen, C., Volk, M., Brinckmann, N., Saul, J., and Tilmann, F.: Earthquake Explorer goes 3D: A Browser-Based Tool for Interactive Earthquake Visualization, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-16635, https://doi.org/10.5194/egusphere-egu26-16635, 2026.

08:37–08:39
|
PICO1b.2
|
EGU26-2942
|
ECS
|
On-site presentation
Raquel Felix, Constance Chua, Anawat Suppasri, Ignatius Ryan Pranantyo, and Endra Gunawan

More than 80% of the world’s international trade is conducted via maritime transport, with ports serving as critical gateways for the transfer of goods between sea and land. In this research, we introduce a user-friendly tsunami risk analysis application for ports, developed with a graphical user interface. The application utilises existing published fragility curves (for structural damage, recovery potential for production capacity, and physical loss estimation) to perform the analyses and generate a risk report. The required inputs in this application are a tsunami inundation map and a shapefile containing the polygonal structures of the port, with an attribute table that includes the industry type identification. The main output of the application is a PDF report containing the probability distribution results across different tsunami inundation depth ranges, presented through inundation maps, summary tables, and bar plots. All raw image files used in the report, as well as the raw calculations in CSV format, are also included as part of the output. Preliminary testing of the application has been conducted to forecast tsunami impacts at Cilegon Port in West Java, Indonesia, under a worst-case scenario involving a Mw 8.9 earthquake from a rupture along the Sunda Megathrust, located southwest of the Sunda Strait. Cilegon Port lies on the north-western coast of Java Island, facing the strait. We will present the latest progress in developing our risk assessment application. This research is funded by the European Commission (Horizon Europe scheme) and UK Research and Innovation (EPSRC contract: EP/Z001080/1).
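Fragility curves of the kind the application evaluates are commonly modelled as lognormal cumulative distributions of inundation depth. The sketch below is a generic illustration with made-up parameters, not the published curves used by the tool:

```python
from math import erf, log, sqrt

def fragility(depth_m, median_m, beta):
    """Lognormal fragility curve: probability that a structure reaches
    a given damage state at inundation depth depth_m. median_m (the
    depth at 50% probability) and beta (log-standard deviation) are
    illustrative inputs, not values from any published curve."""
    if depth_m <= 0.0:
        return 0.0
    z = (log(depth_m) - log(median_m)) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF of z
```

Evaluating such a curve per depth bin yields the probability distributions reported in the tool's tables and bar plots.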

How to cite: Felix, R., Chua, C., Suppasri, A., Pranantyo, I. R., and Gunawan, E.: Developing a User-Friendly Tsunami Risk Assessment Tool for Ports: Application to Cilegon Port, Indonesia, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-2942, https://doi.org/10.5194/egusphere-egu26-2942, 2026.

08:39–08:41
|
PICO1b.3
|
EGU26-18279
|
On-site presentation
Monika Staszek, Jan Wiszniowski, Paulina Kucia, Jakub Kokowski, Grzegorz Lizurek, and Łukasz Rudziński

Imaging of underground structures is a primary objective of all geophysical methods. In areas exhibiting natural or anthropogenic seismic activity, recorded and accurately located earthquakes can be used to map the faults that host them. Initial images can be further improved by the use of relative relocation techniques and seismic events with highly similar waveforms (multiplets).

In this work, we present EqSimage, a Python package designed to identify multiplets, perform their relative relocation using the double-difference technique, and delineate potential fault planes. The package supports both continuous and triggered seismic data, which can be read directly from disk or downloaded from data centers using FDSN web services or ArcLink. Signal similarity is evaluated through cross-correlation of three-component seismic data. To distinguish groups of similar events, several clustering algorithms are available, including SciPy hierarchical clustering with the cross-correlation coefficient as a distance metric and clustering based on pick times only. Subsequently, the double-difference relocation of all identified multiplets is carried out using the original hypoDD software by Waldhauser (2001), version 2.1b. Finally, the relocated events are divided into groups and a best-fitting plane is determined for each group using the FaultNVC software (Sawaki et al., 2025).
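The similarity-clustering step can be sketched in a few lines of Python. This is an illustration of the approach (zero-lag normalized cross-correlation converted to the distance 1 − CC for SciPy hierarchical clustering), not the EqSimage implementation:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_by_similarity(waveforms, cc_threshold=0.8):
    """Group events with mutually similar waveforms: zero-lag
    normalized cross-correlation, turned into the distance 1 - CC
    and passed to SciPy average-linkage hierarchical clustering."""
    w = np.asarray(waveforms, dtype=float)
    w = w - w.mean(axis=1, keepdims=True)            # demean each trace
    w = w / np.linalg.norm(w, axis=1, keepdims=True) # unit energy
    cc = w @ w.T                                     # correlation matrix
    dist = np.clip(1.0 - cc, 0.0, 2.0)
    np.fill_diagonal(dist, 0.0)
    z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(z, t=1.0 - cc_threshold, criterion="distance")
```

Events whose pairwise correlation exceeds the threshold end up in the same multiplet label; in practice lag-searched cross-correlation would replace the zero-lag dot product.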

EqSimage performs all processing steps automatically based on a single configuration file. The output includes a relocated earthquake catalog in QuakeML format and estimated fault-plane orientations. Additionally, several visualization tools are provided at individual stages of the workflow in order to assess the performance of configuration parameters. These tools include visualization of identified multiplets (waveforms and relocated hypocenters), cross-correlation matrices, relocated events, and inferred fault planes.

We demonstrate the capabilities of EqSimage using several datasets representing different types of anthropogenic seismicity, including injection-induced, reservoir-triggered, and mining-induced seismicity. Case studies are presented from The Geysers geothermal field (California, USA), the Song Tranh 2 water reservoir (Vietnam), and a seismically active underground mine in Poland.

References:

Waldhauser, F.: hypoDD: A program to compute double-difference hypocenter locations, U.S. Geological Survey Open-File Report 01-113, 2001.

Sawaki, Y., Shiina, T., Sagae, K., Sato, Y., Horikawa, H., Miyakawa, A., Imanishi, K., & Uchide, T. (2025). Fault Geometries of the 2024 Mw 7.5 Noto Peninsula Earthquake From Hypocenter-Based Hierarchical Clustering of Point-Cloud Normal Vectors. J. Geophys. Res.: Solid Earth, 130(4), e2024JB030233.

This research was supported by research project no. 2022/45/N/ST10/02172, funded by the National Science Centre, Poland, under agreement no. UMO-2022/45/N/ST10/02172. This work was also partially supported by a subsidy from the Polish Ministry of Education and Science for the Institute of Geophysics, Polish Academy of Sciences.

How to cite: Staszek, M., Wiszniowski, J., Kucia, P., Kokowski, J., Lizurek, G., and Rudziński, Ł.: EqSimage: A Python Package for Fault Imaging from Earthquake Similarity, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18279, https://doi.org/10.5194/egusphere-egu26-18279, 2026.

08:41–08:43
|
EGU26-18627
|
ECS
|
Virtual presentation
Selina Bonini, Oona Scotti, Alessandro Valentini, Francesco Visini, Bruno Pace, Giulia Tartaglia, Giulio Viola, and Gianluca Vignaroli

Probabilistic Fault Displacement Hazard Analysis (PFDHA) quantifies the probability and the expected amount of coseismic displacement associated with the activity of Active and Capable Faults (ACFs) at a given site. Common PFDHA approaches distinguish between primary on-fault displacement and distributed off-fault ruptures occurring on secondary faults or fractures, and typically rely on empirical scaling relationships calibrated for specific earthquake magnitudes. However, these methods are often tied to specific tectonic or kinematic settings and lack readily available computational tools. Moreover, available PFDHA approaches do not commonly allow investigation of the floating rupture mechanism, i.e., the possibility that surface ruptures involve only portions of the full fault trace.

To overcome these limitations, we developed FaulTED, a new user-friendly MATLAB-based code for PFDHA that integrates a comprehensive set of published models, including magnitude–frequency distributions, fault scaling relationships, and surface rupture probability functions. The toolkit comprises two main modules: (i) a site-specific hazard curve calculator and (ii) a fault-specific hazard map generator for user-defined return periods. Both modules explicitly account for on-fault and distributed off-fault ruptures and incorporate the floating rupture approach commonly adopted in probabilistic seismic hazard analysis.
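The floating-rupture idea, i.e., ruptures occupying only part of the full trace, can be illustrated with a simple enumeration (a conceptual sketch, not FaulTED code):

```python
def floating_ruptures(fault_len_km, rupture_len_km, step_km):
    """Enumerate positions of a rupture that 'floats' along a fault
    trace, occupying only part of it. Returns (start, end) offsets in
    km along the trace; each position would be weighted in the hazard
    integral. Conceptual sketch only."""
    positions = []
    start = 0.0
    while start + rupture_len_km <= fault_len_km + 1e-9:
        positions.append((start, start + rupture_len_km))
        start += step_km
    return positions
```

A rupture shorter than the trace yields several admissible positions, so a site near the trace may or may not be crossed by any given floating rupture.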

The modular architecture of FaulTED allows users to flexibly select and compare alternative models through a structured input file, enabling sensitivity analyses and systematic exploration of epistemic uncertainties. FaulTED is designed as a user-oriented platform to support infrastructure planning in regions affected by ACFs.

How to cite: Bonini, S., Scotti, O., Valentini, A., Visini, F., Pace, B., Tartaglia, G., Viola, G., and Vignaroli, G.: FaulTED: a new user-friendly MATLAB-based code to assess probabilistic fault displacement hazard, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18627, https://doi.org/10.5194/egusphere-egu26-18627, 2026.

08:43–08:45
|
PICO1b.4
|
EGU26-13691
|
On-site presentation
GITpy: An open-source, data-driven framework for robust ground-motion parameter estimation across tectonic settings
(withdrawn)
Maria D'Amico, Paola Morasca, Daniele Spallarossa, Dino Bindi, Matteo Picozzi, Adrien Oth, and Francesca Pacor
08:45–08:47
|
PICO1b.5
|
EGU26-2634
|
On-site presentation
Maurizio Battaglia and Marco Bagnardi

Ground deformation can arise from tectonic and volcanic processes as well as from human activities, such as subsurface fluid withdrawal. Mathematical models describing crustal deformation in response to these processes are essential for characterizing driving mechanisms, constraining source location, size, orientation, and volume change. Models provide critical information for hazard forecasting and mitigation, assessing anthropogenic environmental impacts, land-use planning, and related applications.

In this context, analytical kinematic models remain essential tools for the rapid interpretation of deformation, particularly in operational and time-sensitive settings.

dMODELS is an open-source MATLAB environment designed primarily to model and interpret crustal deformation associated with volcanic activity and active fault systems by non-linear inversion of GNSS, InSAR, and tilt observations. The software consolidates a suite of analytical kinematic source models into a single, end-to-end framework that is modular (pre-processing → inversion → post-processing), consistent (standardized formulations across models), transparent (fully documented scripts with examples), and cross-platform (Windows and Linux). Although most analytical formulations originate from established literature, several equations have been verified, reformulated, standardized to ensure internal consistency, and validated against corresponding finite-element solutions.

The platform runs on Windows and Linux systems and is structured to support end-to-end modeling workflows, including: (a) preprocessing tools for data selection and formatting, (b) non-linear inversion routines for estimating source parameters and associated uncertainties, and (c) post-processing utilities for generating publication-ready figures. Each module is accompanied by examples and documentation, with a full user manual to be released by the U.S. Geological Survey.

The deformation sources implemented in dMODELS are kinematic representations, including pressurized cavities (spherical, spheroidal, or penny-shaped) and planar dislocations embedded in a homogeneous, isotropic elastic half-space. These constructs do not represent physical reservoirs directly but approximate the stress and strain fields produced by real subsurface processes. As such, dMODELS allows users to constrain source geometry, location, volume change, and stress distribution, although total reservoir volume and fluid properties remain unresolved.
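As an example of the analytical kinematic sources of this kind, the classic Mogi point source gives the vertical surface displacement in closed form. The sketch below is a textbook formulation shown for illustration, not code from the dMODELS package:

```python
from math import pi, sqrt

def mogi_uz(r_m, depth_m, dvol_m3, nu=0.25):
    """Vertical surface displacement of a Mogi point source with
    volume change dvol_m3 at depth depth_m, observed at radial
    distance r_m in a homogeneous elastic half-space:
        uz = (1 - nu) * dV * d / (pi * R**3),  R = sqrt(r**2 + d**2).
    Textbook formulation, for illustration only."""
    R = sqrt(r_m ** 2 + depth_m ** 2)
    return (1.0 - nu) * dvol_m3 * depth_m / (pi * R ** 3)
```

The peak uplift occurs directly above the source and decays with the cube of the source-to-observer distance, which is why such models constrain depth and volume change but not reservoir size.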

Despite the inherent simplifications of analytical models, their rigorous use, combined with high-quality geodetic datasets, provides powerful insights into active deformation sources and supports both research and monitoring applications. By making robust, reliable, and independently verified modeling tools readily accessible, dMODELS supports reproducible analyses and enables their use by a broader scientific and operational community.

How to cite: Battaglia, M. and Bagnardi, M.: dMODELS: An Open-Source, Modular MATLAB Environment for Geodetic Deformation Analysis, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-2634, https://doi.org/10.5194/egusphere-egu26-2634, 2026.

08:47–08:49
|
PICO1b.6
|
EGU26-2544
|
On-site presentation
Kostas Leptokaropoulos, Shinji Toda, Tom Garth, Kaede Yoshizawa, Ross Stein, Ryan Gallacher, Volkan Sevilgen, and Jian Lin

We introduce an integrated workflow in MATLAB that combines Coulomb 4.0, a major revision of the widely used Coulomb stress interaction and deformation application, with the ISC Earthquake Toolbox, which provides direct access to the International Seismological Centre (ISC) Bulletin. This interoperability enables researchers to seamlessly transition from global earthquake data acquisition to stress interaction analysis within a single environment.

The workflow begins by querying and importing earthquake catalogs from the ISC Bulletin using the toolbox’s GUI, allowing selection by time, region, depth and magnitude. These events can then be visualized in 3D and cross-section views, and their parametric data, including moment tensors, are used to define fault geometries in Coulomb 4.0. New Coulomb features, such as automatic fault parameter scaling from magnitude and interactive fault editing, streamline the setup of rupture planes based on ISC-reported events. Stress transfer calculations and deformation modelling can then be performed, with results displayed alongside seismicity overlays for comprehensive interpretation.
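Automatic fault-parameter scaling from magnitude typically uses empirical regressions such as Wells and Coppersmith (1994). The sketch below quotes their all-slip-type subsurface rupture length relation for illustration; whether Coulomb 4.0 uses this particular regression is an assumption:

```python
def wc94_subsurface_length_km(mw):
    """Empirical subsurface rupture length vs. moment magnitude,
    Wells & Coppersmith (1994), all slip types:
        log10(L) = -2.44 + 0.59 * Mw.
    Quoted for illustration and subject to the regression's stated
    uncertainty; not necessarily the relation used by Coulomb 4.0."""
    return 10.0 ** (-2.44 + 0.59 * mw)
```

For a Mw 7.0 event this gives a rupture length of roughly 50 km, enough to seed a first-pass rupture plane from a catalog entry.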

This combined approach enhances reproducibility and efficiency by eliminating manual data handling and enabling dynamic visualization of both seismicity and modelled stress/deformation changes. We demonstrate the workflow using recent seismic sequences, highlighting its potential for earthquake interaction studies, hazard assessment, and educational applications. By bridging global seismic data with advanced stress modelling, this interoperability represents a significant step toward integrated geoscience software ecosystems.

The ISC Earthquake Toolbox can be freely accessed from:

  • GitHub (https://github.com/tomgarth/ISC_Earthquake_Toolbox) and
  • File Exchange (https://www.mathworks.com/matlabcentral/fileexchange/167786-isc-earthquake-toolbox?s_tid=srchtitle)

Coulomb 4.0 can be freely accessed from:

  • GitHub (https://github.com/YoshKae/Coulomb_ver4) and
  • temblor.net/coulomb/.

How to cite: Leptokaropoulos, K., Toda, S., Garth, T., Yoshizawa, K., Stein, R., Gallacher, R., Sevilgen, V., and Lin, J.: Integrated Workflow for Earthquake Stress Modelling and Seismicity Analysis Using Coulomb 4.0 and the ISC Earthquake Toolbox, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-2544, https://doi.org/10.5194/egusphere-egu26-2544, 2026.

08:49–08:51
|
PICO1b.7
|
EGU26-7901
|
On-site presentation
Stefania Gentili, Letizia Caravella, and Giuseppe Davide Chiappetta

In this work, we present a new and improved version of the NExt STrOng Related Earthquake (NESTORE) software, originally released as NESTOREv1.0 and publicly available as a MATLAB-based package. The original version of NESTORE was specifically designed to forecast the occurrence of strong aftershocks in the first few hours following a mainshock, providing a valuable tool for short-term seismic hazard assessment.

The newly developed version introduces several methodological and computational improvements aimed at increasing the robustness and reliability of the forecasting framework. Among the main upgrades is the integration of the REPENESE (RElevant features PErcentage class weighting NEighborhood detection SElection) algorithm, an advanced outlier detection method explicitly designed to handle class imbalance and skewed datasets, which are characteristic of the seismicity features we used. This integration enables a more effective identification and treatment of anomalous events, thereby improving classifier performance.

In addition, the new version implements a k-fold cross-validation strategy to estimate model performance. This approach allows a more stable and unbiased evaluation of predictive capabilities compared to single-split validation methods, especially with limited or heterogeneous data. Overall, the combination of these enhancements results in a more flexible, accurate, and reliable tool for the analysis of earthquake clusters and the early forecasting of strong aftershocks.
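The k-fold strategy can be sketched generically (plain Python, not the NESTORE implementation):

```python
import random

def kfold_indices(n, k, seed=0):
    """Split n sample indices into k shuffled folds and return a
    (train, test) pair of index lists per fold, so every sample is
    tested exactly once. Generic sketch of the cross-validation
    strategy, not the NESTORE code."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # reproducible shuffle
    folds = [idx[i::k] for i in range(k)]     # k near-equal folds
    splits = []
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((train, test))
    return splits
```

Averaging the classifier score over the k held-out folds gives the more stable performance estimate described above.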

Funded within the RETURN Extended Partnership and received funding from the European Union Next-Generation EU (National Recovery and Resilience Plan—NRRP, Mission 4, Component 2, Investment 1.3—D.D. 1243 2/8/2022, PE0000005) and by the grant “Progetto INGV Pianeta Dinamico: NEar real-tiME results of Physical and StatIstical Seismology for earthquakes observations, modelling and forecasting (NEMESIS)” - code CUP D53J19000170001 - funded by Italian Ministry MIUR (“Fondo Finalizzato al rilancio degli investimenti delle amministrazioni centrali dello Stato e allo sviluppo del Paese”, legge 145/2018).

How to cite: Gentili, S., Caravella, L., and Chiappetta, G. D.: An enhanced version of the NESTORE software for strong aftershock forecasting, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7901, https://doi.org/10.5194/egusphere-egu26-7901, 2026.

08:51–08:53
|
PICO1b.8
|
EGU26-2972
|
On-site presentation
Fiona Zarodova, Andrew Redfearn, and Kostas Leptokaropoulos

SHAppE (Seismic HAzard Parameters Evaluation app) is a MATLAB-based App for time-dependent probabilistic seismic hazard analysis, designed to simplify data access, analysis and visualization workflows. Since its release in April 2025, SHAppE has been well received by researchers and educators, providing an intuitive interface for complex hazard evaluations without requiring advanced programming skills. 

While the initial version was fully functional, community feedback has been invaluable in refining the app. Over the past year, more than 50 issues (including bug fixes, usability improvements, and feature enhancements) were addressed, many reported directly by users. This collaborative process has led to significant upgrades, such as improved data selection workflows, expanded parameter set options, and the ability to extract the complete set of custom filters and applied parameters, further strengthening reproducibility.

SHAppE also integrates with external sources such as the ISC Earthquake Toolbox for MATLAB, enabling direct access to global earthquake bulletins without significant preprocessing. Community contributions have been critical to improving the app, and we encourage continued feedback to drive future development.

SHAppE is freely available via: 

  • GitHub (https://github.com/mathworks/Seismic-HAzard-Parameters-Evaluation-Interface-SHAppE) and  
  • File Exchange (https://www.mathworks.com/matlabcentral/fileexchange/180879-shappe-seismic-hazard-parameters-evaluation-interface) 

How to cite: Zarodova, F., Redfearn, A., and Leptokaropoulos, K.: SHAppE After One Year: Community Feedback Driving Seismic Hazard Analysis Forward, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-2972, https://doi.org/10.5194/egusphere-egu26-2972, 2026.

08:53–08:55
|
PICO1b.9
|
EGU26-19638
|
ECS
|
On-site presentation
Donato Talone, Luca De Siena, and Nicholas Rawlinson

Recent improvements in seismic data acquisition (such as enhanced network coverage, near-real-time analysis, and machine-learning data processing) have significantly increased the availability of data. However, due to a lack of time and/or analysts, these data are often only partially processed and not utilized to their full potential. The rapid development of new tools for analyzing earthquake records can help, but may also decrease the stability achieved through the widespread use of tried and tested software. Additionally, robust codes developed in the literature often rely on efficient but rigid programming languages, such as Fortran, which may not accommodate new and variable data formats. In this context, it becomes crucial to revitalize and enhance existing software by making it more accessible and user-friendly for a broader community across various applications.

One of the possible solutions for addressing this issue is the development of Graphical User Interfaces (GUIs) for terminal-only software. Here, we developed a Python GUI designed to simplify tomography applications using local earthquakes, based on FMTOMO (Rawlinson and Urvoy, 2006), an iterative non-linear Fast-Marching seismic tomography code. Despite its well-documented usage, FMTOMO suffers from the requirement for strictly formatted input files, which are not compatible with the various storage formats commonly used for seismological data. We leverage the ObsPy toolbox (Beyreuther et al., 2010) to enable reading or downloading seismic data in multiple formats and converting it to the FMTOMO format.
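The kind of strictly formatted input such Fortran codes expect can be illustrated with a fixed-width record writer. The column layout below is hypothetical and does not reproduce FMTOMO's actual input specification:

```python
def format_pick_line(lat_deg, lon_deg, depth_km, ttime_s):
    """One fixed-width record of the kind rigid Fortran readers expect:
    latitude (F10.4), longitude (F11.4), depth (F8.2), travel time
    (F9.3). The column layout is hypothetical and chosen only to
    illustrate the conversion step; it is not FMTOMO's format."""
    return f"{lat_deg:10.4f}{lon_deg:11.4f}{depth_km:8.2f}{ttime_s:9.3f}"
```

A converter of this kind sits between flexible inputs (e.g., picks read via ObsPy) and the rigid file the Fortran code consumes, which is exactly the gap the GUI automates.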

The graphical interface, called G-LEFMTOMO, also facilitates the setup process for both the direct and inverse problems by automating repetitive steps that were previously manual. This includes the creation of trade-off curves for tuning damping and smoothing parameters. Additionally, we implemented a feature for the pre-analysis of the source-receiver distribution by generating seismic ray hit-maps before the full tomography process. We also aim to simplify the output format and visualization to facilitate easy sharing of results.

G-LEFMTOMO enables users to manage the entire workflow, from data input to the visualization of tomography models, all within a single interface. For more complex configurations or specific requirements, users can still run the original FMTOMO code through the terminal, allowing the GUI to be utilized for only part of the project if desired.

The introduction of graphical user interfaces in the software community enables scientists to access a wider range of software for data analysis, overcoming the limitations of complex and inflexible software. This development not only expands the resources available to researchers but also enhances the value of raw data, helping to prevent its under-utilization.

References

  Rawlinson, N. and Urvoy, M.: Simultaneous inversion of active and passive source datasets for 3-D seismic structure with application to Tasmania, Geophys. Res. Lett., 33, L24313, https://doi.org/10.1029/2006GL028105, 2006.
  Beyreuther, M., Barsch, R., Krischer, L., Megies, T., Behr, Y., and Wassermann, J.: ObsPy: A Python Toolbox for Seismology, Seismological Research Letters, 81, 530–533, https://doi.org/10.1785/gssrl.81.3.530, 2010.

How to cite: Talone, D., De Siena, L., and Rawlinson, N.: G-LEFMTOMO: a Graphical User Interface for performing Local Earthquake Tomography using the FMTOMO code, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-19638, https://doi.org/10.5194/egusphere-egu26-19638, 2026.

08:55–08:57
|
PICO1b.10
|
EGU26-624
|
ECS
|
On-site presentation
Accessible Recovery of Vintage Seismic Sections: From Paper to Migrated and Depth-Converted Data
(withdrawn)
Alejandro Pertuz, Mª Isabel Benito, Pablo Suarez-Gonzalez, Pilar Llanes, and Martín García-Martín
08:57–10:15
Coffee break
Chairpersons: Stefania Gentili, Christof Lorenz, Raquel Felix
10:45–10:47
|
PICO1b.1
|
EGU26-22713
|
ECS
|
On-site presentation
Christopher Ahn, Juan Ruiz, Jorge Gacitua, Alexandra Diehl, and Renato Pajarola

Ensemble prediction systems are central to modern numerical weather forecasting, providing distributions of plausible atmospheric outcomes rather than single deterministic trajectories. While these ensembles are essential for assessing uncertainty, interactive exploration of ensemble structure, extremes, and spatio-temporal variability remains challenging in practice. Existing workflows rely predominantly on server-centric pipelines—typically Python/Xarray/Dask stacks or VTK-based backends—where computation and rendering occur remotely and the browser functions primarily as a thin client. These architectures introduce latency, require substantial data staging, and often collapse ensembles into low-order summaries that obscure multimodality and extremes.

We present NextSembles, a browser-native system for interactive ensemble uncertainty analysis that relocates data access, statistical computation, and visualization entirely to the client. NextSembles compiles the NetCDF-C library to WebAssembly, enabling standards-compliant NetCDF ingestion directly in the browser. Ensemble variables are decoded into contiguous slabs within WebAssembly linear memory and exposed as typed array views. Statistical reductions—including mean, variance, standard deviation, and probability-of-exceedance—are computed using C/WASM kernels operating directly on this memory, avoiding server round-trips and intermediate data representations.

To maintain responsiveness on large ensemble fields, NextSembles employs a tile-based execution model that subdivides spatial slices into latency-bounded units of work. Tile updates are propagated incrementally to the renderer, enabling progressive visual feedback while preserving full-resolution views. Visualization is performed using VTK-WASM (with WebGPU when available and WebGL fallback), supporting interactive exploration of spatial slices alongside coordinated temporal, distributional, and member-comparison views. A multitrack uncertainty timeline facilitates rapid identification of forecast periods exhibiting elevated ensemble spread.
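The tile-based reduction can be sketched in NumPy as a stand-in for the C/WASM kernels; the tile size and function signature are illustrative, not the NextSembles API:

```python
import numpy as np

def prob_exceedance_tiled(field, threshold, tile=64):
    """Probability of exceedance across the ensemble axis, computed
    tile by tile so each unit of work stays small and results can be
    handed to the renderer incrementally. field: (members, ny, nx).
    NumPy stand-in for the C/WASM kernels described above."""
    n_members, ny, nx = field.shape
    out = np.empty((ny, nx))
    for y0 in range(0, ny, tile):
        for x0 in range(0, nx, tile):
            block = field[:, y0:y0 + tile, x0:x0 + tile]
            out[y0:y0 + tile, x0:x0 + tile] = (block > threshold).mean(axis=0)
    return out
```

Because each tile is an independent reduction, the full-field result is identical to a single pass while allowing progressive visual feedback between tiles.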

We evaluate NextSembles on COSMO-1e/2e ensemble datasets, measuring kernel-level performance, end-to-end interaction latency, and data staging costs. Results show that browser-resident C/WASM reducers sustain sub-200 ms interaction latency for common analysis tasks on commodity hardware, enabling responsive, distribution-aware ensemble exploration without reliance on HPC backends or Python services.

NextSembles demonstrates that revisiting the execution model of ensemble uncertainty analysis enables transparent, low-latency workflows directly in the browser, complementing existing server-centric approaches.

How to cite: Ahn, C., Ruiz, J., Gacitua, J., Diehl, A., and Pajarola, R.: Revisiting the Execution Model of Ensemble Uncertainty Analysis in the Browser, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-22713, https://doi.org/10.5194/egusphere-egu26-22713, 2026.

10:47–10:49
|
PICO1b.2
|
EGU26-17193
|
On-site presentation
Wolfgang Schwanghart, Boris Gailleton, Anna-Lena Lamprecht, Dirk Scherler, and Kearney William

Many numerical models depend critically on digital elevation models (DEMs). Hydrodynamic models, landslide susceptibility models, or glacial models, for example, require DEMs as input. The quality of DEMs and of DEM preprocessing is thus vital for many models. Hydrodynamic simulations are highly sensitive to DEM errors and artefacts, while other models require smoothing to ensure numerical stability. Model outputs are likewise strongly controlled by topography: flood extents depend on topographic gradients and surface elevations relative to river channels, glacier extent is constrained by topographic height and confinement, and geomorphic models generate time-varying DEMs that document topographic changes over time. These close links suggest that model preparation, execution, and analysis should be conducted within a unified terrain-analysis environment.

TopoToolbox is a terrain analysis framework originally developed in MATLAB and now available in Python and R. We demonstrate how simulation software can be integrated into TopoToolbox by various means, including via the C API of libtopotoolbox or the higher-level interfaces provided by the language-specific implementations of TopoToolbox. Interfacing with TopoToolbox enables seamless DEM preprocessing alongside visualization and analysis of model outputs. Working within a single, specialized environment simplifies workflows, improves reproducibility, and enhances the usability and dissemination of modeling software.

How to cite: Schwanghart, W., Gailleton, B., Lamprecht, A.-L., Scherler, D., and William, K.: Integrating numerical models and terrain analysis in TopoToolbox, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-17193, https://doi.org/10.5194/egusphere-egu26-17193, 2026.

10:49–10:51
|
PICO1b.3
|
EGU26-1798
|
ECS
|
On-site presentation
Jeran Poehls, Lazaro Alonso, and Nuno Carvalhais

The scale and complexity of multidimensional scientific data, particularly in Earth sciences, necessitate distilling that information into a palatable visual form. This process is most efficient when visualizing and interacting with the data in its native higher-dimensional form. Despite their inherent 3D and 4D structure, these data are frequently reduced to static 2D plots or animated sequences, obscuring critical spatial relationships, temporal dynamics, and emergent patterns.
Currently, 3D and 4D visualization is largely confined to standalone applications or niche GPU-powered libraries. These options provide powerful capabilities but require significant software installation, specialized workflows, and domain-specific expertise, creating a high barrier to entry that deters many researchers.

We introduce Browzarr, an open-source framework designed to facilitate convenient multidimensional data exploration from any web-connected device. With native support for Zarr and NetCDF, users can immediately dive into their data with no additional configuration, installation, or dependencies. A modular architecture and open-source design ensure adaptability to evolving research needs, enabling seamless integration with emerging data formats, analytical workflows, and user-driven extensions.
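
What makes browser-side access to formats like Zarr practical is chunked content layout: every chunk is an independently fetchable object, so a viewer requests only what a view needs. A stdlib-only sketch of Zarr-v2-style chunk key addressing (simplified; real stores add metadata, compression, and consolidated indices):

```python
import math

def chunk_keys(shape, chunks):
    """All Zarr-v2-style chunk keys ("i.j") for an array of given shape."""
    grid = [math.ceil(s / c) for s, c in zip(shape, chunks)]
    keys = [[]]
    for n in grid:                       # cartesian product over the chunk grid
        keys = [k + [i] for k in keys for i in range(n)]
    return [".".join(map(str, k)) for k in keys]

def keys_for_slice(chunks, starts, stops):
    """Chunk keys intersecting the hyper-rectangle [starts, stops).

    A viewer fetches only these keys, so panning or zooming transfers a
    bounded amount of data regardless of total array size.
    """
    ranges = [range(lo // c, math.ceil(hi / c))
              for lo, hi, c in zip(starts, stops, chunks)]
    keys = [[]]
    for r in ranges:
        keys = [k + [i] for k in keys for i in r]
    return [".".join(map(str, k)) for k in keys]

all_keys = chunk_keys((100, 200), (50, 50))            # 2 x 4 chunk grid
needed = keys_for_slice((50, 50), (0, 60), (40, 130))  # keys under a view window
```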

How to cite: Poehls, J., Alonso, L., and Carvalhais, N.: Browzarr–Interactive Viewing and Inference of Multidimensional Datasets, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1798, https://doi.org/10.5194/egusphere-egu26-1798, 2026.

10:51–10:53
|
PICO1b.4
|
EGU26-17152
|
ECS
|
On-site presentation
Maximilian Söchting and Miguel D. Mahecha

As Earth system data streams and models grow larger, more complex, and higher-dimensional, the demand for capable data visualization and exploration tools increases. While specialized data cube visualization tools have been developed in recent years, they typically rely on technical compromises to address the data access problems posed by large data sets. Some of the existing tools provide support for visualizing arbitrary 3D data chunks by making parts of the cube transparent, commonly known as volume or voxel rendering, i.e., “looking inside the data set”. This voxel rendering can communicate spatiotemporal patterns effectively and has a much higher information density than previous data cube visualization approaches, but is computationally demanding and scales strongly with the size of the visualized data set.

Here we present an interactive voxel visualization for large Earth system data cubes, integrated into the existing Lexcube.org data cube visualization and its open-source Python package. The voxel visualization allows users to highlight and visualize value ranges based on thresholds, creating "voxel clouds" in three-dimensional space-time. Additionally, users can highlight extreme values by selecting a quantile range, based on deviations from the mean seasonal cycle and other definitions of “extreme”. To enable this visualization for large data sets, we developed a novel lossy compression algorithm based on variable quantization of 3D blocks that significantly reduces both the VRAM required for the visualization and the computational effort of the ray tracing. The algorithm preserves high information content by encoding 3D chunks of high variance at high resolution, while chunks of nearly uniform values are compressed heavily, respecting a user-configurable error metric. This way, the scientific accuracy of the visualization is guaranteed and quantified, while enabling the previously impossible voxel visualization and exploration of large data sets.
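
The idea of block quantization with a guaranteed error bound can be sketched as follows. This is an illustrative NumPy toy, not the Lexcube algorithm itself (which operates on GPU-resident data and varies quantization per block); the block size and error bound are invented:

```python
import numpy as np

def quantize_block(block, max_abs_err):
    """Uniformly quantize one 3D block so reconstruction error <= max_abs_err.

    Near-uniform blocks need very few levels; high-variance blocks get more,
    which is the source of the variable compression ratio.
    """
    lo, hi = float(block.min()), float(block.max())
    step = 2.0 * max_abs_err                       # level spacing for the bound
    n_levels = max(1, int(np.ceil((hi - lo) / step)) + 1)
    codes = np.round((block - lo) / step).astype(np.uint16)
    return codes, lo, step, n_levels

def dequantize(codes, lo, step):
    return lo + codes.astype(float) * step

rng = np.random.default_rng(1)
cube = rng.normal(size=(8, 8, 8))                  # one 3D block of a data cube
codes, lo, step, n_levels = quantize_block(cube, max_abs_err=0.05)
recon = dequantize(codes, lo, step)
err = np.abs(recon - cube).max()                   # bounded by max_abs_err
```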

Building on the previous Lexcube software, the new software stays compatible with a wide range of desktop and mobile devices by relying on WebGL 2 instead of adopting the modern successor WebGPU. Because the data backbone relies on Xarray, any gridded three-dimensional Zarr, NetCDF, or other supported data set can be ingested and visualized with our software, on Lexcube.org or using our open-source package for Jupyter notebooks, available on GitHub and PyPI.

How to cite: Söchting, M. and Mahecha, M. D.: Interactive Voxel Visualization of Large Earth System Data Cubes, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-17152, https://doi.org/10.5194/egusphere-egu26-17152, 2026.

10:53–10:55
|
PICO1b.5
|
EGU26-19078
|
On-site presentation
Matthias Pohl and Joshua Reibert

The exponential growth in data generation across scientific domains has amplified the critical role of data science in extracting actionable insights from complex datasets (Chen, 2012; Müller, 2018; Wamba, 2017; Yin, 2015). Traditional data science methodologies, such as the Cross-Industry Standard Process for Data Mining (CRISP-DM) and the Knowledge Discovery in Databases (KDD) process, provide structured frameworks for data processing and model development (Fayyad, 1996; Shearer, 2000). However, these approaches often treat visualization as a terminal step for communicating results rather than as an integral component of the analytical process. Visual analytics addresses this limitation by emphasizing human-computer interaction throughout the analytical workflow, enabling iterative exploration, hypothesis testing, and knowledge generation through interactive visual interfaces (Keim, 2008; Sacha, 2014; Thomas, 2006). Data scientists increasingly rely on computational notebooks for their flexibility in combining code, data, and visualization within unified environments (Chattopadhyay, 2020; Kosara, 2023). However, traditional notebook platforms face significant challenges, including a lack of reproducibility due to execution order dependencies, limited interactivity, difficult version control, and constrained deployment options (Chattopadhyay, 2020). These limitations create friction when transitioning from exploratory analysis to production systems, particularly for visual analytics applications requiring sophisticated interactive visualizations and real-time analytical capabilities (Barik, 2016; Haertel, 2023).

This research investigates the applicability of modern interactive visualization notebooks as comprehensive platforms for end-to-end data science and visual analytics pipelines. The solution artifact employs Marimo, an open-source Python notebook solution that addresses traditional notebook limitations through reactive cell execution and deterministic ordering, as well as a pure-Python file structure (Kluyver, 2016). The approach integrates multiple technologies, including object storage (e.g., MinIO) for centralized data repositories, analytical databases for efficient data management, and declarative visualization libraries based on Vega and Vega-Lite grammars for flexible interactive graphics (Heer, 2024; VanderPlas, 2018). The methodology is demonstrated through a space weather exploration use case examining the impact of solar activity on Global Navigation Satellite Systems (Su, 2019). The implementation follows the KDD process phases (Fayyad, 1996), beginning with the selection of the NEDM space weather model, which provides three-dimensional electron density estimates based on the F10.7 solar flux index combined with satellite orbital data (Hoque, 2022). The process continues with preprocessing to calculate rolling averages of solar activity indices and to derive satellite identifiers. Following this, transformations are performed to determine satellite positions using Simplified General Perturbations algorithms and to aggregate electron density across spatial grids. Data mining is utilized to create interactive visualizations of visible satellites, including their calculated electron content values. Ultimately, interpretation facilitates interactive selection and recalibration through user-driven dashboard interfaces.
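
The preprocessing step described, rolling averages of a solar activity index, can be sketched with a trailing rolling mean. The values below are invented stand-ins, not real F10.7 data, and the real pipeline operates on the NEDM inputs described above:

```python
import numpy as np

def rolling_mean(series, window):
    """Trailing rolling mean; the first window-1 entries use a growing
    window, so the output has the same length as the input."""
    values = np.asarray(series, dtype=float)
    csum = np.cumsum(np.insert(values, 0, 0.0))    # prefix sums, csum[0] = 0
    out = np.empty(len(values))
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        out[i] = (csum[i + 1] - csum[lo]) / (i + 1 - lo)
    return out

# Illustrative daily F10.7-style index values (arbitrary numbers)
f107 = [70, 72, 75, 74, 80, 85, 90, 88, 86, 84]
f107_smooth = rolling_mean(f107, window=3)
```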

The demonstrator effectively combines data management, processing, and interactive visual analytics into a cohesive notebook environment. This integration fosters streamlined workflows that reduce friction between disparate tools, enhances transparency through documented, reproducible analytical processes (Kosara, 2023), and facilitates real-time interactivity, enabling dynamic parameter adjustments and iterative exploration. Additionally, it provides extensive support for visual analytics that spans the entire knowledge-generation model, from data transformation to insight discovery (Sacha, 2014).

How to cite: Pohl, M. and Reibert, J.: Enhancing Data Science Pipelines through Interactive Environments for Visual Analytics of Spatiotemporal Data, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-19078, https://doi.org/10.5194/egusphere-egu26-19078, 2026.

10:55–10:57
|
PICO1b.6
|
EGU26-5309
|
On-site presentation
Marco Kulüke, Ivonne Anders, Karsten Peters-von Gehlen, Carsten Ehbrecht, Kameswar Rao Modali, and Hannes Thiemann

Climate and geoscience research increasingly relies on complex infrastructures and software to access, analyse, and reuse large and heterogeneous datasets. However, researchers often face fragmented data access, limited interoperability between platforms, and high entry barriers to cross-disciplinary data reuse. This conference contribution presents a user-centric infrastructure concept that combines the InterPlanetary File System (IPFS) software with the FAIR Digital Objects (FDO) standard to address these challenges and support intuitive research workflows.

At the core of the approach is the representation of geoscientific datasets as FAIR Digital Objects that bundle data and metadata into persistent and interoperable entities. From a user perspective, FDOs provide identifiers and provenance information that enable consistent discovery, access, and reuse of data across platforms and disciplines. Within this framework, IPFS acts as infrastructure software, providing a robust, decentralized, peer-to-peer, content-addressable file-sharing system that ensures data integrity, redundancy, and long-term accessibility, while being abstracted behind user-facing interfaces and workflows.
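
The integrity property that content addressing provides can be illustrated with a stdlib-only sketch. This is a deliberate simplification: real IPFS content identifiers (CIDs) use multihash/multibase encodings and chunked Merkle DAGs, and the store below is an in-memory stand-in for a peer-to-peer network:

```python
import hashlib

def content_id(data: bytes) -> str:
    """A simplified content identifier: SHA-256 hex digest of the bytes.

    The address commits to the content itself, so any tampering or
    corruption is detectable on retrieval — the core IPFS guarantee.
    """
    return hashlib.sha256(data).hexdigest()

store = {}  # stand-in for a decentralized block store

def put(data: bytes) -> str:
    cid = content_id(data)
    store[cid] = data
    return cid

def get(cid: str) -> bytes:
    data = store[cid]
    if content_id(data) != cid:          # integrity check on retrieval
        raise ValueError("content does not match its identifier")
    return data

cid = put(b"ORCESTRA dropsonde profile, 2024-08-16")
roundtrip = get(cid)
```

Because the identifier is derived from the bytes, identical datasets deduplicate automatically and any replica can serve the request, which underpins the redundancy and long-term accessibility claims above.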

This infrastructure concept is illustrated through a user-driven test case derived from the ORCESTRA (Organized Convection and EarthCARE Studies over the Tropical Atlantic) campaign. ORCESTRA integrates satellite observations, airborne measurements, ground-based instrumentation, and climate model simulations, reflecting a wide variety of data sizes and types. User stories obtained from the campaign, such as comparing data from multiple sources, guided the design of the infrastructure concept. A demonstration shows how selected datasets were ingested into IPFS and exposed through an FDO-compliant catalogue, enabling unified access and seamless reuse across tools and platforms.

The presented test case illustrates how a user-driven IPFS-based software approach, together with the multidisciplinary FDO metadata standard, can be operationalized to enhance transparency, reproducibility, and hence, sustainability in geoscience research. By supporting interoperable and machine-actionable research assets, this infrastructure concept contributes to a more robust and future-ready geoscience software ecosystem. Beyond geoscience, this approach is transferable to other domains facing similar challenges in data-intensive, multi-instrument, and multi-model environments.

How to cite: Kulüke, M., Anders, I., Peters-von Gehlen, K., Ehbrecht, C., Modali, K. R., and Thiemann, H.: A User-Centric Software Infrastructure for Geoscience Data Using IPFS and FAIR Digital Objects, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-5309, https://doi.org/10.5194/egusphere-egu26-5309, 2026.

10:57–10:59
|
PICO1b.7
|
EGU26-18853
|
On-site presentation
The EMODnet Portal: how to present a unified data discovery and download service of complex and diverse European marine data to the public
(withdrawn)
Conor Delaney, Joana Beja, Tim Collart, Frederic Leclercq, and Bart Vanhoorne
10:59–11:01
|
PICO1b.8
|
EGU26-13219
|
On-site presentation
Marco Micotti, Elena Matta, Elisa Bozzolan, Simone Corti, Davide Cantaluppi, and Enrico Weber

Chemical pollution poses a growing threat to aquatic ecosystems and water resources across the Mediterranean region, driven by contaminants of emerging concern and complex land–sea interactions. Addressing this challenge requires tools capable of integrating datasets at multiple spatial and temporal scales and user-friendly interfaces to support knowledge sharing across diverse territorial and social contexts. Within this framework, the Water Information and Remediation Platform (iWIRE) has been developed as part of the EU Horizon Europe project iMERMAID (Innovative solutions for Mediterranean Ecosystem Remediation via Monitoring and decontamination from Chemical Pollution).

iWIRE is a web-based platform designed to collect, harmonise, and visualise environmental and water quality data, providing a unified entry point to explore site-specific characteristics, monitoring data, climate conditions, and the performance of remediation activities.

From a research software perspective, iWIRE addresses key challenges related to reproducibility, interoperability, and usability in geoscientific platforms. The system is built on a fully open-source technology stack and follows a modular design, allowing individual components to be updated, extended, or reused across different projects and environmental contexts.

The platform supports interoperability by integrating heterogeneous datasets from laboratory analyses, in situ sensors, climate services, satellite-derived datasets, and regulatory sources. Data ingestion is enabled through a wide range of input formats, from simple text files to standardised data structures and API-based connections. This approach enables seamless data exchange between tools and facilitates cross-disciplinary analyses spanning hydrology, environmental monitoring, and water treatment processes.

iWIRE has been tested across five Mediterranean pilot areas, providing concrete case studies that demonstrate its operational use in real-world environmental monitoring and remediation assessment. These examples highlight how research software can effectively bridge scientific analysis and decision-making in applied geoscience contexts.

The platform software architecture combines a modular Content Management System with interactive data-visualisation dashboards. Public-facing content and access management are handled through a Drupal-based frontend, while use-case dashboards are developed in Redash and dynamically connected to structured datasets hosted in relational databases, online spreadsheets, or accessed via APIs. This architecture enables near-real-time updates, flexible data integration, and consistent visualisation across heterogeneous data sources.

To address data-sensitivity constraints, the platform supports differentiated access levels, combining publicly accessible dashboards with restricted views for confidential wastewater treatment plant data.

How to cite: Micotti, M., Matta, E., Bozzolan, E., Corti, S., Cantaluppi, D., and Weber, E.: An Interoperable Web Platform for Water Quality Monitoring and Remediation in the Mediterranean: the iWIRE Platform, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-13219, https://doi.org/10.5194/egusphere-egu26-13219, 2026.

11:01–11:03
|
PICO1b.9
|
EGU26-22316
|
ECS
|
On-site presentation
Christina Carrozzo Hellevik, Dina Margrethe Aspen, Christian Klöckner, Erica Margareta Löfström, Ramzi Hassan, and Ricardo da Silva Torres

The complex environmental challenges we face require sound decision-making. As the update on the Planetary Boundaries Framework shows, we are now beyond a safe operating space for humanity. Environmental decision-support tools have the potential to guide decision-makers in addressing such complex challenges and ensuring safety for humans and ecosystems. However, many studies have highlighted a ‘use gap’ and recommend better tool evaluation practices, as these differ greatly across and within disciplines. In this systematic literature review, we investigate how environmental decision-support tools are currently evaluated by considering three types of parameters: tool-user interaction, user impacts, and tool effectiveness. We also systematize the data collection methods used to measure these parameters. Based on the results, we map the tool-aided decision space and recommend adapted evaluation approaches based on the goals and focus of each study. We further propose a comprehensive framework to guide the choice of decision-support tool evaluation scope and methods.

How to cite: Hellevik, C. C., Aspen, D. M., Klöckner, C., Löfström, E. M., Hassan, R., and da Silva Torres, R.: Environmental Decision-Support Tool Evaluation: What Impacts Can Be Measured and How?, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-22316, https://doi.org/10.5194/egusphere-egu26-22316, 2026.

11:03–11:05
|
PICO1b.10
|
EGU26-19144
|
On-site presentation
Jewgenij Torizin and Nick Schüßler

Semantic segmentation of texture-rich Earth-science imagery (e.g. UAV and outcrop photographs) is common, but supervised segmentation workflows are often assembled from disconnected tools and still rely on labour-intensive, dense pixel-wise annotation. We present SegFlow, an end-to-end pipeline that integrates texture-patch curation, dataset synthesis, model training, experiment tracking, and inference for texture-centric segmentation.

SegFlow defines classes through curated texture patches and generates synthetic training composites and label masks using parameterised, procedural mask generation. This supports rapid model bootstrapping for initial training, reduces the amount of dense pixel-wise annotation required on real imagery, and helps keep label definitions consistent via versioned datasets and repeatable train/validation/test splits in a portable project structure.
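
The procedural mask generation described can be sketched as follows. This is an invented toy generator, not SegFlow's implementation; the class names, patch statistics, and shape parameters are illustrative only:

```python
import numpy as np

def synthesize(patches, size=64, n_shapes=6, seed=0):
    """Build one synthetic training pair (composite image, label mask).

    `patches` maps class id -> small 2D texture patch. Random rectangles of
    non-background classes are painted into the mask; the composite is then
    filled per pixel by tiling the matching class patch.
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros((size, size), dtype=np.int64)      # class 0 = background
    for _ in range(n_shapes):
        cls = int(rng.integers(1, len(patches)))       # a non-background class
        r, c = rng.integers(0, size - 8, size=2)
        h, w = rng.integers(8, size // 2, size=2)
        mask[r:r + h, c:c + w] = cls
    rr, cc = np.indices(mask.shape)
    comp = np.zeros((size, size))
    for cls, patch in patches.items():                 # tile each class texture
        ph, pw = patch.shape
        comp = np.where(mask == cls, patch[rr % ph, cc % pw], comp)
    return comp, mask

rng = np.random.default_rng(42)
patches = {0: rng.normal(0.0, 0.1, (8, 8)),    # background texture
           1: rng.normal(1.0, 0.1, (8, 8)),    # e.g. "chalk" stand-in
           2: rng.normal(-1.0, 0.1, (8, 8))}   # e.g. "glacial till" stand-in
comp, mask = synthesize(patches, size=64)
```

Every composite comes with a pixel-exact label mask for free, which is what removes the need for dense manual annotation during bootstrapping.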

For model development, SegFlow includes a PyTorch training interface centred on a configurable U-Net, with various segmentation losses and metrics. Training and inference are organised as scripted, job-based runs that capture data and model provenance in standardised run reports. For assisted segmentation, SegFlow combines the texture-focused U-Net with a prompt-based segmenter (Segment Anything Model, SAM) driven by sparse prompts (points or boxes). In our use case, SAM is helpful for object-like structures, whereas the U-Net is better suited to extended texture regions where prompting can be less stable. Outputs can be refined interactively, and corrected masks can be added back for iterative fine-tuning on real imagery.

We demonstrate the workflow on UAV imagery for geological outcrop mapping (e.g. chalk, glacial till, vegetation) and discuss how provenance tracking, label consistency, and hybrid assistance support reproducible iteration in Earth-science segmentation projects. SegFlow will be made available under GPLv3.

How to cite: Torizin, J. and Schüßler, N.: SegFlow: an end-to-end workflow for texture-centric image segmentation, from texture-patch curation to hybrid assistance, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-19144, https://doi.org/10.5194/egusphere-egu26-19144, 2026.

11:05–11:07
|
PICO1b.11
|
EGU26-14696
|
On-site presentation
Karsten Rink, Özgür Ozan Sen, Felix Raith, Fiorenza Deon, Nadine Haaf, Marcel Horovitz, Stefan Lüth, Edinsson Munoz, Bastian Rudolph, Christoph Schüth, Ingo Sass, Thomas Kohl, and Olaf Kolditz

GeoLaB is an underground research laboratory (URL) currently in the planning stage, focussing on deep geothermal energy production in crystalline rock. Accompanying the planning, a virtual geographic environment is being built and updated with datasets as they become available. The focus of this data integration process is currently on the multiple exploration campaigns that ensure that the planned site in southern Hesse, Germany, is suitable for building the URL and conducting experiments. Over the past two years, several campaigns have been funded to acquire detailed seismic, geophysical, magnetic, and hydrological information in the area around the Tromm mountain ridge in the German Odenwald region. In addition, two exploration wells with a depth of 500 m have been drilled to gain knowledge about the structure of the crystalline rock and to ensure the selected site is suitable for the construction of an underground lab.
3D representations of acquired datasets have been created and are visualised in a unified geographic context in combination with datasets provided by state offices, such as fracture networks, topographic maps, buildings or protection areas, as well as geological information to gain new and detailed insights about geotechnical and hydrogeological conditions in this region. In addition, a hydrogeological simulation already provides information on groundwater and saturation and a structural model is currently set up for running coupled THM simulations. Our framework is based on VTK, with workflows for data processing, conversion, modelling and visualisation developed within the OpenGeoSys community. With over 500 datasets already gathered in the scope of the project, data management is handled by KADI (Karlsruhe Data Infrastructure), an open-source solution developed at KIT.
This contribution focusses on the combined 3D visualisation of campaign data acquired during the site selection process and aims primarily at planning and stakeholder information. As the project progresses, this will be expanded into a functional digital twin of the URL and all experiments as well as the surrounding area.

References:
Bremer, J., Kohl, T., Sass, I., Kolditz, O., Rudolph, B., Rühaak, W., Köbe, W., Dehmer, D., Schamp, J., Grimmer, J.C., Scheuvens, D., Schüth, C., Deon, F., Lüth, S., Haaf, N., Hoffert, U., Milsch, H., Giese, R., Zimmermann, G., Könitz, D., Rink, K., Şen, Ö.O., Goldstein, S., Jahn, M.W., Steinhülb, J., Bauer, F., Selzer, M., Schätzler, K. (2025):
GeoLaB annual report 2024. GeoLab, Karlsruhe, 126 pp. 10.5445/IR/1000184950

Kohl, T., Sass, I., Kolditz, O., Bremer, J., Rudolph, B., Schill, E. (2023):
The Large-Scale Helmholtz Research Infrastructure GeoLaB. Proc. of 48th Workshop on Geothermal Reservoir Engineering, Stanford, California. SGP-TR-224

How to cite: Rink, K., Sen, Ö. O., Raith, F., Deon, F., Haaf, N., Horovitz, M., Lüth, S., Munoz, E., Rudolph, B., Schüth, C., Sass, I., Kohl, T., and Kolditz, O.: Visualisation of 3D geoscientific campaign data for the GeoLaB, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14696, https://doi.org/10.5194/egusphere-egu26-14696, 2026.

11:07–12:30
Lunch break
Chairpersons: Kostas Leptokaropoulos, Monika Staszek, Tobias Kerzenmacher
16:15–16:25
|
PICO1b.1
|
EGU26-3618
|
ECS
|
solicited
|
On-site presentation
Olivia L. Walbert, Frederik J. Simons, Arthur P. Guillaumin, and Sofia C. Olhede

We have developed theory, algorithmic tools, and two software suites (written in MATLAB and Python) that are openly available for use by the broad geosciences community for the statistical characterization of spatial datasets as finite, discrete random fields. Our software implements robust statistical methods that we have formulated for the simulation and estimation of stationary, isotropic, random fields on a potentially only partially observed grid within the Matérn class of parametric covariance functions. Parametric covariance models characterize the second-order structure of random fields by quantifying their shape through parameters for the amplitude, smoothness, and correlation length. Our tools allow for the analytical calculation of parameter uncertainty for modeled random fields, which depends upon the parametric model and the sampling grid, agnostic of the data itself, allowing for the exploration of experimental design. Our software includes a plethora of visualization tools for studying spatial random fields and their sampling grids, including for interrogating the fit of a maximum-likelihood model (and its assumptions) to observed data. Our methodology is readily applicable by scientists across a broad range of disciplines who work with (geo)spatial, (ir)regularly gridded datasets.

We will present a workflow of our software to demonstrate through visualization the simulation, estimation, and analysis of spatial random fields. A typical modeling procedure for geoscientific applications involves spatial gridded data taken to be stationary, isotropic random fields under the null hypothesis. A single inversion routine estimates the Matérn covariance parameters by optimizing the spectral-domain debiased Whittle likelihood, which involves the comparison between the modified periodogram and the parametric spectral density blurred by the effects of the observation window. We interpret the quality of our estimate (1) by simulating additional realizations through a simulation routine that includes a circulant embedding approach, (2) by evaluating the goodness-of-fit of the model and its assumptions through multiple graphical- and test-statistic-based examinations of the model residuals, and (3) by quantifying parameter uncertainty by calculating their covariance from first principles, for which we have designed different implementations depending on the available hardware (prioritizing memory or speed). We provide documentation for multiple well-studied simulation, inversion, and analysis options with default functionality, version control, and extensive demos designed to familiarize users with not only the implementation of our tools, but also the underlying theory and its implications for their data. We share select case studies using real data that we hope will illuminate and inspire future applications, and provide a guide to our software.
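
The core comparison in a Whittle likelihood, the periodogram against a parametric spectral density, can be sketched in NumPy. Note the hedges: this is the plain (not debiased) Whittle likelihood, the Matérn spectral density is written up to its standard normalizing constant, and the field is a white-noise stand-in; the packages above implement the debiased version with proper window blurring:

```python
import numpy as np

def matern_sdf(omega2, sigma2, nu, rho, d=2):
    """Matérn spectral density shape (normalizing constant omitted) as a
    function of squared frequency magnitude `omega2`."""
    return sigma2 / (1.0 / rho**2 + omega2) ** (nu + d / 2)

def whittle_terms(field, sigma2, nu, rho):
    """Per-frequency Whittle log-likelihood terms, -(log f + I / f)."""
    n0, n1 = field.shape
    # Periodogram of the demeaned field
    I = np.abs(np.fft.fft2(field - field.mean())) ** 2 / (n0 * n1)
    w0 = 2 * np.pi * np.fft.fftfreq(n0)
    w1 = 2 * np.pi * np.fft.fftfreq(n1)
    omega2 = w0[:, None] ** 2 + w1[None, :] ** 2
    f = matern_sdf(omega2, sigma2, nu, rho)
    keep = omega2 > 0                    # drop the zero frequency
    return -(np.log(f[keep]) + I[keep] / f[keep])

rng = np.random.default_rng(3)
field = rng.normal(size=(32, 32))        # white-noise stand-in for a sample field
ll = whittle_terms(field, sigma2=1.0, nu=1.0, rho=2.0).sum()
```

An optimizer would maximize `ll` over (sigma2, nu, rho); the debiased variant replaces `f` by its expectation under the observation window before forming the same per-frequency terms.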

Our open-source software is available on GitHub, and includes the MATLAB repositories github.com/csdms-contrib/slepian_juliet and github.com/csdms-contrib/slepian_lima, and the DSWL Python package github.com/arthurBarthe/debiased-spatial-whittle, which is in revision with the Journal of Open Source Software.

How to cite: Walbert, O. L., Simons, F. J., Guillaumin, A. P., and Olhede, S. C.: Robust Software for the Modeling of Spatial Random Fields across Geoscience Disciplines, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3618, https://doi.org/10.5194/egusphere-egu26-3618, 2026.

16:25–16:27
|
PICO1b.2
|
EGU26-6642
|
ECS
|
On-site presentation
Marta Alerany Solé, Kai Keller, Chihiro Kodama, Masuo Nakano, Tomoe Nasuno, Daisuke Takasuka, and Mario Acosta

As climate models advance toward higher resolutions, they become increasingly capable of resolving key Earth system processes, which in turn raises the need for robust and quantitative evaluation methods. In response to this challenge, we present pyhanami, an open-source Python package developed within the HANAMI project to assess the replicability and scientific skill of Earth System Models (ESMs) using statistical testing and objective, scalar-based metrics. In addition, to facilitate the practical application of these evaluations, pyhanami features a structured data interface that efficiently loads and inspects compatible model outputs.

An ESM is considered replicable if the same experiment run on different computing environments or with different compilers produces identical results, i.e., results representing the same climate. This ensures that differences between simulations reflect only the intended scientific changes in the model setup. Because bit-for-bit replicability is often unattainable across environments due to the chaotic nature of climate models, our practical goal is to achieve statistical indistinguishability. Building on existing methodologies, pyhanami provides an ensemble-based replicability test that combines multiple statistical tests and metrics to determine whether two simulated ensembles are statistically indistinguishable, as described in Keller et al. (2025; https://doi.org/10.5194/gmd-18-10221-2025). To the best of our knowledge, automated and standardized replicability assessment is not currently supported in model evaluation tools, despite its importance for climate model development, validation, intercomparison, and porting.
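
The flavour of such an ensemble-based test can be sketched with a single two-sample statistic and a permutation p-value. This is a simplification of the published methodology, which combines multiple tests and metrics; the ensemble values below are synthetic:

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov–Smirnov statistic: max gap between empirical CDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])        # evaluating at sample points suffices
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

def permutation_pvalue(a, b, n_perm=500, seed=0):
    """p-value for H0 'same distribution' by permuting ensemble membership."""
    rng = np.random.default_rng(seed)
    observed = ks_statistic(a, b)
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if ks_statistic(pooled[:len(a)], pooled[len(a):]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

rng = np.random.default_rng(7)
ens_a = rng.normal(15.0, 1.0, size=50)   # e.g. a diagnosed scalar, machine A
ens_b = rng.normal(15.0, 1.0, size=50)   # same model ported to machine B
p = permutation_pvalue(ens_a, ens_b)
replicable = p > 0.05                    # fail to reject H0 at the 5% level
```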

Complementing replicability, the scientific skill of an ESM describes its ability to accurately reproduce observed features of the climate system, from regional patterns to large-scale teleconnections. Many existing tools to evaluate this skill rely on visualization-based diagnostics, which often require expert knowledge and can be biased by subjective interpretation. In contrast, scalar metrics and scores provide quantitative and comparable measures of scientific skill, which are essential for interpreting climate projections, guiding model development, and model intercomparison. However, diagnostics for physical processes that require km-scale, high-resolution global climate models to be properly resolved remain underrepresented in state-of-the-art diagnostic suites. Although several metrics have been proposed for such small-scale processes, many lack standardized and widely available implementations. As high-resolution climate simulations become more common, the demand for objective diagnostics to support model tuning and improvement is increasing. pyhanami addresses this need by providing a growing set of scalar scientific skill metrics that enable quantitative and easily interpretable evaluation of phenomena such as Tropical Cyclones and the Tropical Intraseasonal Oscillation (ISO), including the Madden-Julian Oscillation and the Boreal Summer ISO modes. 

By integrating replicability testing, scientific skill metrics, and visualization tools into a single, self-contained package with a generic data interface, pyhanami streamlines evaluation workflows and supports the development of reliable climate projections, advancing the quality and reproducibility of geosciences research.

How to cite: Alerany Solé, M., Keller, K., Kodama, C., Nakano, M., Nasuno, T., Takasuka, D., and Acosta, M.: Replicability testing and scientific skill quantification in Earth System Models with pyhanami, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6642, https://doi.org/10.5194/egusphere-egu26-6642, 2026.

16:27–16:29
|
PICO1b.3
|
EGU26-21905
|
On-site presentation
Niall Robinson and the NVIDIA Earth 2

Earth2Studio is an open-source Python toolkit that turns state-of-the-art AI weather and climate models into composable, reproducible workflows that researchers and operators can run and adapt on their own infrastructure. It targets a key bottleneck in AI-for-weather: the difficulty of moving from standalone model checkpoints to fully integrated forecasting systems that span data, models, uncertainty, and verification.

Earth2Studio provides a unified API for prognostic and diagnostic AI models, heterogeneous data sources, perturbation methods, metrics, and I/O backends, enabling users to assemble end-to-end inference pipelines with only a few lines of code. The model zoo includes leading global and regional AI forecast models such as Altas, StormScope, GraphCast, Pangu, Aurora, FourCastNet 3, CorrDiff, and more. Standardized data interfaces expose operational initial conditions and reanalyses (e.g. GFS, HRRR, ERA5, IFS) through a shared Xarray-based vocabulary and coordinate system.
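The real Earth2Studio interfaces are not reproduced in the abstract; the following is a minimal, hypothetical sketch of the composition pattern it describes — a data source, a prognostic model, and an IO backend assembled into a forecast loop. All class and function names here are invented stand-ins, not Earth2Studio's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class ConstantDataSource:
    """Stand-in for an initial-condition source (e.g. a reanalysis)."""
    value: float
    def fetch(self, time):
        return {"t2m": self.value, "time": time}

@dataclass
class PersistenceModel:
    """Stand-in prognostic model: the next state equals the current state,
    advanced by one 6-hourly step."""
    def step(self, state):
        return {**state, "time": state["time"] + 6}

@dataclass
class MemoryIO:
    """Stand-in IO backend that collects each output state in memory."""
    states: list = field(default_factory=list)
    def write(self, state):
        self.states.append(state)

def run_deterministic(source, model, io, start_time, nsteps):
    """Compose the three components into a deterministic forecast loop."""
    state = source.fetch(start_time)
    for _ in range(nsteps):
        state = model.step(state)
        io.write(state)
    return io

io = run_deterministic(ConstantDataSource(288.0), PersistenceModel(), MemoryIO(), 0, 4)
```

The design point is that data sources, models, and backends are interchangeable behind small shared interfaces, which is what makes a few-lines-of-code pipeline possible.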

Building on the broader Earth-2 initiative, Earth2Studio is designed to cover the entire weather forecasting value chain, including AI data assimilation for initial conditions, global medium-range prediction, generative downscaling, and kilometer-scale severe weather nowcasting. Ensemble-ready perturbation schemes and built-in statistics (RMSE, ACC, CRPS, rank histograms, spread–skill diagnostics) allow consistent quantification of forecast skill and uncertainty across models, lead times, and regions, supporting methodologically robust intercomparison studies.

Released as OSS, Earth2Studio emphasizes openness and sovereignty: all core components are optimised to run on NVIDIA local or cloud platforms, enabling national meteorological services, research institutions, and industry users to integrate proprietary data and maintain ownership over operational chains.

Presented here are the design principles of Earth2Studio, illustrative exemplar workflows, and a discussion of how this shared software infrastructure can help the EGU community accelerate AI weather research and bridge the gap between experimental models and operationally relevant forecasting systems.

How to cite: Robinson, N. and the NVIDIA Earth 2: Earth2Studio: An Open Inference Toolkit for AI Weather Forecasting, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21905, https://doi.org/10.5194/egusphere-egu26-21905, 2026.

16:29–16:31
|
PICO1b.4
|
EGU26-12280
|
On-site presentation
Victoria Agudetse, Núria Pérez-Zanón, Ariadna Batalla, Carlos Delgado-Torres, Alberto Bojaly, Pierre-Antoine Bretonnière, Javier Corvillo, Eren Duzenli, Theertha Kariyathan, Aleksander Lacima-Nadolnik, Alba Llabrés-Brustenga, Bruno de Paula Kinoshita, Paloma Trascasa-Castro, Verónica Torralba, Albert Soret, and Francisco Javier Doblas-Reyes

Climate services leverage state-of-the-art knowledge, data and tools from the climate sciences to tailor services to user needs. To do so, climate service scientists ensure scientifically robust and traceable analyses to address specific, co-produced applications in sectors such as energy, agriculture and health. However, the diversity of post-processing methodologies applied to climate datasets, together with the variety of data sources (e.g. reanalyses, in situ observations, and climate predictions across different forecast horizons) and heterogeneous user requirements, makes the development and long-term maintenance of the required software a major challenge for the timely delivery of climate products that fulfill those user needs.

The SUbseasoNal to decadal climate forecast post-processing and asSEssmenT (SUNSET) is a software suite developed by the Earth Sciences Department at the Barcelona Supercomputing Center, building on extensive expertise in state-of-the-art climate science, climate service co-production and software development for HPC environments. SUNSET integrates in-house R-based software packages, including CSTools, CSDownscale and s2dv, which implement established methodologies for climate forecast post-processing, such as bias adjustment, statistical downscaling, verification and visualisation. The suite addresses key challenges commonly faced by climate service scientists, including the management of multiple forecast systems and reference datasets, the alignment of temporal dimensions (e.g. initialisation dates and forecast lead times with respect to reference datasets), and the consistent handling of hindcasts and observations to enable robust and comparable verification through cross-validation approaches. When requested, SUNSET uses the Autosubmit workflow manager to parallelise and orchestrate multiple workflows, ensuring efficient use of computational resources and the timely generation of climate products.

SUNSET currently delivers near real-time operational climate products, including the probability of the most likely tercile, the probability above or below specific percentiles, and absolute thresholds for essential climate variables. These products can also be tailored to sector-specific indicators, such as the growing degree days required by the agriculture sector. SUNSET verification workflows support the evaluation of the next generation of Copernicus Climate Change Service seasonal forecast systems within the CERISE project by providing comprehensive skill metrics and scorecard summaries. Together, these capabilities ensure successful research and service delivery in several projects, including ASPECT, BigPrediData, and BOREAS.
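As a hedged illustration of the tercile-probability product mentioned above (this is not SUNSET's implementation, and it uses a deliberately simple sorted-index percentile estimator): the category probabilities are the fraction of ensemble members falling below, within, and above the terciles of the hindcast climatology.

```python
def tercile_probabilities(climatology, ensemble):
    """Probability of each tercile category (below / normal / above) as the
    fraction of ensemble members per category, with category boundaries
    taken from the hindcast climatology."""
    clim = sorted(climatology)
    lower = clim[len(clim) // 3]         # ~33rd percentile
    upper = clim[2 * len(clim) // 3]     # ~66th percentile
    below = sum(1 for x in ensemble if x < lower)
    above = sum(1 for x in ensemble if x >= upper)
    n = len(ensemble)
    return below / n, (n - below - above) / n, above / n
```

A production system would additionally handle cross-validation of the climatology and ties at the category boundaries, which this sketch omits.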

SUNSET is open-source and hosted in a public GitLab repository, following a structured development strategy with regular releases, continuous integration, and a dedicated conda environment to ensure reproducibility and long-term sustainability. Ongoing and future developments focus on extending methodological capabilities, improving usability, and optimising memory usage and workflow multi-node parallelisation for efficient execution on HPC systems.

How to cite: Agudetse, V., Pérez-Zanón, N., Batalla, A., Delgado-Torres, C., Bojaly, A., Bretonnière, P.-A., Corvillo, J., Duzenli, E., Kariyathan, T., Lacima-Nadolnik, A., Llabrés-Brustenga, A., de Paula Kinoshita, B., Trascasa-Castro, P., Torralba, V., Soret, A., and Doblas-Reyes, F. J.: SUNSET: Addressing key challenges for the successful provision of climate services, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-12280, https://doi.org/10.5194/egusphere-egu26-12280, 2026.

16:31–16:33
|
PICO1b.5
|
EGU26-21357
|
On-site presentation
Ashish Sharma, Suraj Shah, Yi Liu, and Seokhyeon Kim

Rain gauges provide accurate point-scale precipitation measurements but are often sparsely distributed, particularly in data-scarce and complex-terrain regions. In contrast, satellite and reanalysis precipitation products offer continuous spatial coverage, yet they are affected by retrieval uncertainty and systematic bias. Reliable precipitation estimation therefore requires the integration of gauge observations with gridded satellite products. Existing merging approaches, however, are frequently limited in their ability to directly reconcile point-based gauge measurements with gridded satellite fields and to flexibly incorporate multiple datasets within a single, coherent workflow. We present RainMerge, an open-source, web-based framework that integrates gauge observations with multiple satellite precipitation products using pixel-level, uncertainty-aware merging. The platform automates data acquisition, preprocessing, uncertainty characterization, and merging within a unified computational environment. Through an intuitive graphical interface, RainMerge abstracts technical and geospatial complexity, enabling users without programming expertise to generate research-grade precipitation estimates. By bridging gauge-dependent and gauge-independent merging strategies while improving accessibility through user-oriented software design, RainMerge supports reproducible precipitation data fusion and broadens the practical use of multi-source precipitation merging in hydrological applications.
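The abstract does not state RainMerge's merging formula; one common uncertainty-aware choice, shown here purely as an assumed illustration, is inverse-variance weighting at each pixel, where lower-uncertainty sources receive proportionally more weight:

```python
def merge_pixel(estimates):
    """Uncertainty-aware merge of several precipitation estimates at one
    pixel via inverse-variance weighting. `estimates` is a list of
    (value, variance) pairs, e.g. one gauge-adjusted and one satellite
    estimate with their characterised error variances."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    return sum(w * v for w, (v, _) in zip(weights, estimates)) / total
```

For example, merging a gauge-based estimate of 10 mm (variance 1) with a satellite estimate of 20 mm (variance 4) yields 12 mm, pulled toward the more certain source.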

How to cite: Sharma, A., Shah, S., Liu, Y., and Kim, S.: RainMerge: Open-Source Software for Unified Merging of Satellite and Gauge Precipitation Data, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21357, https://doi.org/10.5194/egusphere-egu26-21357, 2026.

16:33–16:35
|
PICO1b.6
|
EGU26-19566
|
ECS
|
On-site presentation
Vikas K Patel, Michelle Cain, and Neil Harris

Climate decision-making increasingly requires tools that can translate complex climate science into easy-to-use information. We have developed two complementary open-source interactive dashboards designed to support climate understanding across metrics-based assessment and analysis of temperature trajectories under a range of scenarios. The Climate Metrics Decision Dashboard (CMDD) provides a comprehensive yet simple framework for exploring a wide range of climate metrics spanning agriculture, aviation, precipitation, economy, and sea level rise. CMDD is designed to support informed interpretation of diverse metrics without requiring deep domain expertise. Acting as a guide to the landscape of climate metrics, it helps researchers, policymakers, and practitioners identify which metric best fits their goals, whether tracking emissions, comparing warming impacts, or assessing progress toward sustainability targets, and lets them learn, compare, and choose in one place rather than getting lost in technical jargon. The dashboard includes thorough descriptions of metrics, guided workflows, recommendations, and accounting of both short-lived and long-lived climate pollutants, enabling users to assess their implications for climate-relevant outcomes.

Taking a similar approach, the FaIR Climate Explorer offers an accessible interface to the FaIR2.2 simple climate model, allowing users to simulate global temperature responses under different Shared Socioeconomic Pathway (SSP) scenarios. By abstracting model complexity behind an intuitive dashboard, the tool enables users with no prior familiarity with FaIR to explore scenario-driven temperature outcomes. Together, these dashboards demonstrate how interactive, user-centric tools can lower barriers to climate analysis while supporting both metrics-based evaluation and scenario-driven temperature exploration. They highlight the potential of dashboard-based approaches to enhance transparency, usability, and decision relevance in climate science and policy contexts.
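As an assumed illustration of the kind of simple climate model the FaIR Climate Explorer wraps (the two-timescale impulse-response structure and the parameter values below are illustrative only, not FaIR2.2's actual formulation or calibration):

```python
def temperature_response(forcing, q=(0.33, 0.41), d=(4.0, 250.0)):
    """Global-mean temperature response to a yearly forcing series (W/m^2)
    from a two-timescale impulse-response model: a fast component (mixed
    layer) and a slow component (deep ocean), stepped with forward Euler
    at dt = 1 yr. q are response amplitudes (K per W/m^2), d are
    timescales (years); equilibrium warming is (q[0] + q[1]) * F."""
    t_fast = t_slow = 0.0
    series = []
    for f in forcing:
        t_fast += (q[0] * f - t_fast) / d[0]
        t_slow += (q[1] * f - t_slow) / d[1]
        series.append(t_fast + t_slow)
    return series

# Constant 2xCO2-like forcing: temperature relaxes toward ~2.74 K here.
warming = temperature_response([3.7] * 3000)
```

Exposing such a model behind sliders for scenario forcing is essentially what a dashboard like the FaIR Climate Explorer does, with the real model adding carbon-cycle and forcing modules on top.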

How to cite: Patel, V. K., Cain, M., and Harris, N.: User-Centric Climate Dashboards for Metrics Evaluation and Temperature Scenario Exploration, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-19566, https://doi.org/10.5194/egusphere-egu26-19566, 2026.

16:35–16:37
|
PICO1b.7
|
EGU26-11926
|
On-site presentation
Michal Kollár, Martin Ambroz, Aneta A. Ožvat, Karol Mikula, Mária Šibíková, and Jozef Šibík

This contribution presents the software tools provided by NaturaSat [1], a robust and user-friendly application for the exploration and monitoring of Natura 2000 habitats using multi-source Earth observation data. The software enables users to visualize and jointly analyze Sentinel-2 and Sentinel-1 imagery, orthophotos, and UAV data within a single working environment.

We showcase the main functionalities of the software, including data import and management, interactive visualization, semi-automatic and automatic segmentation of input data, and spatio-temporal comparison of habitat boundaries. These tools support habitat mapping and allow users to track changes in habitat extent and ecological condition. In addition to segmentation and monitoring, the software also includes tools for classification, transformation of historical maps, and basic hydrological modeling.

The contribution focuses on the practical use of NaturaSat as a research and operational tool for botanists and environmental scientists. A case study illustrates typical user workflows, showing how the software combines different tools in an accessible way to support the analysis of habitat structure and change.

[1] NaturaSat, http://www.algoritmysk.eu/en/naturasat_en/

How to cite: Kollár, M., Ambroz, M., Ožvat, A. A., Mikula, K., Šibíková, M., and Šibík, J.: NaturaSat: Software tools for exploration and monitoring of Natura 2000 habitats using multi-source Earth observation data, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-11926, https://doi.org/10.5194/egusphere-egu26-11926, 2026.

16:37–16:39
|
PICO1b.8
|
EGU26-15074
|
ECS
|
On-site presentation
Alexis Hrysiewicz and Eoghan Holohan

In geosciences, geodesy, and geotechnical engineering, Interferometric Synthetic Aperture Radar (InSAR) has demonstrated its ability to estimate, at millimetre scale, displacements of the Earth's ground surface. Although open-source SAR/InSAR software packages are robust, they are often not user-friendly, as users must have in-depth knowledge of SAR/InSAR methods as well as computer skills. In addition, multiple software packages and scripts are often required for a complete workflow, which can make it difficult to adhere to the FAIR principles and to perform efficient data manipulation and analysis. EZ-InSAR is a versatile, user-friendly, and open-source environment for SAR/InSAR computations that is now available in Python. Bridging several renowned open-source SAR/InSAR processors, EZ-InSAR now includes all the tools needed to perform complete SAR/InSAR time-series processing in a single environment. For example, automatic SAR imagery downloading, options for different time-series approaches, and tools for data visualisation and verification are provided. The new structure of EZ-InSAR, which is built from mandatory and optional EZ-InSAR Python modules, has been designed to facilitate community-led bug fixes, updates, testing, and rapid development. Users can now perform complex SAR/InSAR workflows in EZ-InSAR by implementing the toolkit in their Python scripts, by using the EZ-InSAR command line interface, or by using EZ-InSAR’s evolved Graphical User Interface. All processing parameters are managed directly in the EZ-InSAR environment to ensure compliance with the FAIR principles. The toolkit is also supported by comprehensive documentation. During the PICO session, we will show the use of EZ-InSAR for a complete computation of ground surface displacements at Campi Flegrei Caldera, Italy.
This will highlight not only the efficiency of EZ-InSAR for monitoring of geohazards, but also why it is suitable for both new users of satellite Earth Observation data and expert users in SAR/InSAR remote sensing.

How to cite: Hrysiewicz, A. and Holohan, E.: EZ-InSAR-3: an open-source InSAR environment in Python, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-15074, https://doi.org/10.5194/egusphere-egu26-15074, 2026.

16:39–16:41
|
PICO1b.9
|
EGU26-18183
|
ECS
|
On-site presentation
Murat Şahin, Valentina Macchiarulo, Hao Kuai, Pantelis Karamitopoulos, and Giorgia Giardina

In the geosciences, the growing availability of multi-temporal satellite products has created new opportunities for monitoring the condition of the built environment. However, transforming large volumes of time series data into actionable information for decision-making remains a major challenge. This difficulty is particularly acute for infrastructure managers who must combine remotely sensed observations with geospatial network inventories to evaluate the performance and deterioration of existing assets. To address this gap, we developed the SafeBridge software package. 
SafeBridge supports the derivation of bridge damage indicators by processing Multi-Temporal Interferometric Synthetic Aperture Radar (MT-InSAR) time series through geospatial operations tailored to individual bridge assets within a network. The package offers a fast and efficient framework for computing structural health indicators, featuring workflows that can run on either high-performance computing (HPC) systems or standard, readily available hardware when HPC resources are unavailable.
To lower the barriers for new users and facilitate communication of reproducible methods, SafeBridge includes documentation, an example-driven tutorial, synthetic demonstration datasets, and automated report generation. We describe in detail an end-to-end workflow that incorporates infrastructure geometries and MT-InSAR time series, performs topology-aware geospatial processing, and produces comprehensible damage indicators and summary outputs appropriate for screening, prioritisation, and downstream integration on transportation assets and networks. 
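As a hypothetical sketch of one simple screening indicator of the kind described (not SafeBridge's actual method), a least-squares deformation velocity can be fitted to an asset's InSAR displacement time series and used to rank assets by deformation rate:

```python
def displacement_velocity(times, displacements):
    """Least-squares linear velocity (mm/yr) from an InSAR displacement
    time series: a scalar indicator for screening and prioritising assets.
    `times` in years, `displacements` in mm (line-of-sight)."""
    n = len(times)
    t_mean = sum(times) / n
    d_mean = sum(displacements) / n
    num = sum((t - t_mean) * (x - d_mean) for t, x in zip(times, displacements))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

times = [i / 12 for i in range(24)]        # two years of monthly samples
disp = [-2.5 * t for t in times]           # synthetic steady subsidence
v = displacement_velocity(times, disp)     # recovered velocity in mm/yr
```

Real pipelines add noise-robust fitting and the topology-aware aggregation over bridge geometries that the abstract describes; this sketch covers only the per-series scalar.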
The SafeBridge package provides a practical route from research code to reusable software while maintaining scientific transparency and reproducibility. We contribute reusable, interoperable software building blocks for infrastructure-focused Earth Observation applications and highlight best practices for user-centric research software dissemination by making this tool available under open licenses with clear APIs and useful examples.

How to cite: Şahin, M., Macchiarulo, V., Kuai, H., Karamitopoulos, P., and Giardina, G.: SafeBridge: open-source software to translate InSAR time series into actionable damage indicators for infrastructure monitoring, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18183, https://doi.org/10.5194/egusphere-egu26-18183, 2026.

16:41–16:43
|
PICO1b.10
|
EGU26-7704
|
On-site presentation
David Ham, Connor Ward, Pablo Brubeck, Joshua Hope-Collins, and Leo Collins
Computer simulations of continuous processes described by partial differential equations are a bedrock of geoscientific simulation. Each simulation is a complex composition of equations, discretisations, solvers, and parameterisations. Realistic geoscientific simulation also depends on the integration of observed data, either as forcing functions or through data assimilation. The result of this complexity is that creating new models, or even extending existing ones, can often be exceptionally resource intensive, even for large and highly capable institutions.
 
Firedrake (https://www.firedrakeproject.org/) offers a radically different approach to model creation. Rather than coding the implementation of a model in low-level code in a compiled language, Firedrake users write the mathematical formulation of their model in high-level Python. The high-performance, parallel implementation of that code is then automatically generated and executed. Users have access to:
 
  • A huge range of finite element discretisations for any PDE they choose, including generalisations of the various variable staggerings that are typically used across the geosciences.
  • Programmable, composable solvers and preconditioners, including algebraic and geometric multigrid approaches, and physics-based preconditioners based on the characteristics of the system being solved.
  • Seamless coupling to external processes, including the ML frameworks JAX and PyTorch.
  • Fully automated adjoint computations: the adjoint to a Firedrake simulation is available with no additional coding required.
  • Integration with optimisation algorithms for data assimilation.
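To illustrate the kind of low-level assembly-and-solve code that Firedrake generates automatically from a high-level mathematical formulation, here is a hand-rolled 1D Poisson solver with linear finite elements (an illustrative contrast only; in Firedrake the equivalent problem is a few lines of UFL, and this sketch is not Firedrake code):

```python
import math

def solve_poisson_1d(f, n):
    """Solve -u'' = f on (0, 1) with u(0) = u(1) = 0 using n linear finite
    elements on a uniform mesh. Returns nodal values at x_i = i/n."""
    h = 1.0 / n
    m = n - 1                                     # number of interior nodes
    # The stiffness matrix is tridiagonal: 2/h on the diagonal, -1/h off it.
    sub = [-1.0 / h] * m
    diag = [2.0 / h] * m
    sup = [-1.0 / h] * m
    rhs = [h * f((i + 1) * h) for i in range(m)]  # nodal-quadrature load
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, m):
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * m
    u[-1] = rhs[-1] / diag[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (rhs[i] - sup[i] * u[i + 1]) / diag[i]
    return [0.0] + u + [0.0]                      # re-attach boundary values

# Manufactured solution: f = pi^2 sin(pi x) gives u = sin(pi x).
n = 100
uh = solve_poisson_1d(lambda x: math.pi ** 2 * math.sin(math.pi * x), n)
err = max(abs(ui - math.sin(math.pi * i / n)) for i, ui in enumerate(uh))
```

Every line of this — assembly, boundary conditions, the linear solver — is what Firedrake synthesises automatically, in parallel and for arbitrary discretisations, from the symbolic statement of the weak form.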
 
Firedrake already provides the basis for:
  • The GUSTO toolkit, used for dynamical core development research at the Met Office and University of Exeter (https://www.firedrakeproject.org/gusto/).
  • The Thetis coastal ocean model (Kärna et al. 2018)
  • G-ADOPT: The Geoscientific ADjoint Optimisation PlaTform for mantle convection and glacial isostatic adjustment from the Australian National University (Ghelichkhan et al 2024). 
as well as hundreds of bespoke simulations by users around the world.
 
This PICO will present the key features of Firedrake and illustrate the applications to which it is put.
 
References
 
Ghelichkhan, Sia, et al. "Automatic adjoint-based inversion schemes for geodynamics: reconstructing the evolution of Earth's mantle in space and time." Geoscientific Model Development 17.13 (2024): 5057-5086.
Kärnä, Tuomas, et al. "Thetis coastal ocean model: discontinuous Galerkin discretization for the three-dimensional hydrostatic equations." Geoscientific Model Development 11.11 (2018): 4359-4382.

 

How to cite: Ham, D., Ward, C., Brubeck, P., Hope-Collins, J., and Collins, L.: Firedrake - automated, differentiable building blocks for geoscientific simulation, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7704, https://doi.org/10.5194/egusphere-egu26-7704, 2026.

16:43–18:00