ITS1.11/ESSI1.10 | Digital Twins in Earth Systems: Bridging Data and Predictive Modelling for Resilient Futures
EDI
Convener: Romain Chassagne | Co-conveners: Lorenzo NavaECSECS, Monique Kuglitsch, Elena Xoplaki, Bertrand Le Saux, Florian Wellmann, Denise DegenECSECS
Orals
| Tue, 05 May, 08:30–12:25 (CEST)
 
Room D2
Posters on site
| Attendance Wed, 06 May, 08:30–10:15 (CEST) | Display Wed, 06 May, 08:30–12:30
 
Hall X4
Posters virtual
| Mon, 04 May, 14:09–15:45 (CEST)
 
vPoster spot A, Mon, 04 May, 16:15–18:00 (CEST)
 
vPoster Discussion
The development of digital twins of Earth systems, such as Destination Earth, is revolutionizing how we understand and manage our planet’s complex dynamics under a changing climate. These advanced simulations integrate diverse types and sources of data, providing a comprehensive view of Earth-climate dynamics and human-environment interactions. In particular, digital twins replicate a system’s behaviour, provide an up-to-date status of ongoing physical processes, and support informed decision-making. They enable predictive Earth observation, the exploration of "what if" scenarios, the simulation of hazard cascades, and the testing of various adaptation strategies.

This session will explore the role of digital twins in bridging observations and simulations to applications in impact sectors. There will be a special focus on uncertainty quantification, data assimilation, multi-source data streams, hybrid modelling, and decision support. We are particularly interested in studies that highlight the synergies between digital twin technology and other AI-driven tools, such as predictive analytics and machine learning, in improving operational outcomes. This session aims to foster cross-disciplinary dialogue on how these converging technologies can accelerate resilience to climate-related risks and natural hazards across a variety of impact sectors (energy, food, carbon storage, etc.), extending to economic and social components as well as policy considerations. It will act as a forum for researchers and practitioners to share their insights and recent developments in this rapidly evolving field.

Orals: Tue, 5 May, 08:30–12:25 | Room D2

The oral presentations are given in a hybrid format supported by a Zoom meeting featuring on-site and virtual presentations. The button to access the Zoom meeting appears just before the time block starts.
Chairpersons: Romain Chassagne, Bertrand Le Saux, Elena Xoplaki
08:30–08:40
|
EGU26-23011
|
Highlight
|
On-site presentation
Caterina Negulescu, Pierre Gehl, Samuel Auclair, Didier Bertil, Yoann Legendre, Romain Guidez, Hajatiana Ramambazafy, Franck Chan Thaw, Cecile Gracianne, Roser Hoste Colomer, Agathe Roulle, and Gilles Grandjean

Digital Twins (DTs) are increasingly used as integrative frameworks to combine data streams, numerical models and automated workflows for monitoring complex systems and supporting decision-making. In the field of seismic risk management, operational DTs must rely on fast, robust and reproducible modelling approaches, capable of assimilating real-time observations despite strong epistemic uncertainty. This contribution presents an operational earthquake DT implemented on the VIGIRISKS platform, and illustrated through two complementary rapid-response tools: SEISAid, dedicated to territorial-scale impact assessment, and B-Wave, focused on near real-time structural damage monitoring.

Rather than relying on detailed physics-based representations of subsurface processes, the proposed DT is built upon empirical ground-motion models and vulnerability models, which can be considered meta-models linking observed seismic signals to expected ground motion and damage. Real-time seismic data from regional and national monitoring networks are continuously ingested through a Pulsar-based streaming approach. Seismic intensity fields are generated using the USGS ShakeMap framework, which embeds data weighting and uncertainty propagation to combine ground-motion prediction equations, instrumental recordings, macroseismic observations, and site-effect information. These ShakeMap products are then encapsulated within the VIGIRISKS infrastructure, where they trigger automated impact assessment workflows.
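The data weighting the abstract refers to can be illustrated with a minimal inverse-variance fusion of two log ground-motion estimates at a site; the numbers below are invented for illustration, and the sketch deliberately omits the spatial correlation and bias terms the actual ShakeMap framework handles:

```python
import numpy as np

# Illustrative inverse-variance fusion of two ln(PGA) estimates at a site:
# a GMPE prediction and a nearby station recording (values are made up).
ln_pga = np.array([-2.3, -2.0])    # ln(PGA): GMPE prediction, recording
sigma = np.array([0.6, 0.2])       # their standard deviations

w = 1.0 / sigma**2                 # inverse-variance weights
ln_combined = np.sum(w * ln_pga) / np.sum(w)
sigma_combined = np.sqrt(1.0 / np.sum(w))
# The fused estimate sits close to the better-constrained recording, and
# its uncertainty is smaller than either input's.
```

The same weighting logic is what lets instrumental recordings dominate near stations while the GMPE prediction fills in elsewhere.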

At the territorial scale, SEISAid exploits ShakeMap outputs and empirically calibrated vulnerability models to estimate building damage and potential human losses within 15–30 minutes after earthquake detection. Calculations are performed using reproducible scientific codes hosted on VIGIRISKS, and results are automatically aggregated and disseminated to decision-makers through standardized notification reports. This workflow supports rapid situational awareness and early operational decision-making under uncertainty.

At the structural scale, B-Wave extends the DT by integrating recorded dynamic responses from instrumented buildings. Damage assessment relies on data-driven signal processing methods, such as continuous wavelet transform–based frequency identification, to detect changes in structural dynamic properties. These changes are empirically related to damage states aligned with European EMS-98 classes, enabling near real-time alerts on the condition of critical structures without requiring detailed mechanical models.

A key characteristic of the framework is its event-driven and iterative cycle: each new earthquake updates data, models and outputs, progressively enriching the DT. By embedding empirical modelling, uncertainty handling and updating (via ShakeMap), and automated decision support within a unified infrastructure, this work illustrates how DT concepts can be operationally implemented for natural risk applications, contributing methodological insights relevant to subsurface-related DT workflows focused on data integration and decision support. Although this contribution focuses on the event-driven DT cycle triggered by real earthquakes, the proposed framework also enables “what-if scenario” based impact assessments, illustrating the flexibility of the DT for both operational response and prospective risk analysis. 

How to cite: Negulescu, C., Gehl, P., Auclair, S., Bertil, D., Legendre, Y., Guidez, R., Ramambazafy, H., Chan Thaw, F., Gracianne, C., Hoste Colomer, R., Roulle, A., and Grandjean, G.: An Operational Earthquake Digital Twin Based on Empirical Ground-Motion Models and Period Estimation: Integration of SEISAID and B-Wave within the VIGIRISKS Platform, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-23011, https://doi.org/10.5194/egusphere-egu26-23011, 2026.

08:40–08:50
|
EGU26-20704
|
On-site presentation
Erlend Storrøsten, Brian Carlton, Valentina Magni, Naveen Ragu Ramalingam, Steven J. Gibbons, and Finn Løvholt

Recent advancements in the Digital Twin Component for Tsunamis, developed within the EU-funded DT-GEO project, are transforming rapid hazard assessment from static pre-computed databases to dynamic, data-informed workflows. In this presentation, a novel workflow for Probabilistic Tsunami Forecasting (PTF) due to earthquake-triggered landslides is presented through a site demonstrator for the Mediterranean Sea motivated by the 1908 Messina Strait earthquake and tsunami. A key innovation is the integration of earthquake-triggered submarine landslides and the application of AI-driven inundation emulators for rapid prediction linked to earthquake workflows and related shakemaps. In addition, we showcase possible use of the workflow in new geophysical settings for a submarine slope in Southwest India. These synergies between digital twin architectures and machine learning provide a robust framework for anticipatory action and disaster risk management at both regional and global scales.

This work was partially funded by the EU DT-GEO project (A Digital Twin for GEOphysical extremes, https://dtgeo.eu/) through the European Union’s Horizon Europe research and innovation programme under grant agreement nº 101058129 and PCTWIN project, jointly funded by the Natural Environment Research Council (NERC), UKRI and the Ministry of Earth Sciences (MoES), Government of India (Grant: NE/Z503496/1). 

How to cite: Storrøsten, E., Carlton, B., Magni, V., Ragu Ramalingam, N., Gibbons, S. J., and Løvholt, F.: Multi-Source Tsunami Hazard Assessment for Digital Twin Workflows, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-20704, https://doi.org/10.5194/egusphere-egu26-20704, 2026.

08:50–09:00
|
EGU26-17851
|
On-site presentation
Anil Yildiz and Julia Kowalski

Informed decision-making for managing risks from climate-driven hazards, whether for emergency response, the design of preventive interventions, or long-term policymaking, requires either short-term, scenario-based assessments or long-term assessments under uncertainty. Data requirements, spatial and temporal scales, the observations needed, and the modelling techniques employed change drastically depending on the scope of the risk assessment. Digital twins (DTs) applied to natural hazards offer a great opportunity for significant improvements in disaster management. What makes DTs possible today is a set of technological advancements such as embedded sensors, cloud computing, edge computing, and the IoT. However, DTs also require a digital representation of the physical counterpart, mostly in the form of a computational or data-driven model, to be able to predict future states. The use of complex computational models in DTs is generally hindered by their high computational cost and long runtimes. A pathway to involving such models in (near) real-time decisions in DTs for geohazards is surrogate modelling. Surrogate models are statistically valid representations of the computational model, into which physical laws and constraints can be embedded. Physics-compliant, physics-based or physics-informed surrogate models can equip DTs with i) instantaneous predictions, ii) the ability to conduct uncertainty quantification and sensitivity analysis to ensure reliability, iii) online updating of model parameters based on advanced calibration routines, and iv) increased trust due to explainability grounded in physical laws. We present herein surrogate modelling as an enabler to replace computational models predicting the runout behaviour of geophysical flows. We investigate their applicability in uncertainty quantification, global sensitivity analysis, Bayesian parameter estimation, Bayesian model selection, and optimal experimental design.
We demonstrate our workflow with two open-source computational models, r.avaflow 4.0 and synxflow, with synthetic and real-world case studies.
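The surrogate-then-calibrate chain described above can be sketched end to end with a toy problem; the exponential friction-runout relation below is an invented stand-in for an expensive simulator (nothing here comes from r.avaflow or synxflow), and the observation is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in for an expensive runout simulation: runout distance (m)
# as a function of the Coulomb friction coefficient mu.
def runout_model(mu):
    return 1200.0 * np.exp(-3.0 * mu)

# 1) Fit a cheap surrogate from a handful of "simulator" runs.
mu_train = np.linspace(0.05, 0.6, 12)
coeffs = np.polyfit(mu_train, np.log(runout_model(mu_train)), deg=2)
surrogate = lambda mu: np.exp(np.polyval(coeffs, mu))

# 2) Uncertainty quantification: push a prior on mu through the surrogate
#    at negligible cost per sample.
mu_prior = rng.uniform(0.1, 0.5, 20_000)
runout_samples = surrogate(mu_prior)

# 3) Bayesian parameter estimation on a grid, given one noisy observation
#    of the runout distance.
obs, sigma = 450.0, 40.0
mu_grid = np.linspace(0.1, 0.5, 401)
log_like = -0.5 * ((surrogate(mu_grid) - obs) / sigma) ** 2
post = np.exp(log_like - log_like.max())
post /= post.sum()
mu_map = mu_grid[np.argmax(post)]                 # maximum a posteriori value
```

Every posterior evaluation here costs one surrogate call instead of one simulation, which is what makes the Bayesian loop feasible in (near) real time.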

How to cite: Yildiz, A. and Kowalski, J.: Surrogate modelling as enabling methodology for predictive Digital Twins in geohazards, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-17851, https://doi.org/10.5194/egusphere-egu26-17851, 2026.

09:00–09:10
|
EGU26-1580
|
ECS
|
On-site presentation
Enhancement Digital Twin Application for Disaster Management by UNOOSA/UN-SPIDER
(withdrawn)
Jumpei Takami
09:10–09:20
|
EGU26-14599
|
ECS
|
On-site presentation
Saman Ghaffarian

Disaster risk management (DRM) faces increasing challenges due to urbanisation, environmental degradation, and the growing complexity of interacting hazards. Digital Twins (DTs), defined as digital representations of physical systems connected through continuous data exchange, have gained attention for their potential to support monitoring, simulation, and decision-making. However, their application to disaster contexts remains limited, as many DT implementations depend on uninterrupted automated data streams, predefined control mechanisms, and automated interventions that are often unavailable or impractical during disasters.

In this study, the Digital Risk Twin (DRT) is introduced as a paradigm specifically designed for DRM. The DRT extends DT concepts by integrating automated and manual data collection methods, such as IoT, remote sensing, surveys and field observations, while incorporating human-in-the-loop decision-making for flexible and effective interventions, maintaining real-time virtual simulations, and addressing disaster scenario challenges. To demonstrate its practical relevance, an example of how a DRT can be conceptualised for a multi-hazard response case study is formulated, illustrating how DRT can support effective DRM.

The DRT integrates diverse data sources such as remote sensing, in situ observations, field surveys, and community-based reporting, while supporting both automated analysis and expert-driven interpretation. A defining feature of the framework is the explicit inclusion of human decision-making within the digital representation. Rather than aiming for full automation, the DRT enables iterative interaction between digital models and stakeholders, supporting context-aware decisions under uncertainty. This is particularly important in disaster situations where data gaps, infrastructure damage, and rapidly changing conditions constrain the effectiveness of purely automated systems.

Digital Risk Twins represent a conceptual advancement over original Digital Twins by addressing the socio-technical nature of disaster risk. The proposed framework and multi-hazard conceptualisation provide a foundation for future operational implementations, with the potential to strengthen adaptive capacity and resilience to cascading and compound hazards.

How to cite: Ghaffarian, S.: Digital Risk Twins: The Next Generation of Digital Twins for Complex Disaster Scenarios, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14599, https://doi.org/10.5194/egusphere-egu26-14599, 2026.

09:20–09:30
|
EGU26-9537
|
ECS
|
On-site presentation
Denise Degen, Yulia Gruzdeva, Nicolas Hayek, Marthe Faber, Cristian Siegel, and Mauro Cacace

The development of digital twins for subsurface applications faces several challenges; in this contribution we focus on the issue of providing near real-time predictions for numerical multi-physics applications described by partial differential equations. Even on state-of-the-art high-performance computing infrastructures, conventional multi-physics simulations are not real-time compatible because of their huge computational demand. At the same time, they are subject to uncertainties from, for instance, the geometry, material properties, and boundary conditions.

To address the computational demand, we introduce the use of surrogate models. Surrogate models comprise data-driven and physics-based approaches. While data-driven techniques, such as neural networks, capture complex system responses well, they typically lack interpretability, limiting the reliability of the model outcomes. This, in turn, poses challenges for integration into digital twins, especially in applications where risks need to be assessed. In contrast, physics-based approaches are fully interpretable, but often limited to elliptic and parabolic partial differential equations; hence, they cannot capture the full complexity of the system's dynamics. To overcome the limitations of both data-driven and physics-based techniques, we introduce a hybrid approach, the non-intrusive reduced basis method, within the class of projection-based model order reduction techniques.
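A minimal sketch of the non-intrusive reduced-basis idea, with an invented analytic snapshot family standing in for the multi-physics solver: a POD basis is extracted from snapshots via SVD, and the reduced coefficients are regressed against the parameter, so evaluating a new parameter value never touches the full-order model:

```python
import numpy as np

# Invented snapshot family u(x; k) standing in for full-order simulations
# at several values of a material parameter k.
x = np.linspace(0.0, 1.0, 200)
params = np.linspace(0.5, 2.0, 30)
full_order = lambda k: np.sin(np.pi * x) / k + 0.3 * k * np.sin(2 * np.pi * x)
snapshots = np.column_stack([full_order(k) for k in params])

# Offline stage: POD basis from the SVD of the snapshot matrix, truncated
# to capture 99.99 % of the snapshot energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
basis = U[:, :r]                                    # (n_dof, r)

# Non-intrusive step: regress each reduced coefficient on the parameter,
# instead of projecting the governing equations (hence "non-intrusive").
coeff = basis.T @ snapshots                         # (r, n_params)
fits = [np.polyfit(params, c, deg=4) for c in coeff]

def rb_predict(k):
    """Online evaluation: polynomial coefficients + basis expansion only."""
    return basis @ np.array([np.polyval(f, k) for f in fits])

rel_err = (np.linalg.norm(rb_predict(1.3) - full_order(1.3))
           / np.linalg.norm(full_order(1.3)))
```

The offline/online split is the point: the expensive simulations happen once, and the online stage is a few polynomial evaluations plus a small matrix-vector product.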

In this contribution, we demonstrate for a geothermal case study how this interpretable physics-based AI method can be used to reliably and efficiently accelerate the high-fidelity numerical multi-physics simulations. Furthermore, we illustrate their integration into a Bayesian uncertainty quantification framework, including hierarchical approaches. At last, we discuss possibilities to extend the aforementioned approaches to allow for a continuous integration of observational data.

How to cite: Degen, D., Gruzdeva, Y., Hayek, N., Faber, M., Siegel, C., and Cacace, M.: Perspective of Interpretable Physics-Based AI method for Digital Twins of Geosystems, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-9537, https://doi.org/10.5194/egusphere-egu26-9537, 2026.

09:30–09:40
|
EGU26-10068
|
ECS
|
On-site presentation
Guofeng Song, Denis Voskov, Hemmo A. Abels, Philip J. Vardon, and Sebastian Geiger

Geothermal energy plays a key role in the energy transition by offering a clean baseload alternative to fossil fuels for space heating. Long-term geothermal production is subject to inherent uncertainty due to the heterogeneity of the geological formations that host the geothermal resource and the limited data available to characterize and quantify these heterogeneities. It is insufficient to explore and quantify such uncertainty based on a single concept or interpretational scenario. The TU Delft campus geothermal project has been initiated to provide a dedicated research environment with the vision to scale up the deployment of geothermal energy as well as to provide and store heat for the TU Delft campus. Inspired by the reservoir that hosts the geothermal resource at TU Delft, a channelised fluvial system, we present a framework for an open-source digital twin for geothermal reservoirs that aims to integrate geological scenario modelling, production simulation, uncertainty analysis, and data assimilation to mitigate operational risks, reduce maintenance costs, extend reservoir longevity, and enhance the overall sustainability of geothermal production.

We propose a scenario-based geological modelling approach using Rapid Reservoir Modelling (RRM), in which channelised fluvial layer templates are stacked and constrained by facies information along well trajectories. Multiple geological scenarios with distinct channel distributions are generated. Heterogeneous petrophysical properties are then assigned to the different facies in the reservoir models. Uncertainties in both reservoir architecture and petrophysical properties are captured. The flow and thermal simulations are performed with the open-source Delft Advanced Research Terra Simulator (open-DARTS), and production uncertainty is quantified by evaluating the impact of reservoir architectures and petrophysical heterogeneities. The Ensemble Smoother with Multiple Data Assimilation (ESMDA) is then applied across these scenarios to constrain production and reservoir forecasts using well temperature and pressure observations, tracer tests, and related monitoring data. Scenarios that fail to reproduce the observations after data assimilation are falsified, while data-worth analysis is conducted on the remaining plausible scenarios to provide a reliable evaluation of data acquisition strategies and identify the most cost-effective options for reliable assessment of geothermal production.
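The ESMDA update itself is compact enough to sketch. Below it is applied to an invented two-parameter linear "reservoir response" rather than an open-DARTS model; the essential features are the multiple updates with inflated observation noise alpha and the constraint that the inverse inflation factors sum to one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented linear "reservoir response": three observed well quantities as a
# linear map G of two uncertain parameters (stand-in for a flow simulation).
G = np.array([[1.0, 0.5],
              [0.2, 1.5],
              [1.0, 1.0]])
forward = lambda m: m @ G.T                       # ensemble (N, 2) -> (N, 3)

m_true = np.array([2.0, -1.0])
sigma_d = 0.1
d_obs = G @ m_true + rng.normal(0.0, sigma_d, 3)  # noisy synthetic data

# ESMDA: repeat the ensemble-smoother update N_a times with the observation
# error inflated by alpha_i, subject to sum(1 / alpha_i) = 1.
alphas = [4.0, 4.0, 4.0, 4.0]
R = sigma_d**2 * np.eye(3)
ens = rng.normal(0.0, 2.0, size=(500, 2))         # prior ensemble

for alpha in alphas:
    d = forward(ens)
    dm = ens - ens.mean(axis=0)                   # parameter anomalies
    dd = d - d.mean(axis=0)                       # predicted-data anomalies
    C_md = dm.T @ dd / (len(ens) - 1)
    C_dd = dd.T @ dd / (len(ens) - 1)
    K = C_md @ np.linalg.inv(C_dd + alpha * R)    # Kalman-like gain
    perturbed = d_obs + rng.normal(0.0, np.sqrt(alpha) * sigma_d, size=d.shape)
    ens = ens + (perturbed - d) @ K.T

m_post = ens.mean(axis=0)                         # posterior parameter estimate
```

In the scenario-falsification workflow, an ensemble like this is run per geological scenario, and scenarios whose updated ensembles still cannot reproduce the observations are discarded.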

Our digital-twin framework enables us to explore a broader range of geological uncertainties and constrains production uncertainties, thereby enabling a more reliable assessment of geothermal reservoir performance and production forecasts, both of which are essential for optimizing operational strategies and supporting informed decision-making for geothermal systems.

How to cite: Song, G., Voskov, D., Abels, H. A., Vardon, P. J., and Geiger, S.: Towards a digital twin for modelling geothermal reservoirs in channelised fluvial systems , EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-10068, https://doi.org/10.5194/egusphere-egu26-10068, 2026.

09:40–09:50
|
EGU26-7482
|
On-site presentation
Eleftheria Exarchou, Mirta Rodriguez Pinilla, Veronica Martin Gomez, Marc Benitez Benavides, Martin Senande Rivera, Diego Bueso, Foteini Baladima, Guillem Canaleta, Mariona Borràs, Eleni Toli, and Panagiota Koltsida

Wildfires pose a growing threat to populated areas of the Mediterranean basin. Rural abandonment has increased fuel loads, creating conditions conducive to large wildfires. The hot and dry conditions caused by climate change have exacerbated the risk, extent, and severity of wildfires. The rising number of homes in the wildland-urban interface (WUI) implies increasing impacts of wildfires on lives and property. The need for mitigation and adaptation measures against wildfire risk is thus becoming more urgent. The Barcelona Metropolitan Area, a large metropolis with an extended WUI (home to more than 20,000 inhabitants), is particularly vulnerable. Part of its population and infrastructure is located near the border of the Collserola Natural Park (8,000 hectares with 6 million visitors yearly), a large and heavily frequented forested area, and could potentially be threatened by large forest fires, which would in turn threaten the whole metropolitan area.

This study presents a Digital Twin (DT) framework for the Barcelona Metropolitan Area, designed to assess the risk of extreme wildfires, and how it is impacted by heatwaves and droughts under different future emission scenarios. The DT-WILDFIRE leverages high-resolution climate model projections, satellite data, local observations, and advanced machine learning (ML) techniques to provide a granular understanding of future climate risks and their cascading impacts on wildfires. 

To quantify the fire risk, we calculate the Fire Weather Index (FWI), a widely recognized metric used to assess the potential for wildfire occurrence and spread based on prevailing meteorological conditions. We calculate the FWI over Catalonia at a resolution of 1.5 km for the historical period, using the EMO1 database. Validation against ERA5-Land-derived FWI shows good agreement. This high-resolution FWI will then be used to downscale future FWI projections from climate models, thereby providing greater spatial detail in analyses of future climate change impacts on wildfires in the region.

Further assessment of wildfire risk is provided by the wildfire susceptibility prediction model, based on the machine learning algorithm XGBoost. The model is implemented over Catalonia and trained using diverse variables, including population density, electrical power infrastructure, terrain elevation, Normalized Difference Vegetation Index, land cover classifications, FWI, and historical burned area data. The model generates daily wildfire susceptibility maps at the regional scale. Model evaluation based on the quadratic weighted Kappa metric indicates moderate to good predictive skill over most of the domain, except in high-elevation areas. Further detailed investigation in these regions is ongoing.
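Quadratic weighted Kappa, the evaluation metric mentioned above, rewards near-miss predictions between ordinal susceptibility classes and heavily penalises gross ones. A self-contained implementation (independent of the XGBoost model itself; the labels below are synthetic) looks like:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic weights for ordinal labels 0..n-1."""
    O = np.zeros((n_classes, n_classes))            # observed confusion matrix
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1.0
    i, j = np.meshgrid(np.arange(n_classes), np.arange(n_classes),
                       indexing="ij")
    W = (i - j) ** 2 / (n_classes - 1) ** 2         # quadratic disagreement
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()  # chance agreement
    return 1.0 - (W * O).sum() / (W * E).sum()

# Off-by-one class errors are penalised far less than reversed classes.
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])
near   = np.array([0, 1, 1, 2, 2, 3, 3, 3])         # mostly one class off
gross  = np.array([3, 3, 2, 2, 1, 1, 0, 0])         # classes reversed
```

This ordinal weighting is why the metric suits susceptibility maps better than plain accuracy: predicting "high" where the truth is "very high" is a far smaller error than predicting "low".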

Future climate risk related to wildfire drivers, such as droughts and heatwaves, is also assessed. To achieve the required resolution, we apply deep-learning downscaling methodologies to produce future climate projections at very high resolution (0.8 km).

Finally, the DT aims at quantifying physical damage to residential and commercial real estate, including damage from smoke and business interruption. Ultimately, DT-Wildfire aims at helping authorities and society design participatory risk reduction measures, including nature-based solutions, according to the different climate scenarios.  

How to cite: Exarchou, E., Rodriguez Pinilla, M., Martin Gomez, V., Benitez Benavides, M., Senande Rivera, M., Bueso, D., Baladima, F., Canaleta, G., Borràs, M., Toli, E., and Koltsida, P.: A Digital Twin for Wildfire risk adaptation planning: DT-WILDFIRE, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7482, https://doi.org/10.5194/egusphere-egu26-7482, 2026.

09:50–10:00
|
EGU26-9595
|
On-site presentation
Dominik Laux, Johanna Wahbe, Danica Rovó, Pranay Pratik, Veronika Pörtge, Lukas Liesenhoff, and Julia Gottfriedsen

Wildfires are a major type of disaster and challenge for economic prosperity, public health and safety around the globe. Decision Intelligence, particularly AI based scenario analysis, can make a significant difference [1] in disaster mitigation efforts. Data-driven methods have shown promise in various downstream applications [2]. Still, reference data remains a significant bottleneck across domains such as fire behaviour modeling.

We develop three data-driven decision intelligence tools: a novel machine learning based fire spread model, a fire break placement recommender, and triage decision support.
We make use of data from OroraTech’s global near-real-time fire monitoring network, which provides hotspot data from both public and proprietary satellites, in addition to burned area products.
We have created a novel dataset with thousands of fires from the US, Chile and Europe between 2022 and 2025. We enriched the thermal hotspot-based fire perimeters with a variety of EO (land cover, soil moisture, elevation, previously burned area, vegetation index) and non-EO (wind, temperature, relative humidity, dew point, and precipitation) data.

With this dataset, we train fire spread prediction models based on leading DL architectures. Graph Neural Networks (GNN) are particularly promising, since they have excelled in related domains such as weather forecasting [3], and showed promising spatial generalization properties for fire spread [4]. To mitigate uneven satellite overpass intervals, we treat the time gap between input-target images as an additional learning signal.
A major hurdle in the operational use of fire intelligence tools is a lack of user trust. Therefore, we incorporate explainability metrics in all three of our key contributions.
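A drastically simplified, hand-weighted stand-in can illustrate the two ingredients named above: neighbour-to-neighbour message passing on the pixel grid, and the uneven overpass interval dt entering as an explicit input rather than an assumed constant. The weights here are fixed by hand purely for illustration; in the actual work a graph neural network learns them:

```python
import numpy as np

def spread_step(state, fuel, wind_u, wind_v, dt, w):
    """One message-passing step on a pixel-adjacency grid.

    state:  (H, W) burn fraction in [0, 1]
    dt:     hours between input and target satellite observations, treated
            as an explicit input feature rather than assumed constant
    w:      six aggregation weights (learned in a real GNN, fixed here)
    """
    p = np.pad(state, 1)
    msg = (w[0] * p[:-2, 1:-1] + w[1] * p[2:, 1:-1]        # N, S neighbours
           + w[2] * p[1:-1, :-2] + w[3] * p[1:-1, 2:]      # W, E neighbours
           + w[4] * wind_u * (p[1:-1, :-2] - p[1:-1, 2:])  # zonal advection
           + w[5] * wind_v * (p[:-2, 1:-1] - p[2:, 1:-1])) # meridional advection
    return np.clip(state + dt * fuel * np.maximum(msg, 0.0), 0.0, 1.0)

# Ignition in the centre of a uniform-fuel grid; a longer overpass gap dt
# must yield more growth from the same input frame.
state = np.zeros((21, 21)); state[10, 10] = 1.0
fuel = np.ones_like(state)
w = np.array([0.1, 0.1, 0.1, 0.1, 0.05, 0.05])
short_gap = spread_step(state, fuel, 0.0, 0.0, dt=1.0, w=w)
long_gap = spread_step(state, fuel, 0.0, 0.0, dt=3.0, w=w)
```

Conditioning each training pair on dt is what lets one model learn from image pairs separated by anything from minutes to many hours.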

The use of fire breaks - creating “barriers” of non-burnable materials to prevent fires from spreading - is a significant tactic in wildfire management. Scenario analysis tools are essential to inform the placement of fire breaks. Despite recent progress, significant challenges remain in this domain, such as reliance on basic fire spread simulators and a complex action space for fire break placement [1]. We aim to close this gap by coupling our improved fire spread model with reinforcement learning, a promising approach pioneered in a recent case study [1], to generate fire break recommendations.

In conclusion, we present a novel fire dataset and operational tools for global, real-time fire spread modeling and firebreak placement, supporting wildfire management worldwide.

References

[1] Murray, L., Castillo, T., Carrasco, J., Weintraub, A., Weber, R., de Diego, I. M., ... & García Gonzalo, J. (2024). Advancing Forest Fire Prevention: Deep Reinforcement Learning for Effective Firebreak Placement. arXiv preprint arXiv:2404.08523.

[2] Bot, K., & Borges, J. G. (2022). A systematic review of applications of machine learning techniques for wildfire management decision support. Inventions, 7(1), 15.

[3] Lam, R., Sanchez-Gonzalez, A., Willson, M., Wirnsberger, P., Fortunato, M., Alet, F., ... & Battaglia, P. (2023). Learning skillful medium-range global weather forecasting. Science, 382(6677), 1416-1421.

[4] Rösch, M., Nolde, M., Ullmann, T., & Riedlinger, T. (2024). Data-Driven Wildfire Spread Modeling of European Wildfires Using a Spatiotemporal Graph Neural Network. Fire, 7(6), 207.

How to cite: Laux, D., Wahbe, J., Rovó, D., Pratik, P., Pörtge, V., Liesenhoff, L., and Gottfriedsen, J.: FireAID - Real-time Wildfire Spread Modeling with Machine Learning, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-9595, https://doi.org/10.5194/egusphere-egu26-9595, 2026.

10:00–10:10
|
EGU26-7450
|
ECS
|
On-site presentation
Rakibun Athid, Dr. Mila N. Koeva, and Dr. Pirouz Nourian

Digital twins as complex decision-making systems are increasingly used in climate adaptation and sustainability planning. However, most currently available applications remain largely sector-oriented, limiting their capacity to capture interactions between different domains with multiple interrelated indicator systems. This constraint is particularly evident at the neighborhood scale, where planning interventions are applied and trade-offs between competing objectives become most visible.

This work introduces the prototype of a neighborhood-scale digital twin system designed to support integrated, scenario-based analysis of urban ecology and energy systems. The digital twin, implemented in the post-war residential neighbourhood of Twekkelerveld, Enschede, the Netherlands, addresses major issues such as the ageing building stock, limited green infrastructure, and relatively high energy demand. The framework incorporates open and municipal datasets, including tree inventories, green spaces, urban heat potential, building geometry, energy-use intensity, energy estimation, solar electricity potential, and carbon footprint. Unlike existing tools, the system explicitly represents interactions among ecological and energy interventions at the neighbourhood level.

The digital twin is designed to facilitate interactive "what-if" exploration of typical urban interventions across multiple domains. Ecological scenarios, such as tree planting strategies and green facade deployment, enable users to assess the impacts on greenness, urban heat mitigation, carbon sequestration, and investment costs. Energy scenarios include building insulation improvements, rooftop solar deployment, heat pump transitions, and local energy sharing, measured by the indicators on the level of the neighborhood and buildings. The interrelation module explicitly connects the ecological and energy measures, which allow the comparison of the combined effects on cooling, energy demand, emissions, and overall performance.

Instead of making sustainability planning a one-sector endeavour, the prototype assists in the exploration of options: what changes, what gets better, and what gets worse when various measures are combined. Presenting baseline and scenario outcomes side by side makes trade-offs clearer across ecological, energy, and environmental indicators. The work shows how neighbourhood-scale digital twins can operationalise multi-domain data and scenario logic in a form that is usable by urban planners, municipalities, and local decision-makers. This complements Earth system-scale digital twins by focusing on the local level, where interventions are discussed and implemented.

How to cite: Athid, R., Koeva, Dr. M. N., and Nourian, Dr. P.: Digital Twin For Improvement of The Sustainability of Neighbourhoods Through Scenario Planning, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7450, https://doi.org/10.5194/egusphere-egu26-7450, 2026.

Coffee break
Chairpersons: Bertrand Le Saux, Romain Chassagne, Denise Degen
10:45–10:55
|
EGU26-13008
|
solicited
|
On-site presentation
Andrea Toreti, Arthur Hrast Essenfelder, and Valerio Lucarini

In recent years, advances in computational infrastructure have made it possible to begin implementing and using digital twins in climate science. A growing number of studies and prototypes have already appeared, aiming at modelling single or multiple components of the Earth system. Among them, it is worth mentioning the European Commission's Destination Earth initiative, with the ambition of realizing a digital replica of the Earth. While the development of digital twins seems straightforward and is proceeding at a fast pace, there are still key conceptual issues and challenges to overcome in order to go beyond classic numerical models and digital shadows. Realising a continuous bidirectional data flow between the virtual system and the real one is among them. Together with innovative approaches in data assimilation and the integration of physics-consistent machine learning, there is a need to conceptualize what a continuous data loop means at time scales covering the coming years and decades. Furthermore, the need to address the "human-in-the-loop" requirement remains central to allow for actionable "what-if" scenario testing. In this contribution, we discuss these open issues as well as the minimum requirements such twins should have. We conclude by proposing pathways to fulfil the ambition of having a digital twin of the Earth system.

How to cite: Toreti, A., Hrast Essenfelder, A., and Lucarini, V.: Digital twins in climate science: challenges and opportunities, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-13008, https://doi.org/10.5194/egusphere-egu26-13008, 2026.

10:55–11:05
|
EGU26-21582
|
ECS
|
On-site presentation
Theresa Kiszler, Jenni Kontkanen, Brynjar Sigurdsson, Bruno de Paula Kinoshita, Pierre-Antoine Bretonniere, Devaraju Narayanappa, Mario Acosta, Suraj Polade, Outi Sievi-Korte, Thomas Jung, Daniel Klocke, Francisco Doblas-Reyes, Nikolay Koldunov, Aina Gaya-Àvila, Jost von Hardenberg, Paolo Davini, Barbara Frueh, Stephan Thober, Sebastian Milinski, and Francesc Roura Adserias and the Climate DT team

The Climate Change Adaptation Digital Twin (Climate DT), developed as part of the Destination Earth initiative, produces global multi-decadal kilometer-scale simulations (5–10 km) in a new operational framework. A significant achievement of Climate DT is the capability to automatically process the hourly model output with impact applications that provide insights for users, for instance analyses of flood risk, renewable energy generation, and wildfire risk. Climate DT data can therefore provide direct insights into potential adaptation requirements. Additionally, Climate DT runs with multiple climate models (IFS-FESOM, IFS-NEMO and ICON), which led to the implementation of a standardized data portfolio on HEALPix meshes, further benefiting users in analyzing the data.
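For readers unfamiliar with the HEALPix convention mentioned above: a HEALPix mesh with resolution parameter nside partitions the sphere into 12·nside² equal-area pixels, so the nominal pixel area follows directly. A minimal sketch, independent of the Climate DT code base (the nside value is illustrative):

```python
import math

def healpix_npix(nside: int) -> int:
    """Number of equal-area pixels on a HEALPix sphere at resolution nside."""
    return 12 * nside * nside

def healpix_pixel_area_km2(nside: int, radius_km: float = 6371.0) -> float:
    """Approximate pixel area on a sphere of the given radius."""
    return 4.0 * math.pi * radius_km**2 / healpix_npix(nside)

# An nside in this range corresponds to pixels of a few kilometres across,
# comparable to the 5-10 km simulations described above.
print(healpix_npix(1024))  # → 12582912 pixels, each roughly 40 km² in area
```

The equal-area property is what makes a single standardized portfolio convenient for users: every pixel carries the same weight in spatial statistics, regardless of latitude.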

In this presentation we will introduce the operational Climate DT framework as well as the workflow that enables us to perform the climate simulations with automatic post-processing by multiple applications, including scientific evaluation. We will also introduce the standardized data portfolio and the simulations performed so far as part of Climate DT.

How to cite: Kiszler, T., Kontkanen, J., Sigurdsson, B., de Paula Kinoshita, B., Bretonniere, P.-A., Narayanappa, D., Acosta, M., Polade, S., Sievi-Korte, O., Jung, T., Klocke, D., Doblas-Reyes, F., Koldunov, N., Gaya-Àvila, A., von Hardenberg, J., Davini, P., Frueh, B., Thober, S., Milinski, S., and Roura Adserias, F. and the Climate DT team: From climate simulations directly to actionable insights: The Climate Change Digital Twin, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21582, https://doi.org/10.5194/egusphere-egu26-21582, 2026.

11:05–11:15
|
EGU26-1780
|
On-site presentation
Narayanappa Devaraju, Jenni Kontkanen, Jenni Poutanen, Juha Tonttila, Hendryk Bockelmann, Hauke Schmidt, Nikolay Koldunov, Daniel Klocke, Etienne Tourigny, Maria Giuffrida, Harri Kokkola, Thomas Zwinger, Mario Acosta, Anton Laakso, and Sara Garavelli

High-resolution, kilometer-scale information on regional climate impacts is critical for effective adaptation and mitigation strategies. The European Commission’s Destination Earth (DestinE) Climate Adaptation Digital Twin (Climate DT) aims to address this need; however, actionable impact assessments remain limited by incomplete representation of key Earth system components and their interactions. The Horizon Europe-funded TerraDT project tackles these limitations by developing a state-of-the-art Digital Twin focused on the cryosphere, land surface, aerosols, and their coupled processes, fully interoperable within the DestinE ecosystem.

TerraDT pursues three objectives: (1) build and deploy new Digital Twin Components (DTCs) to strengthen process realism and enable impact assessments; (2) deliver a modular, scalable, interoperable platform integrating advanced software, high-performance computing, and data workflows that can host physical models and Artificial Intelligence (AI)/Machine Learning (ML) emulators; and (3) foster user uptake through early engagement and a user-centric interface (UI).

In its first year, TerraDT achieved several milestones:

  • Cryosphere: A prototype Land-Ice DTC was established by coupling Elmer/Ice with the ICON climate model via the YAC coupler, supported by curated glacier-dynamics datasets. Development of the Sea-Ice DTC (FESIM) began in mid-2025, including YAC-mediated coupling and an AI sea-ice emulator capable of ~100-day to multi-year rollouts, producing smoother fields than physical models.
  • Land Surface: A prototype time-varying land-use dataset was generated for the ECland and ICON land surface models.
  • Aerosols: A simplified Aerosol DTC was tested, with integration into the (open) Integrated Forecasting System (IFS). ML components were prototyped in HAM-LITE to capture advanced aerosol physics (e.g., hygroscopicity) at reduced computational cost.

Impact modelling advanced across multiple domains:

  • Sea-ice: Assessments of ice-season duration and severe-condition probabilities.
  • Forest: Integration of the 3PG and Prebasso models, calibration across European ecosystems, ML emulation of Prebasso, and characterization of old-growth forests.
  • Urban: A carbon-sequestration emulator validated in Helsinki, with planned extensions to Lisbon, Barcelona, Munich, Paris, and Zurich. The key datasets required are being prepared in combination with ML methods and will be applied to build advanced urban impact models for assessing climate extremes.

Infrastructure and interoperability were strengthened through YAC-based coupling (ICON-Energy Balance Firn Model-Elmer/Ice on the LUMI and Levante supercomputers), and Sea-Ice DTC I/O plans were aligned with DestinE workflows. A map-based UI architecture was designed to expose high-resolution impact assessments for decision support.

By advancing new DTCs, AI/ML emulators, and a generic coupling interface, TerraDT is being developed for full integration into the DestinE framework, ensuring compatibility and enhancing the overall ecosystem’s capability to inform climate adaptation and mitigation strategies. This presentation will summarize first-year progress, outline objectives, and present the roadmap toward fully coupled simulations, validation, and dissemination of impact indicators through the TerraDT UI for policy and stakeholder communities.

How to cite: Devaraju, N., Kontkanen, J., Poutanen, J., Tonttila, J., Bockelmann, H., Schmidt, H., Koldunov, N., Klocke, D., Tourigny, E., Giuffrida, M., Kokkola, H., Zwinger, T., Acosta, M., Laakso, A., and Garavelli, S.: Developing a new Digital Twin for Destination Earth: Technical Progress of TerraDT in its First Year, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1780, https://doi.org/10.5194/egusphere-egu26-1780, 2026.

11:15–11:25
|
EGU26-10764
|
ECS
|
On-site presentation
Omid Ghorbanzadeh and Alessandro Crivellari

Digital Twins of the Earth are required to represent future scenario- and trajectory-based hazards that obey physical laws and realistic dynamics in an interpretable and actionable manner, understandable not only by experts but also by non-expert stakeholders and local authorities, to support efficient decision-making, adaptation planning, and emergency management. Machine learning has substantially advanced the generation of landslide susceptibility maps (LSMs). However, LSMs typically provide static, abstract, expert-oriented snapshots that are difficult for non-expert audiences to interpret and poorly aligned with the interactive, immersive visualization needs of Digital Twin and Augmented Reality (AR)/Virtual Reality (VR) environments, thereby limiting their effectiveness for anticipatory risk communication and decision support.

We present a physics-aware generative framework that transforms predictive landslide modeling into photorealistic satellite imagery of future events, enabling intuitive “what-if” hazard exploration within Digital Twin architectures.

Our approach integrates Landslide Physics-Aware Neural Networks (LPANNs) with conditional Generative Adversarial Networks (GANs) to generate synthetic post-event satellite images. The GANs are conditioned on multi-attribute probability maps (physics-informed predictions) obtained by embedding geotechnical, hydrological, geomorphological, and geometric constraints, ensuring physical plausibility. The conditional GAN is trained on pre- and post-event real images with annotated landslide areas, and different supervised and self-supervised deep learning models are used for large-scale landslide detection.

By conditioning the generative part of the approach on physics-informed predictions, the proposed Digital Twin component mitigates the hallucinations typical of generative AI and produces trustworthy hazard visualizations. The resulting synthetic imagery is scenario-consistent and bridges the gap between numerical susceptibility outputs and human-centered decision support, enhancing interpretability for policymakers, emergency managers, and non-expert stakeholders.

How to cite: Ghorbanzadeh, O. and Crivellari, A.: From Physics-Aware AI to Digital Twins: Generating Photorealistic Satellite Imagery of Future Landslides for Predictive Hazard Scenarios, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-10764, https://doi.org/10.5194/egusphere-egu26-10764, 2026.

11:25–11:35
|
EGU26-20401
|
ECS
|
On-site presentation
Nawwar Procheta, Mila N. Koeva, and Rosa R. Aguilar

Jordan’s dryland watersheds face acute water stress alongside increasing land degradation. Intense, short-duration storms cause flash runoff that accelerates soil erosion and sediment delivery to downstream infrastructure, while groundwater, Jordan’s primary strategic water source, remains under long-term pressure. Rainwater-harvesting (RWH) interventions, including Vallerani micro-catchments on damaged hillslopes and Marab/flood-spreading and check-dam systems along ephemeral waterways, are increasingly used in restoration efforts. However, basin-scale planning is often limited by uncertainties in hydrological trade-offs and a gap between model outputs and stakeholder-ready, spatially explicit decision support.

This study develops a basin-scale hydrological Digital Twin (DT) for the Mujib Basin in central Jordan by transforming process-based simulation findings into an interactive, scenario-driven dashboard. The DT combines a hydrological modelling core (SWAT) with harmonized in-situ and Earth Observation (EO) datasets to represent both water and land-surface responses. Physiographic inputs such as topography, soils, and land use are combined with meteorological forcing derived from ERA5 reanalysis and complemented by EO time series, including Sentinel-2 vegetation indices, evapotranspiration products, and soil moisture, to support the ecohydrological context.

Four intervention scenarios are represented - baseline, Vallerani, Marab, and combined - and evaluated using indicators relevant to water security, including surface runoff, sediment yield, and groundwater recharge, alongside vegetation/ET-related metrics. Outputs are produced at the sub-basin level and visualized through a web-based 3D dashboard that allows users to explore and compare the different scenarios. The DT also enables "what-if" scenario testing by combining suitability-driven intervention placement with adjustable weather perturbations, allowing users to explore combined management and climate futures.

Beyond single-variable maps, the DT adds a decision layer for intervention targeting through a composite suitability framework matched to actual restoration goals: (1) Marab/check-dam suitability, which emphasizes high runoff generation, terrain controls, and proximity to channel networks; and (2) infiltration-focused suitability, which highlights zones where slowing and spreading flow can increase recharge. This study shows how digital twins can support hydrological decision-making in data-scarce dryland settings by bridging modelling outputs and implementation-oriented planning, using the Mujib Basin as a case study.
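The composite suitability idea can be illustrated as a weighted overlay of normalized factor layers. This is a generic sketch, not the study's implementation: the factor names, sub-basin values, and weights below are hypothetical.

```python
def normalize(values):
    """Min-max rescale a factor layer to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_suitability(factors, weights):
    """Weighted overlay of normalized factor layers (weights sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9
    norm = [normalize(f) for f in factors]
    return [sum(w * layer[i] for w, layer in zip(weights, norm))
            for i in range(len(factors[0]))]

# Hypothetical per-sub-basin factors: runoff generation, slope, channel proximity
runoff  = [10.0, 40.0, 25.0]
slope   = [5.0, 2.0, 8.0]
channel = [0.2, 0.9, 0.5]
scores = composite_suitability([runoff, slope, channel], [0.5, 0.2, 0.3])
```

With these illustrative weights, the second sub-basin (highest runoff and channel proximity) ranks highest for a Marab/check-dam intervention.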

How to cite: Procheta, N., Koeva, M. N., and Aguilar, R. R.: Developing a Digital Twin Framework for Watershed Restoration Scenario Analysis: A Case Study in Mujib Basin, Jordan, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-20401, https://doi.org/10.5194/egusphere-egu26-20401, 2026.

11:35–11:45
|
EGU26-1492
|
On-site presentation
John M. Aiken, Dunyu Liu, William Gilpin, and Thorsten Becker

Earth science data are typically highly heterogeneous, which leads to mixed-determined inverse problems and poses challenges for extracting process-level information. For example, ocean sediment cores from the International Ocean Discovery Program (IODP) contain hundreds of millions of measurements across multiple geophysical properties, but usable datasets are only 5–10% complete due to missing data. We present a semi-supervised variational autoencoder with masked encoding that simultaneously imputes missing measurements and predicts lithology, enabling more complete utilization of legacy IODP archives. We train a masked variational autoencoder on the LILY database (89 km of core, 34 million observations, 42 IODP missions) to learn joint distributions across bulk density, magnetic susceptibility, RGB reflectance, and natural gamma ray attenuation. The model uses selective masking during training to learn imputation strategies for missing modalities. Crucially, the learned latent representations are constrained to recover lithological labels from unseen cores without retraining. We demonstrate that the model captures the nonlinearities contained in the training data, reconstructs the test data (R²_avg = 0.86), and predicts lithology (AUC_avg = 0.9), while also providing descriptive embedding vectors (ARI = 0.2). Additionally, the underlying data contain strong non-linear relationships that are not captured in reconstruction by simpler models (e.g., a typical LASSO-based regression, R² = 0.24). Our work represents a step towards scalable cross-modal assimilation and representation of existing Earth datasets.
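The selective-masking strategy at the heart of this approach can be sketched as follows: hide a random subset of the *observed* entries in each training sample so the network must learn to impute them. This is an illustrative reimplementation, not the authors' code, and the record values and mask fraction are hypothetical.

```python
import random

def selective_mask(sample, mask_frac=0.3, rng=None):
    """Hide a fraction of the observed entries (None = already missing).

    Returns the masked input (hidden entries set to None) and the list of
    hidden indices, which serve as self-supervised reconstruction targets.
    """
    rng = rng or random.Random(0)
    observed = [i for i, v in enumerate(sample) if v is not None]
    k = max(1, int(mask_frac * len(observed)))
    hidden = sorted(rng.sample(observed, k))
    masked = [None if i in hidden else v for i, v in enumerate(sample)]
    return masked, hidden

# Toy record with slots for bulk density, magnetic susceptibility, R, G, B,
# and natural gamma; two modalities are already missing, as in real cores.
record = [1.8, 35.0, None, 120.0, 95.0, None]
masked, hidden = selective_mask(record)
```

Training on (masked, record) pairs then teaches the encoder to reconstruct the hidden values from the surviving modalities, which is exactly the capability needed at inference time on genuinely incomplete cores.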

How to cite: Aiken, J. M., Liu, D., Gilpin, W., and Becker, T.: A multi-modal semi-supervised model for ocean sediment lithology, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1492, https://doi.org/10.5194/egusphere-egu26-1492, 2026.

11:45–11:55
|
EGU26-19643
|
On-site presentation
Vasily Demyanov and Oleksandr Letychevskyi

One of the key challenges addressed by digital twins (DT) is the long-term modelling and monitoring of subsurface system behaviour. Existing DT technologies primarily rely on physics-based models capable of simulating dynamic processes. Long-term forecasting often suffers from uncertainty in data, modelling equations and their parameters, and initial conditions, as well as from accumulating errors.

DTs for natural systems remain a largely unexplored opportunity at an early stage. The challenges of DT design for natural systems are largely related to their complex and uncertain multi-physics nature.

We propose an algebraic approach to DT design, in which system parameters/attributes are represented as constraints rather than as specific values. This approach enables the generation of subsurface scenarios and the analysis of possible occurrences of critical system states/events.

We model the system as a collection of interacting entities (agents), whose states are defined by sets of attributes. For instance, a geological layer is considered as an agent characterised by its geometry, represented by a 3D mesh (X0), elasticity (E), porosity (φ), thermal conductivity (T), and other relevant attributes. The initial state S0 of the agent can be presented as a set of constraints:

S0:  E1 ≤ E ≤ E2 ∧ F1 ≤ φ ≤ F2 ∧ T1 ≤ T ≤ T2 ∧ X0 .

The geometry X0 can also be represented as a set of constraints that take into account structural/mesh uncertainty. Thus, constraints can be specified for the set of all agents/layers interacting with each other.

We define the semantics of the agent's actions using formalized transitions that change the constraints on the attributes/agent's state. An example of such a transition is a change in the layer state according to a function constructed from a combination of the equilibrium equations F, the constitutive equation Q, which relates the stress σ and the strain ε, and the kinematic equation of the strain D:

S1 = G(S0, F(X0, σ), Q(φ, E, T), D(X0, ε)) .

The next state S1 is determined by the change of the agent state under this transition and again represents a conjunction of constraints. The resulting new state is checked for compatibility with the critical state Z(σ, σmax), following the threshold constraint (e.g. fracture):

σ ≤ σmax .

If the conjunction S1 ∧ Z(σ, σmax) is satisfiable, then there exist layer attribute values for which it is true. Such attributes are represented by the corresponding constraints generated by the solver. Having such constraints, we can obtain, by backward modelling, scenarios that lead to the initial state.
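The satisfiability check at the core of this step can be sketched with interval arithmetic standing in for a full constraint solver. The stress range and threshold values below are hypothetical illustrations, not from the abstract.

```python
def intersect(a, b):
    """Intersection of two closed intervals, or None if empty."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def critical_compatible(stress_interval, sigma_max):
    """Is S1 ∧ Z(σ, σmax) satisfiable, with Z the threshold constraint σ ≤ σmax?

    If the σ-interval of state S1 intersects (-inf, σmax], attribute values
    exist for which the conjunction holds, and a solver can emit the
    corresponding constraints for backward scenario generation.
    """
    return intersect(stress_interval, (float("-inf"), sigma_max)) is not None

S1_stress = (40.0, 75.0)                     # σ-constraint of state S1 (MPa)
print(critical_compatible(S1_stress, 60.0))  # → True: compatible attributes exist
print(critical_compatible(S1_stress, 30.0))  # → False: constraint set is empty
```

A production system would replace the interval check with an SMT-style solver over the full conjunction of attribute constraints, but the satisfiable/unsatisfiable decision plays the same role.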

Formalized transitions can also be built by considering other parallel processes that affect the change of the agent's state, in particular thermal, chemical, and fluid-flow processes.

This approach increases the capability for long-term forecasting because it operates with constraints/conditions on subsurface states/events rather than with parameter-specific simulations.

A DT can combine algebraic modelling with neural networks that classify predictions of a certain event. Algebraic modelling of the agent's behaviour from the classified state will confirm the correctness of the classification and build the corresponding explanatory scenario.

How to cite: Demyanov, V. and Letychevskyi, O.: Digital twins for subsurface systems based on algebraic models, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-19643, https://doi.org/10.5194/egusphere-egu26-19643, 2026.

11:55–12:05
|
EGU26-11702
|
ECS
|
On-site presentation
Jose Dario Rodriguez, Kris Piessens, and Kris Welkenhuysen

Deep aquifers offer significant potential for diverse energy and storage applications, but these opportunities will also require synergistic multi-user subsurface management. To maximize these resources, operators require flexible modeling tools capable of rapidly evaluating how independent but concurrent projects might interact hydraulically over time. Traditional grid-based numerical models are robust but can be computationally demanding when rapid scenario testing is required across large, heterogeneous regions. We propose a modular Physics-Informed Neural Network (PINN) framework designed to provide a flexible, faster alternative for evaluating regional pressure interference between co-located subsurface activities.

Our proposed architecture treats the aquifer as a continuous volumetric field. We define injection and extraction points as dynamic operational conditions (e.g., transient rate or pressure constraints) that can be positioned anywhere in the domain. The neural network is trained to satisfy the 3D transient diffusivity equation, learning to map the relationship between these sources and the resulting pressure field without relying on fixed meshes. We address this by introducing a "modular" architecture: by training separate sub-networks for each activity type, we aim to mathematically isolate, or "de-mix", the pressure contribution of specific projects from the total regional signal.
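The physics constraint can be sketched for a 1D analogue of the transient diffusivity equation, p_t = D·p_xx. A PINN penalizes the mean-square of this residual at collocation points; the sketch below (illustrative values, finite differences standing in for the automatic differentiation a PINN would use) verifies that an exact solution drives the residual to zero.

```python
import math

def residual(p, x, t, D, h=1e-4):
    """Transient diffusivity residual  p_t - D * p_xx  via central differences.

    A PINN drives the mean-square of this quantity toward zero at collocation
    points; finite differences here replace automatic differentiation.
    """
    p_t = (p(x, t + h) - p(x, t - h)) / (2 * h)
    p_xx = (p(x + h, t) - 2 * p(x, t) + p(x - h, t)) / (h * h)
    return p_t - D * p_xx

D = 0.7  # hydraulic diffusivity (illustrative units)

def exact(x, t):
    """Analytic solution of p_t = D * p_xx for this D."""
    return math.exp(-D * t) * math.sin(x)

r = residual(exact, 0.9, 0.4, D)
print(abs(r) < 1e-5)  # → True: the exact solution has (near-)zero residual
```

In the modular architecture described above, each activity-specific sub-network would contribute its own pressure field, with the shared PDE residual enforced on their superposition.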

This research focuses on a case study in the Campine Basin (Belgium). We are developing the framework to infer effective aquifer properties from sparse historical monitoring data and to simulate interference patterns specifically between gas storage and geothermal operations. The expected outcome is a spatial scenario analysis tool that allows future users to dynamically test new project locations and optimize setback distances within a Subsurface Digital Twin environment. By decoupling the geological parameterization from specific well locations, we aim to provide a scalable engine that supports adaptive planning and de-risks decision-making in multi-activity aquifers.

How to cite: Rodriguez, J. D., Piessens, K., and Welkenhuysen, K.: A Modular Physics-Informed Neural Network Framework for Quantifying Pressure Interference Between Concurrent Deep Subsurface Activities, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-11702, https://doi.org/10.5194/egusphere-egu26-11702, 2026.

12:05–12:15
|
EGU26-15220
|
ECS
|
On-site presentation
Raja Ram Aryal, Timothy Devereux, Josh Rivory, Glen Eaton, Stuart Phinn, and William Woodgate

Accurately representing three-dimensional (3D) canopy structure is essential for Earth System Models (ESMs) and radiative transfer schemes that link vegetation to climate–carbon feedback. Leaf area density (LAD) and related structural metrics are widely retrieved from remote sensing using Beer–Lambert (BL) transmittance inversions, yet these approaches commonly assume randomly distributed foliage and woody material. In real canopies, plant material is spatially aggregated (clumped), violating random mixing and introducing systematic LAD bias. Although clumping has been corrected using canopy or crown scale clumping indices (CI), voxel-based LAD retrievals from terrestrial laser scanning (TLS) and other 3D sensing approaches require clumping information that is defined at the same spatial scale as the inversion. The lack of a physically grounded voxel-resolved CI remains a key methodological gap, particularly for dense and heterogeneous canopy regions.

Here, we develop a voxel-scale effective reference clumping index (CI_ref) retrieval method that is structurally consistent with voxel-based BL retrievals. We used digital twin 3D tree meshes from the RAMI-V benchmark forest scenes, spanning six contrasting crown forms and six leaf inclination angle distribution (LIAD) variants (36 canopy geometries). Each tree was partitioned into regular voxel grids at four sizes (0.2, 0.5, 1.0, and 2.0 m). Within each voxel, we performed multi-directional (18 viewing-angle bins) ray tracing on every voxel-clipped mesh to directly quantify the within-voxel gap probability, leaf projection function G(θ), and path-length statistics required for transmittance-based LAD inference. Directional CI estimates were derived for each viewing angle and then aggregated through a hierarchical pooling strategy that reduces sampling noise and directional variability (all angles → azimuth-pooled → zenith-pooled). This procedure yields a single, robust CI_ref per voxel that is independent of viewing angle and suitable as a reference label for developing operational LAD retrieval algorithms from LiDAR data.

We then quantified the practical impact of voxel-scale clumping correction on BL LAD retrieval using simulated TLS point clouds. LAD was estimated per voxel under two assumptions: (i) the conventional random-foliage case (CI = 1) and (ii) clumping-corrected inversion using CI_ref. Across all crown forms, LIAD variants, and voxel sizes, the CI = 1 assumption produced predominantly negative LAD errors relative to mesh-derived reference LAD, consistent with systematic underestimation when clumping is ignored. Incorporating CI_ref shifted LAD errors toward zero and improved agreement, evidenced by reduced bias and normalized RMSE. Improvements were most pronounced for planophile canopies, where directional foliage aggregation is strongest, and for coarser voxel sizes (1.0–2.0 m), where greater within-voxel heterogeneity amplifies departures from random mixing, demonstrating that clumping-induced bias is strongly scale dependent.
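The two inversion assumptions compared above can be sketched with the standard Beer–Lambert transmittance relation, in which the clumping index simply rescales the retrieved density. The voxel values (gap probability, G, path length, CI_ref) are hypothetical, and a vertical single-path geometry is assumed for brevity.

```python
import math

def lad_from_gap(p_gap, G, path_len, ci=1.0):
    """Beer-Lambert inversion of voxel gap probability to leaf area density.

    p_gap: within-voxel gap probability; G: leaf projection function G(θ);
    path_len: mean ray path length through the voxel (m); ci: clumping index
    (ci = 1 assumes randomly distributed foliage, ci < 1 means clumped).
    """
    return -math.log(p_gap) / (G * path_len * ci)

# Hypothetical voxel with strongly clumped foliage (CI_ref = 0.7)
p_gap, G, L = 0.55, 0.5, 1.0
lad_random    = lad_from_gap(p_gap, G, L)          # conventional CI = 1
lad_corrected = lad_from_gap(p_gap, G, L, ci=0.7)  # clumping-corrected
print(lad_random < lad_corrected)  # → True: ignoring clumping biases LAD low
```

Because clumping raises the gap probability for a given true LAD, the CI = 1 inversion systematically returns too little leaf area, matching the predominantly negative errors reported above.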

These results provide practical recommendations for 3D canopy modelling: specifically, that voxel-scale clumping correction becomes increasingly essential as voxel size increases, especially when within-voxel heterogeneity grows. The proposed CI_ref framework strengthens scale consistency between local canopy structure and voxel-based radiative transfer, enabling unbiased LAD retrievals and providing physically grounded labels for future deep learning model-based CI prediction from TLS point clouds.

How to cite: Aryal, R. R., Devereux, T., Rivory, J., Eaton, G., Phinn, S., and Woodgate, W.: Digital twin–based voxel-scale clumping index (CI) improves leaf area density (LAD) retrieval from simulated terrestrial laser scanning (TLS), EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-15220, https://doi.org/10.5194/egusphere-egu26-15220, 2026.

12:15–12:25
|
EGU26-10926
|
On-site presentation
Mauro Cacace, Marzieh Baes, Jan von Harten, Alexander Lüpges, Denise Degen, Jan Niederau, Tobias Rolf, Magdalena Scheck-Wenderoth, Florian Wellmann, Bernhard Rumpe, Nora Koltzer, and Simon Virgo

WBGeo (WorkBench for Digital Geosystems) aims at automating the workflow from geological data integration to structural modeling, mesh generation, numerical simulation, and visualization. The framework is designed as a collaborative project, enabling the systematic and reproducible development of geoscientific models while reducing manual intervention across the entire modeling pipeline.

One of the core components of WBGeo is the generation of computational meshes tailored to complex geoscientific workflows. The framework supports three mesh representations: implicit structured meshes, explicit structured meshes, and explicit unstructured meshes. This flexible design allows users to select an appropriate meshing strategy based on model complexity, data availability, and computational requirements.

Implicit structured meshes are generated from volumetric structural models in which lithological information is defined on a regular grid. The meshing procedure operates directly on the implicit representation of the structural geological model and produces a structured hexahedral mesh suitable for numerical simulations based on finite element or finite volume/difference methods.
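The implicit structured meshing step can be sketched as generating hexahedral connectivity directly over the regular lithology grid. The node-numbering convention below is an illustrative choice, not WBGeo's.

```python
def hex_mesh(nx, ny, nz):
    """Structured hexahedral connectivity for an (nx, ny, nz)-cell regular grid.

    Nodes are numbered i + j*(nx+1) + k*(nx+1)*(ny+1); each cell lists its
    eight corner nodes in a VTK-style ordering (bottom face, then top face).
    """
    def nid(i, j, k):
        return i + j * (nx + 1) + k * (nx + 1) * (ny + 1)

    cells = []
    for k in range(nz):
        for j in range(ny):
            for i in range(nx):
                cells.append([
                    nid(i, j, k), nid(i + 1, j, k),
                    nid(i + 1, j + 1, k), nid(i, j + 1, k),
                    nid(i, j, k + 1), nid(i + 1, j, k + 1),
                    nid(i + 1, j + 1, k + 1), nid(i, j + 1, k + 1),
                ])
    return cells

cells = hex_mesh(2, 2, 1)
print(len(cells))  # → 4 hexahedra, one per lithology voxel
```

Each cell would then inherit the lithological unit of its voxel from the implicit structural model, giving element-wise material properties for the finite element or finite volume/difference solver.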

For explicit structured meshes, vertices are first extracted directly from the geological surfaces provided by the structural model. Each geological layer is discretized using a uniform, user-defined number of interpolated points to ensure consistent lateral resolution across all layers. Subsequently, vertical refinement between adjacent layers is performed using a user-defined number of subdivisions, allowing controlled resolution along the depth direction. To preserve mesh quality and avoid numerical instabilities, the minimum vertical distance between corresponding points in adjacent layers is evaluated against a user-defined threshold. If this distance falls below the specified limit, one of the points is adjusted vertically by a predefined amount to enforce the minimum separation. Following this correction step, hexahedral elements are constructed, resulting in a structured mesh suitable for efficient numerical simulations.
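The minimum-separation correction described above can be sketched as follows; the parameter names, threshold, and shift amount are hypothetical, and the convention here is that larger z is shallower.

```python
def enforce_min_separation(z_top, z_bottom, min_dz=0.5, shift=0.5):
    """Ensure corresponding points on adjacent layer surfaces keep a minimum
    vertical distance.

    z_top / z_bottom: depths of corresponding points on the upper and lower
    surfaces. Where the separation falls below min_dz, the lower point is
    pushed down by `shift` until the threshold is met, mirroring the
    correction step that precedes hexahedral element construction.
    """
    adjusted = list(z_bottom)
    for n, zt in enumerate(z_top):
        while zt - adjusted[n] < min_dz:
            adjusted[n] -= shift
    return adjusted

z_upper = [0.0, 0.0, 0.0]
z_lower = [-1.0, -0.25, 0.0]        # middle and right points pinch out
z_fixed = enforce_min_separation(z_upper, z_lower)
print(z_fixed)  # → [-1.0, -0.75, -0.5]
```

Pinch-outs, where two surfaces touch, are the typical trigger for this correction: without it, the resulting hexahedra would be degenerate and destabilize the simulation.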

For explicit unstructured meshes, vertices are obtained directly from the structural model geometry. The surfaces are then interpolated and discretized, and the resulting geometry is passed to the Gmsh Python API for mesh generation. After determining intersections between surfaces and performing geometric fragmentation, tetrahedral elements are generated. A key feature of unstructured meshes in the workflow is the inclusion of fault planes and engineering objects such as wells, mining shafts, point sources, or additional internal planes, which are difficult to represent within a structured mesh framework.

By supporting both structured and unstructured meshing strategies within a unified workflow, WBGeo enables users to balance computational efficiency and geometric complexity while maintaining reproducibility and consistency across geosystem modeling applications. The generated meshes can be exported to formats such as Exodus, Abaqus, and FEFLOW for use in various commercial and open-source simulation packages.

How to cite: Cacace, M., Baes, M., von Harten, J., Lüpges, A., Degen, D., Niederau, J., Rolf, T., Scheck-Wenderoth, M., Wellmann, F., Rumpe, B., Koltzer, N., and Virgo, S.: WBGeo: An Automated Framework for Geosystem Modeling with Advanced Mesh Generation Capabilities, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-10926, https://doi.org/10.5194/egusphere-egu26-10926, 2026.

Posters on site: Wed, 6 May, 08:30–10:15 | Hall X4

The posters scheduled for on-site presentation are only visible in the poster hall in Vienna. If authors uploaded their presentation files, these files are linked from the abstracts below.
Display time: Wed, 6 May, 08:30–12:30
Chairpersons: Lorenzo Nava, Florian Wellmann, Denise Degen
X4.105
|
EGU26-16891
Théophile Lohier, Antoine Armandine les Landes, Jeremy Rohmer, and Romain Chassagne

Subsurface Digital Twins rely critically on data assimilation frameworks to continuously integrate multi-source, multi-type observations. While numerous methods have been developed to improve quantitative subsurface predictions, there is currently no clear consensus or standardised guidance on their appropriate computational deployment within digital twin workflows. Instead, research communities often adopt specific algorithms primarily because they are prevalent within their discipline, rather than because they are demonstrably optimal for the problem at hand. This lack of consensus reflects our limited understanding of how to rigorously characterise the mathematical structure of subsurface assimilation problems involving coupled multi-physics processes, multiple spatial and temporal scales, and heterogeneous data streams. As a result, current efforts frequently focus on empirical experimentation with algorithms rather than on the design of problem-adapted methodologies. This challenge extends to the formulation of the inverse problem itself, including parameterisation, parameter ranges, objective functions, and performance metrics, as well as to the selection of optimisation or inference strategies in multi-source data environments. Furthermore, comprehensive uncertainty quantification through global multi-factor sensitivity analysis is often infeasible due to the prohibitive computational cost of large-scale problems. To address these challenges, we propose Orison, a modular data assimilation environment designed to support systematic benchmarking and comparative analysis of classical model-update algorithms for subsurface digital twin workflows. Orison enables controlled experimentation across a range of thematic problems, facilitating insight into algorithm performance and robustness.
We demonstrate the capabilities of Orison through representative case studies in geothermal systems and groundwater management, illustrating how such a benchmarking framework can support more transparent methodological choices and contribute to the development of reliable, pragmatic subsurface digital twins.
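As an illustration of the kind of classical model-update algorithm such a benchmarking environment would host, consider the scalar Kalman update that underlies many ensemble assimilation schemes. This is a generic textbook formula with hypothetical values, not Orison's API.

```python
def kalman_update(m_prior, var_prior, obs, var_obs):
    """Scalar Kalman/Bayes update: a basic building block of classical
    model-update (data assimilation) algorithms.

    Returns the posterior mean and variance after assimilating one
    observation with variance var_obs.
    """
    gain = var_prior / (var_prior + var_obs)
    m_post = m_prior + gain * (obs - m_prior)
    var_post = (1.0 - gain) * var_prior
    return m_post, var_post

# Hypothetical: a prior parameter estimate updated with one observation
m, v = kalman_update(m_prior=10.0, var_prior=4.0, obs=12.0, var_obs=1.0)
# m ≈ 11.6 (pulled toward the observation), v ≈ 0.8 (below the prior's 4.0)
```

Benchmarking frameworks of this kind compare such updates (ensemble Kalman variants, particle methods, optimisation-based inversion) on the same problem, making the trade-offs between them explicit rather than discipline-inherited.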

How to cite: Lohier, T., Armandine les Landes, A., Rohmer, J., and Chassagne, R.: Orison: A modular data assimilation environment for subsurface digital twins, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-16891, https://doi.org/10.5194/egusphere-egu26-16891, 2026.

X4.106
|
EGU26-1624
|
ECS
Alexandra Duckstein, Solveig Pospiech, Vinzenz Brendler, Frank Bok, Raimon Tolosana-Delgado, Elmar Plischke, and Mostafa Abdelhafiz

Deep geological repositories rely on robust, transparent, and scientifically based safety concepts to ensure the long-term safety of radioactive waste. As safety cases become increasingly data-rich and computationally integrated, Digital Twins are emerging as a powerful tool to represent, test, and communicate the behavior of complex geosystems over geological timescales. A core requirement for such Digital Twins is the explicit quantification of parameter uncertainties and sensitivities, ensuring that the model is both reliable and efficient in reproducing key safety functions.

In this contribution, we introduce a workflow designed to assess uncertainties and sensitivities associated with radionuclide retention in geological host formations. Our approach combines geostatistical and geochemical simulation with global sensitivity analysis. Mineralogical heterogeneity is represented using geostatistical realizations generated through custom Python implementations of Markov-chain methods and truncated Gaussian random field simulations, producing spatially realistic mineral distributions. These mineralogical scenarios are then propagated through a geochemical modelling step using The Geochemist's Workbench, in which the distribution coefficient (Kd) is computed for each realization to quantify the effect of mineralogical and geochemical variability on uranium retention.
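The truncated Gaussian step can be illustrated by thresholding standard-normal values into categorical mineral classes; a real simulation thresholds a spatially *correlated* Gaussian field, which is omitted here for brevity. The thresholds and mineral names are hypothetical.

```python
import random

def truncated_gaussian_realization(n, thresholds, labels, seed=42):
    """Threshold standard-normal values into categorical mineral classes.

    thresholds must be increasing; values below thresholds[i] that are not
    already classified receive labels[i], and the remainder labels[-1].
    """
    rng = random.Random(seed)
    field = [rng.gauss(0.0, 1.0) for _ in range(n)]

    def classify(z):
        for t, lab in zip(thresholds, labels):
            if z < t:
                return lab
        return labels[-1]

    return [classify(z) for z in field]

# Hypothetical granitic mineralogy; thresholds chosen to set target proportions
cells = truncated_gaussian_realization(
    1000, thresholds=[-0.5, 0.8], labels=["quartz", "feldspar", "mica"])
```

Adjusting the thresholds via the normal CDF controls the expected class proportions, which is how target mineral fractions are honoured in practice.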

To identify the key indicators of variability, the workflow incorporates variance-based sensitivity analysis (SA) based on a custom Python toolbox. The SA reveals both first- and second-order effects, highlighting the influence of individual parameters on the resulting Kd values as well as pairwise parameter interactions. In almost all cases, the identified sensitivities and interactions can be explained by underlying chemical and physical processes. Additionally, this approach enables targeted dimensionality reduction, a critical step for constructing Digital Twins that maintain scientific robustness while remaining computationally tractable.
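
The variance-based SA step described above can be sketched with a minimal Saltelli-type estimator of first-order Sobol indices. This is a generic NumPy illustration, not the authors' custom toolbox, and the two-parameter toy function standing in for the Kd computation is entirely hypothetical:

```python
import numpy as np

def sobol_first_order(f, d, n=2**14, seed=0):
    """Saltelli-type estimator of first-order Sobol indices.

    f: vectorized model mapping an (n, d) array to (n,) outputs
    d: number of inputs (assumed i.i.d. uniform on [0, 1])
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]            # resample only parameter i
        # first-order index: fraction of output variance due to input i
        S[i] = np.mean(fB * (f(AB) - fA)) / var
    return S

def toy_kd(X):
    """Hypothetical stand-in for the Kd computation: linear in 2 inputs."""
    return X[:, 0] + 2.0 * X[:, 1]

S = sobol_first_order(toy_kd, d=2)    # analytic values: 0.2 and 0.8
```

Second-order (pairwise interaction) indices, which the workflow also reports, require additional sample matrices but follow the same pick-freeze pattern.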

The workflow is presented for crystalline host rocks, where we focus on uranium retention within granitic systems governed by solid–liquid interactions: sorption, aqueous speciation, precipitation, and dissolution. A key advantage of our workflow is its modular structure. Each component (geostatistical simulation, geochemical modelling, and sensitivity analysis) can be independently adapted, extended, or replaced. This makes the framework readily transferable to other host rocks such as salt or clay, which exhibit fundamentally different retention mechanisms, as well as to other radionuclides with distinct sorption, solubility, or redox characteristics.

Our results highlight (i) the magnitude of uncertainty introduced by mineralogical heterogeneity, (ii) the non-linear sensitivity of uranium retention to coupled mineral–solution systems, and (iii) the potential to substantially reduce model complexity by focusing on a small subset of high-impact parameters. Overall, the workflow provides a structured and scalable method for quantifying uncertainties and identifying the parameters most relevant to long-term safety. In this way, it provides the essential, uncertainty-aware input data required for the generation of reliable and computationally efficient Digital Twins in geological disposal scenarios.

How to cite: Duckstein, A., Pospiech, S., Brendler, V., Bok, F., Tolosana-Delgado, R., Plischke, E., and Abdelhafiz, M.: Towards Digital Twins: Uncertainty and Sensitivity Analysis for Safety-Case Modelling, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1624, https://doi.org/10.5194/egusphere-egu26-1624, 2026.

X4.107
|
EGU26-3381
|
ECS
Smruthi Ravichandran, Solveig Pospiech, Vinzenz Brendler, and Guido Juckeland

The long-term safety of Deep Geological Repositories (DGRs) requires rigorous assessments capable of predicting radionuclide transport over million-year timescales. While the Digital Twin (DT) concept offers a robust framework for such assessments, the traditional requirement for bidirectional, real-time communication is currently unfeasible due to the absence of active physical repositories. We propose a modular DT prototype application framework designed to evolve from a high-fidelity simulation environment into a fully synchronized system as field data emerge.

At its core, this framework utilizes standardized data schemas to harmonize heterogeneous, site-specific field data from crystalline host rock, including mineral composition, pore water chemistry, and surface properties. These standardized datasets are integrated via a specialized API into a modular orchestration pipeline that connects 1D and 2D fracture simulations with reactive transport codes such as PHAST, OpenGeoSys, and PFLOTRAN. By containerizing these secondary physics models into Docker environments, the framework ensures high computational flexibility and reproducibility. This approach allows for the seamless integration of Machine Learning models and complex physics-based workflows while maintaining isolated execution environments.
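
The containerized-module idea can be illustrated by assembling a `docker run` invocation for a single pipeline step. The image name `dgr/pflotran:latest`, the mount layout, and the command line below are hypothetical placeholders, not details of the actual framework:

```python
import shlex

def containerized_step(image, workdir, cmd, env=None):
    """Assemble (but do not execute) a `docker run` command for one
    pipeline module, mounting a shared data directory so modules can
    exchange standardized files while staying isolated."""
    parts = ["docker", "run", "--rm",
             "-v", f"{workdir}:/data",   # shared data-exchange mount
             "-w", "/data"]
    for key, val in (env or {}).items():
        parts += ["-e", f"{key}={val}"]  # pass configuration via env vars
    parts.append(image)
    parts += shlex.split(cmd)
    return parts

# hypothetical module invocation; image and arguments are placeholders
cmd = containerized_step("dgr/pflotran:latest", "/tmp/run01",
                         "pflotran -input_prefix fracture_1d")
```

Keeping the host-side orchestrator responsible only for mounts and environment variables is one way to let each physics code or ML model run in its own isolated, reproducible environment.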

Acknowledging the post-closure reality of a DGR, where sensors may fail or lose power supply, this framework prioritizes the characterization of source-term evolution (radionuclide fluxes) through a "build-fill-close-abandon" logic. The current focus is on establishing resilient data formats and interface protocols to create a future-proof foundation for geological safety. We demonstrate how containerization and robust interface design can transform divergent research projects into a unified, reproducible DT framework, applicable to any domain where long-term predictive modeling is required despite limited real-time data.

How to cite: Ravichandran, S., Pospiech, S., Brendler, V., and Juckeland, G.: An Initial Digital Twin Architecture for Long-Term Radionuclide Transport Modeling in Deep Geological Repositories, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3381, https://doi.org/10.5194/egusphere-egu26-3381, 2026.

X4.108
|
EGU26-3775
Steffi Urhausen, Deborah Hemming, Deanne Brettle, Emma Ferranti, and Sarah Greenham

The EU CARMINE project (https://carmine-project.eu/) aims to support urban and surrounding metropolitan communities to become more climate resilient. The project focuses on heat, wildfires, flooding, pollution and drought across eight case study areas in Europe. Birmingham, located within the West Midlands Combined Authority (WMCA), serves as the UK case study area. High priority climate hazards for Birmingham are extreme heat, as well as pluvial flooding caused by extreme precipitation events. Increasing urban tree cover to alleviate these hazards could be a promising nature-based solution. However, many newly planted trees wilt or die due to drought stress.

To assist the Council and community volunteers in maintaining the young trees during drought events, we are developing a digital twin framework to identify when and where young trees across Birmingham need watering. Common indicators include daily plant available water and the level of drought/wetness for the last few weeks. These indicators are based on soil moisture content, usually at different depths. Unfortunately, such measurements are sparse or absent in urban areas. We use the Joint UK Land-Environment Simulator (JULES) model, forced by the UK weather forecasting model UKV at a spatial resolution of 1.5 km, to estimate soil moisture content. Using machine learning techniques, we emulate JULES outputs to provide soil moisture estimates in a faster, more efficient and more flexible way. Platforms developed through the CARMINE project allow us to communicate the need for watering to interested communities. This approach is an important step to support communities and city authorities to improve the management of urban trees and resilience of cities to climate hazards like heat waves and flooding.
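
The emulation step can be sketched as fitting a cheap regression to input-output pairs from the process model. The ridge-regression stand-in and synthetic data below are illustrative only, since the abstract does not specify which ML technique is used:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic stand-in for JULES training pairs: normalized forcings
# (e.g. precipitation, temperature, radiation) -> soil moisture
X = rng.random((500, 3))
y = X @ np.array([0.5, -0.3, 0.1]) + 0.4 + 0.01 * rng.standard_normal(500)

# closed-form ridge regression: training is cheap, prediction is instant
Xb = np.hstack([X, np.ones((500, 1))])        # add intercept column
w = np.linalg.solve(Xb.T @ Xb + 1e-3 * np.eye(4), Xb.T @ y)

def emulate(forcing):
    """Predict soil moisture from forcing without running the full model."""
    return np.append(forcing, 1.0) @ w

sm = emulate(np.array([0.5, 0.5, 0.5]))       # close to 0.55 here
```

Once trained on model output, such an emulator can be queried for every grid cell and forecast lead time at negligible cost, which is what makes daily watering guidance feasible.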

This approach explores how a digital twin, combined with a machine-learning emulation of JULES soil moisture, could provide drought information for young trees more efficiently, and it has the potential to scale beyond the Birmingham case study area by transferring the digital twin to other urban areas.

How to cite: Urhausen, S., Hemming, D., Brettle, D., Ferranti, E., and Greenham, S.: Developing a Digital Twin to support the resilience of young trees to drought, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3775, https://doi.org/10.5194/egusphere-egu26-3775, 2026.

X4.109
|
EGU26-5493
|
ECS
Chia-Wei Hsu, Jun-Jun Su, Rui-Zhen Yang, Candera Wijaya, Yu-Cheng Chen, Shih-Chun Candice Lung, Ta-Chih Hsiao, Chao-Hung Lin, and Chih-Da Wu

This study developed a Geospatial Artificial Intelligence (Geo-AI)–based framework to estimate and visualize the three-dimensional (3-D) distribution of ultrafine particles (PM₀.₁) and associated population exposure across Taichung City, Taiwan. An unmanned aerial vehicle (UAV) platform equipped with a P-Trak Ultrafine Particle Counter was deployed to collect high-resolution 3-D PM₀.₁ concentration data across varying altitudes and land-use types. These 3-D PM₀.₁ data were integrated with multi-source geospatial datasets, including 3-D building models, meteorological variables, and emission inventories. The SHapley Additive exPlanations (SHAP) method was then employed to identify key predictors for machine-learning modeling. The optimized model was applied to map the continuous 3-D pollution field and used to estimate and visualize population exposure for each floor level. The resulting Geo-AI model achieved strong predictive performance, with R² values of 0.95 for training and above 0.85 for validation, demonstrating high robustness and predictive capability. Visualizations reveal a nonlinear vertical structure of PM₀.₁ in 3-D space, characterized by near-ground peaks in industrial and traffic zones alongside persistent localized hotspots at mid-to-high elevations. Population exposure assessments highlighted that, despite lower concentrations at higher elevations, the total exposure burden remains significant in mid-to-high-rise residential buildings due to higher population density. This research presents an advanced framework for assessing 3-D air pollution exposure risks in dense urban environments, demonstrating the potential of Digital Twin technologies in supporting air quality management and public health decision-making.
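
The abstract uses SHAP to identify key predictors; as a rough stand-in for that screening step, the same idea can be illustrated with permutation importance (a simpler, model-agnostic measure) on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic data: three candidate predictors, only the first two matter
X = rng.random((400, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.01 * rng.standard_normal(400)

# a simple fitted linear model stands in for the trained ML predictor
Xb = np.hstack([X, np.ones((400, 1))])
w = np.linalg.lstsq(Xb, y, rcond=None)[0]

def predict(M):
    return np.hstack([M, np.ones((len(M), 1))]) @ w

base_mse = np.mean((predict(X) - y) ** 2)

def importance(i):
    """MSE increase when predictor i is shuffled: large means influential."""
    Xp = X.copy()
    Xp[:, i] = rng.permutation(Xp[:, i])
    return np.mean((predict(Xp) - y) ** 2) - base_mse

scores = [importance(i) for i in range(3)]    # predictor 0 dominates
```

Unlike this global ranking, SHAP additionally attributes each individual prediction to its inputs, which is why it suits feature selection for heterogeneous 3-D exposure models.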

How to cite: Hsu, C.-W., Su, J.-J., Yang, R.-Z., Wijaya, C., Chen, Y.-C., Lung, S.-C. C., Hsiao, T.-C., Lin, C.-H., and Wu, C.-D.: Geo-AI-Based Assessment of 3-D Ultrafine Particle Distribution and Population Exposure: A Digital Twin Approach in Taichung, Taiwan, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-5493, https://doi.org/10.5194/egusphere-egu26-5493, 2026.

X4.110
|
EGU26-6409
|
ECS
Qi Zhou, Hui Tang, Jacob Hirschberg, and Fabian Walter

Sediment transport is a fundamental process shaping landscapes and posing significant hazards in mountainous regions. However, traditional field monitoring and simulation approaches, such as grain size sampling and numerical modeling, are often costly and time-consuming. Recent advances in physics-based models and machine learning have substantially improved spatial and temporal resolution. These achievements enable the development of digital twins to explore what-if scenarios and to better understand the dynamic processes involved.

In this work, we combine the probabilistic sediment cascade model (SedCas) with the machine learning–based event detection model (Flow-Alert) to develop a digital twin of a catchment. The former relies solely on climate forcing to simulate sediment dynamics, whereas the latter uses seismic signals to identify extreme sediment transport events, such as debris flows. We address three key questions. First, how to design a digital twin framework that captures the physical components of sediment transport, including erosion on hillslopes, hillslope-to-channel transfer, and channel transport to the catchment outlet, at hourly and even sub-hourly temporal resolution. Second, how to fuse predictions from the physics-based model SedCas and the machine learning–based model Flow-Alert to merge and balance the strengths of these two modeling approaches. Third, how to reduce uncertainty when translating insights from the virtual entity back to the physical entity. We demonstrate that the digital twin framework enables potential users, such as governmental agencies and local stakeholders, to explore what-if scenarios and better understand how climate change and human interventions influence sediment transport dynamics.
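
One common way to fuse two such predictions, not necessarily the authors' scheme, is precision-weighted (inverse-variance) averaging, sketched here with made-up flux numbers:

```python
def fuse(mu_phys, var_phys, mu_ml, var_ml):
    """Precision-weighted fusion of a physics-based and an ML-based
    estimate: each prediction is weighted by its inverse variance."""
    w_p, w_m = 1.0 / var_phys, 1.0 / var_ml
    mu = (w_p * mu_phys + w_m * mu_ml) / (w_p + w_m)
    var = 1.0 / (w_p + w_m)                  # fused uncertainty shrinks
    return mu, var

# hypothetical sediment-flux estimates: the ML estimate is trusted more
mu, var = fuse(10.0, 4.0, 14.0, 1.0)         # -> (13.2, 0.8)
```

The attraction of this rule is that it automatically leans on whichever source is more confident at a given time, e.g. on Flow-Alert's seismic detections during an ongoing debris flow.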

How to cite: Zhou, Q., Tang, H., Hirschberg, J., and Walter, F.: stTwin: A digital twin framework for catchment-scale sediments transport, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6409, https://doi.org/10.5194/egusphere-egu26-6409, 2026.

X4.111
|
EGU26-9426
Kosei Tomami, Atsushi Okamoto, and Toshiaki Omori

As one application of X-ray computed tomography (X-ray CT) to geomaterials, rock CT images have been widely used in the earth and environmental sciences. However, rock CT images suffer from low resolution in the depth direction, caused by factors such as the physical characteristics of rock core samples, geometric constraints of the imaging environment, and measurement limitations of X-ray CT scanners. In this study, we propose data-driven super-resolution based on generative modeling to improve the depth resolution of rock CT images. Our method treats the low-resolution problem as conditional generation using latent diffusion models, a class of generative models. Given three consecutive images at different depth levels, the second (unobservable) rock CT image is generated from the first and third (observable) images. We verify the effectiveness of the proposed method using actual rock CT images obtained in the Oman Drilling Project, an international scientific research project. The experimental results demonstrate that our method outperforms conventional interpolation methods both qualitatively and quantitatively.
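
For context, the conventional interpolation baseline that such a diffusion model is compared against can be as simple as voxel-wise averaging of the two neighbouring slices; a minimal sketch with toy 2×2 slices:

```python
import numpy as np

def interpolate_middle(slice_a, slice_c):
    """Estimate the unobserved middle CT slice as the voxel-wise
    average of its two observed neighbours (the kind of baseline the
    latent-diffusion approach is benchmarked against)."""
    return 0.5 * (slice_a.astype(float) + slice_c.astype(float))

a = np.array([[0.0, 2.0], [4.0, 6.0]])   # observed slice at depth z-1
c = np.array([[2.0, 4.0], [6.0, 8.0]])   # observed slice at depth z+1
b = interpolate_middle(a, c)             # -> [[1, 3], [5, 7]]
```

Linear interpolation cannot synthesize texture absent from both neighbours, which is exactly the gap a generative model conditioned on the two slices aims to close.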

How to cite: Tomami, K., Okamoto, A., and Omori, T.: Depth super-resolution of rock CT images based on latent diffusion models by deep learning, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-9426, https://doi.org/10.5194/egusphere-egu26-9426, 2026.

X4.112
|
EGU26-11263
|
ECS
Marthe Faber, Mauro Cacace, and Denise Degen

Accurate and efficient modelling of geothermal reservoirs is important for sustainable energy production and for the reliable assessment of operational risks. Predicting thermo-hydraulic (TH) system evolution under varying injection and production scenarios remains computationally challenging, particularly when physical knowledge of the subsurface system is incomplete and observational data are sparse. High-fidelity finite-element simulators are typically used to provide physics-based predictions of coupled flow and heat transport governed by complex partial-differential equations (PDEs). Such full-order simulations are, however, often prohibitively expensive for real-time forecasting, which is essential, for instance, in the context of digital twins.

Physics-based machine-learning (PBML) approaches, such as the non-intrusive reduced basis (NIRB) method, address this challenge by constructing physics-consistent surrogate models that project full-order simulation outputs onto a low-dimensional subspace learned from representative snapshots. By retaining only the dominant basis functions, the NIRB surrogate enables orders-of-magnitude speedup in parametric predictions while staying consistent with the physical transport mechanisms and structural assumptions on fracture networks encoded in the full-order model. Despite these advantages, classical NIRB surrogates are intrinsically limited to the physical regimes represented by the governing PDEs, and consequently by the training simulations. If the surrogate does not fully capture the observed system behaviour, it is important to detect and adapt to missing or misrepresented local physics revealed by observational data, such as unmodeled convective heat transport or flow channelling arising from fracture activation.
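
The projection step underlying reduced-basis surrogates can be sketched with a plain POD (truncated SVD) of synthetic snapshots. Note this shows only the basis construction, not the full NIRB method, which additionally learns the parameter-to-coefficient map non-intrusively:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic snapshot matrix: columns are full-order "temperature fields"
# for 30 parameter samples; the true solution manifold is 3-dimensional
modes = rng.standard_normal((200, 3))
coeffs = rng.standard_normal((3, 30)) * np.array([[10.0], [1.0], [0.1]])
S = modes @ coeffs

# POD: a truncated SVD of the snapshots yields the dominant basis
U, sv, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sv**2) / np.sum(sv**2)
r = int(np.searchsorted(energy, 0.999999)) + 1   # modes kept by energy
B = U[:, :r]                                     # reduced basis

# project a full-order field onto the basis and reconstruct it
u = S[:, 0]
u_r = B @ (B.T @ u)
rel_err = np.linalg.norm(u - u_r) / np.linalg.norm(u)
```

Any field in the span of the snapshots is reconstructed almost exactly from just r coefficients, which is the source of the speedup; fields outside that span (unseen physics) are not, motivating the residual-learning framework below.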

To address this need, we propose a complementary residual-learning framework that augments a baseline NIRB surrogate with parameter-to-state maps of residual temperature and pressure fields learned by Kolmogorov-Arnold Networks (KANs). The residual, defined as the difference between observed data (or a synthetic reference solution) and the NIRB model prediction, is interpreted as a proxy for missing or misrepresented physics not explicitly captured by the baseline model. KANs represent mappings as sums of learned univariate functions and provide explicit access to the functional structure of parameter dependence. Thereby, KANs could act as interpretable discrepancy models by learning the residual between observations and NIRB predictions. By analysing the dominant functional families emerging in the learned residual, such as linear dependence characteristic of conduction-dominated regimes or exponential dependence associated with convection, KANs can provide diagnostic insight into missing thermo-hydraulic processes and their relevance across parameter regimes.

We validate the proposed approach synthetically by comparing a conduction-only NIRB surrogate against synthetic reference observations generated with an advection–diffusion model. We expect that KAN-based residual learning both improves predictive accuracy and reveals clear functional signatures of missing convective physics, even when only pointwise information is available. As an outlook, we aim to apply this workflow to real geothermal case studies, where sparse temperature and pressure measurements are available at well locations. In such settings, functional-family learning of residuals offers a promising pathway to improve surrogate predictions and to enhance the physical interpretability of geothermal systems, ultimately supporting more reliable assessments of reservoir behaviour.

How to cite: Faber, M., Cacace, M., and Degen, D.: Detecting Unrepresented Physics in Hybrid Machine Learning Surrogates of Geothermal Systems using Kolmogorov-Arnold Networks, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-11263, https://doi.org/10.5194/egusphere-egu26-11263, 2026.

X4.113
|
EGU26-16511
Matthias Volk, Jacob Alexander Frasunkiewicz, Patrick Laumann, and Atefeh Rahimi

Recently, the active Eifel volcanic region has received increasing interest due to the occurrence of deep low-frequency earthquakes, often interpreted as a sign of rising volatiles in the crust. Additionally, recent tomographic models have resolved vertically inclined low-velocity anomalies beneath the Laacher See volcano, which may indicate enhanced fluid ascent. These observations raise the question of whether volcanic activity in the region is increasing and whether such activity may be beneficial for geothermal exploration.

To address these questions, the DEGREE project is developing a digital laboratory that enhances predictive capabilities by combining geophysical data with geological and numerical models. The laboratory includes workflows that couple data assimilation, geological modeling, and numerical simulations into a single process. A key challenge is the propagation of uncertainties in the input data and parameters through the entire workflow. This allows us to obtain quantitative uncertainties for derived quantities to support decision-making.

The foundation of the laboratory is a collection of diverse datasets compiled during the project. An extensive seismic dataset acquired by the Eifel Large-N network, deployed between September 2023 and September 2024, is used to investigate subsurface structure and active geodynamic processes in the Eifel region. We employ seismic tomography methods to resolve crustal thickness variations and velocity anomalies, together with moment tensor inversion to constrain fault geometries and deformation mechanisms.

Surface geological maps, digital elevation models, and geological cross-sections are used to build 3D structural geological models using the open-source software GemPy. Model construction follows a stepwise approach, starting from a simplified stratigraphic framework and gradually adding geological complexity, such as time-equivalent units and major fault structures. Although the steps are applied sequentially, the geological model is constructed from the input data and is therefore reproducible, enabling integration into subsequent workflow steps.

GemPy addresses uncertainty in geological models by generating ensembles of realizations through sampling input parameters from probability distributions. These ensembles serve as inputs for numerical simulations of physical quantities. Computing adjoint sensitivity kernels allows us to assess how each realization affects model outputs and to identify which models best match available observations, integrating structural uncertainty with process-based simulations. The numerical simulations are performed with LaMEM and its bindings for the Julia programming language. As GemPy is written in Python, the GemPy.jl package has been developed to expose its functionality in Julia.
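
The ensemble idea can be sketched generically: sample uncertain structural parameters from prior distributions and propagate each realization through a forward model. The depths, priors, and constant-gradient temperature model below are hypothetical stand-ins, not GemPy or LaMEM calls:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000                                  # ensemble size

# sample uncertain structural parameters from prior distributions
top = rng.normal(1500.0, 50.0, n)         # depth of a unit top [m]
thick = rng.normal(300.0, 30.0, n)        # unit thickness [m]

# propagate every realization through a toy forward model: temperature
# at the unit base under an assumed constant geothermal gradient
grad = 0.03                               # [K/m], hypothetical
T_base = 10.0 + grad * (top + thick)

# ensemble quantiles quantify the uncertainty of the derived quantity
lo, hi = np.percentile(T_base, [5, 95])
```

In the actual workflow, each realization is a full 3D structural model and the forward step is a LaMEM simulation, but the uncertainty bookkeeping follows the same sample-propagate-summarize pattern.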

The resulting geological and geophysical models may serve as the basis for a Play Fairway Analysis (PFA), which identifies regions with high potential for geothermal exploration. Crucially, this type of analysis requires uncertainty estimates for the modeled physical quantities, which our workflow provides.

From an implementation perspective, the digital laboratory consists of three main parts: a repository to collect data and models and their metadata, workflows and infrastructure for automatic processing, and an interface for visualization and interaction with the results. To demonstrate feasibility, we develop the first prototype in JupyterLab, which accommodates different computing environments and enables an interactive development process.

How to cite: Volk, M., Frasunkiewicz, J. A., Laumann, P., and Rahimi, A.: The DEGREE Project: A Digital Laboratory for Geothermal Exploration in the Eifel Region, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-16511, https://doi.org/10.5194/egusphere-egu26-16511, 2026.

X4.114
|
EGU26-17948
Yulia Gruzdeva, Denise Degen, and Mauro Cacace

A key prerequisite for reliable geoscientific process simulations is the calibration of uncertain model parameters against field observations. In practice, both measurements and simulation outputs are subject to uncertainty, arising from observational errors, limited knowledge of material properties, and inexact physical models. Bayesian inference provides a framework to explicitly acknowledge multiple sources of uncertainty by encoding modelling assumptions in prior distributions and updating them against observational data through the likelihood to obtain posterior estimates. However, applying Bayesian methods remains challenging in coupled multiphysical applications, including thermo-hydro-mechanical problems, as the computational cost of repeated forward evaluations grows rapidly with model complexity.

To address these limitations, we develop a hierarchical simulator for Bayesian calibrations that dynamically combines fast low-fidelity surrogate models with accurate high-fidelity finite-element simulations during the sampling stage. The core of the method stems from a fidelity-selection policy embedded directly in the probabilistic model, which transparently accounts for both surrogate-induced bias and the computational cost associated with high-fidelity simulations. We provide and compare several scenarios that represent different optimization strategies for balancing posterior accuracy and computational efficiency. The resulting hierarchical Bayesian workflow is highly modular; it can be coupled with external high-fidelity solvers through a unified forward interface and is hence applicable to a wider range of geoscientific problems.
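
A classical relative of such fidelity-selection policies is delayed-acceptance Metropolis sampling, where a cheap surrogate screens proposals and the expensive model is evaluated only for promising candidates. The sketch below uses toy Gaussian log-likelihoods and is not the authors' scenario-based policy:

```python
import numpy as np

rng = np.random.default_rng(0)

def loglik_hi(theta):
    """Expensive high-fidelity log-likelihood (toy Gaussian stand-in)."""
    return -0.5 * (theta - 1.0) ** 2

def loglik_lo(theta):
    """Cheap surrogate log-likelihood, deliberately slightly biased."""
    return -0.5 * (theta - 1.05) ** 2

theta, chain, hi_calls = 0.0, [], 0
l_lo, l_hi = loglik_lo(theta), loglik_hi(theta)
for _ in range(5000):
    prop = theta + 0.8 * rng.standard_normal()
    p_lo = loglik_lo(prop)
    # stage 1: the surrogate screens the proposal cheaply
    if np.log(rng.random()) < p_lo - l_lo:
        hi_calls += 1
        p_hi = loglik_hi(prop)
        # stage 2: correct with the high-/low-fidelity likelihood ratio,
        # so the chain still targets the high-fidelity posterior exactly
        if np.log(rng.random()) < (p_hi - l_hi) - (p_lo - l_lo):
            theta, l_lo, l_hi = prop, p_lo, p_hi
    chain.append(theta)

post_mean = np.mean(chain[1000:])
```

The two-stage acceptance keeps the posterior unbiased despite the surrogate's bias, while the expensive model is called on only a fraction of the iterations.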

How to cite: Gruzdeva, Y., Degen, D., and Cacace, M.: A Hierarchical multi-fidelity approach for Bayesian inference for numerical process simulations, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-17948, https://doi.org/10.5194/egusphere-egu26-17948, 2026.

X4.115
|
EGU26-22140
|
ECS
Hussain Alfayez

Objectives:

In this study, we aim to develop a data-driven petrophysical inversion technique in the context of CO₂ sequestration. By integrating reservoir flow simulation, petroelastic modeling, and Graph Neural Networks (GNNs), CO₂ saturation can be estimated within models of multiple grid resolutions. The goal is to enhance accuracy and resolution adaptability in predicting CO₂ plume behavior in subsurface geological formations, thus improving carbon capture and storage (CCS) strategies.

Methodology:

We generated 100 two-dimensional synthetic reservoir models using a sequential indicator simulation algorithm for facies simulation, each populated with heterogeneous porosity and permeability fields. Flow simulations were conducted for 11 years using a central well with a constant injection rate. Petroelastic modeling was then performed to compute changes in P-wave and S-wave velocities and density every six months. The models were resampled to mimic a varying-resolution scenario, with higher resolution near the well. A GNN model handled multi-resolution inputs and outputs, representing each grid cell as a node linked to its nearest eight neighbors, using directions and distances as edge attributes.
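
The graph construction described above (each cell a node, linked to its nearest neighbours with direction and distance as edge attributes) can be sketched in NumPy. The toy coordinates and k=3 below are illustrative; the study links each cell to its eight nearest neighbours:

```python
import numpy as np

def knn_graph(coords, k=8):
    """Link every grid cell (node) to its k nearest neighbours and
    attach unit direction vectors and distances as edge attributes."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude self-loops
    nbrs = np.argsort(d, axis=1)[:, :k]       # k nearest cells per node
    src = np.repeat(np.arange(n), k)
    dst = nbrs.ravel()
    vec = coords[dst] - coords[src]
    dist = np.linalg.norm(vec, axis=1, keepdims=True)
    edge_attr = np.hstack([vec / dist, dist]) # [direction, distance]
    return np.stack([src, dst]), edge_attr

# toy multi-resolution layout: fine cells near the well, coarser outward
coords = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 2],
                   [3, 0], [0, 3], [3, 3], [5, 5]], float)
edges, attrs = knn_graph(coords, k=3)
```

Because connectivity is defined by spatial proximity rather than a fixed lattice, the same message-passing network can operate across coarse and fine cells alike, which is what frees the GNN from a fixed mesh resolution.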

Results, Observations, and Conclusions:

The integrated modeling approach successfully predicted CO₂ plume migration within geological formations, demonstrating high predictive accuracy and robustness. Petroelastic modeling revealed significant changes in reservoir properties such as P-wave and S-wave velocities and density due to CO₂ injection. The Graph Neural Network (GNN) model, optimized through hyperparameter tuning, effectively utilized these changes to predict CO₂ saturation with a Mean Squared Error (MSE) of 0.0217 and a Coefficient of Determination (R²) of 0.981, confirming its high reliability in practical scenarios. In comparison, the Multilayer Perceptron model (MLP) achieved an MSE of 0.0260 and an R² of 0.9695, processing data without considering spatial connections, underscoring the GNN's superior computational efficiency and spatial data integration. Furthermore, visual assessments confirmed the model's accuracy, closely aligning predicted and actual CO₂ saturation levels, especially in dynamically changing reservoir zones. The study concludes that combining static property modeling, flow simulation, petroelastic modeling, and GNNs provides a valuable tool for enhancing CO₂ sequestration strategies, improving the prediction accuracy of CO₂ behavior in the subsurface, and significantly advancing CCS technologies.

Novel/Additive Information:

Our work leverages Graph Neural Networks (GNNs) to predict changes in CO₂ saturation from elastic properties, integrating flow dynamics with petroelastic modeling and deep learning via adaptive meshing grids. This novel approach addresses the limitations of conventional neural networks in adapting to mesh variations. Our project uniquely targets the complex challenges of CO₂ monitoring, advancing sequestration monitoring technologies by bridging seismic monitoring and dynamic flow simulation, providing a tool to predict CO₂ saturation from elastic properties.

How to cite: Alfayez, H.: Physics-Informed Graph Neural Networks for Multi-Resolution CO₂ Saturation Estimation in Subsurface, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-22140, https://doi.org/10.5194/egusphere-egu26-22140, 2026.

Posters virtual: Mon, 4 May, 14:00–18:00 | vPoster spot A

The posters scheduled for virtual presentation are given in a hybrid format for on-site presentation, followed by virtual discussions on Zoom. Attendees are asked to meet the authors during the scheduled presentation & discussion time for live video chats; onsite attendees are invited to visit the virtual poster sessions at the vPoster spots (equal to PICO spots). If authors uploaded their presentation files, these files are also linked from the abstracts below. The button to access the Zoom meeting appears just before the time block starts.
Discussion time: Mon, 4 May, 16:15–18:00
Display time: Mon, 4 May, 14:00–18:00

EGU26-20253 | Posters virtual | VPS31

Progress of the Twin-ER project: pilot digital twin for earthquake risk 

Alejandra Staller, Jorge Gaspar-Escribano, Yolanda Torres, Sandra Martínez-Cuevas, José Juan Arranz, César García-Aranda, Teresa Iturrioz, and José Luis García Pallero and the Twin-ER Team
Mon, 04 May, 14:09–14:12 (CEST)   vPoster spot A

We present the progress of the project Twin-ER: Pilot Digital Twin for Earthquake Risk. The goal of the project is the integration of digital models of the city and the Earth into the structure of a digital twin, focused on seismic risk.

The Earth model includes the generation of new seismic source models based on maps correlating surface deformation and seismic activity rates. Deformation maps will be determined through the analysis of GNSS time series and InSAR images for several dates. Seismic activity rates will be calculated by combining statistical analyses of the seismic catalog with mechanical analyses of earthquake-related stress changes in the crust. The derived maps will show location-, magnitude-, and time-dependent activity rates. Seismic source models will form the basis for the development of seismic hazard maps and constitute the main component of the Earth model.

The city model integrates innovative exposure models based on Cadastral data, enhanced with machine learning and deep learning algorithms to identify building typologies and their seismic vulnerability. These analyses will incorporate data of different nature, such as cadastral reference value or exposure time to high temperatures, with the aim of extending the exposure to a multi-hazard and multi-risk context. The exposure and vulnerability models constitute the main component of the city model.

By combining seismic hazard models on one hand, and exposure and vulnerability models on the other, the seismic risk model will be obtained. This model represents the expected damage and losses in a city in the event of an earthquake. Therefore, it is a crucial piece of information for proposing risk mitigation measures and planning emergency response.

Both the Earth and city models are embedded into the seismic risk digital twin. This digital twin is conceived as a pilot: the model will be fed with the results of risk simulations, which can be visualized in a web environment, leaving the automation of data loading from updated sensors or external servers, and subsequent simulations with that updated data, for future developments.

The project is applied in two study areas of similar size but different, complementary characteristics. One is southeastern Spain, where (1) seismic activity is moderate, and major earthquakes occur rarely, (2) cities have a relatively old building stock and are more vulnerable to earthquakes, and (3) the availability and accessibility to cadastral data are optimal. The other study area is El Salvador, where (1) there is high seismic activity with frequent large earthquakes, (2) cities have a relatively modern building stock with abundant informal construction, and (3) there is no free access to cadastral data.

The advances presented here include the UML model of the entire digital twin, the seismic activity and deformation maps in SE Spain, and the 3D city models of two application scenarios.

How to cite: Staller, A., Gaspar-Escribano, J., Torres, Y., Martínez-Cuevas, S., Arranz, J. J., García-Aranda, C., Iturrioz, T., and Pallero, J. L. G. and the Twin-ER Team: Progress of the Twin-ER project: pilot digital twin for earthquake risk, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-20253, https://doi.org/10.5194/egusphere-egu26-20253, 2026.
