NH6.4 | Enhancing Resilience to Climate and Natural Hazards through AI and Digital Technologies
EDI
Convener: Michele Ronco (ECS) | Co-conveners: Jean-Baptiste Bove (ECS), Oscar J. Pellicer-Valero (ECS), Kai-Hendrik Cohrs (ECS), Alessia Matano (ECS), Maria Vittoria Gargiulo (ECS), Monique Kuglitsch
Orals | Fri, 08 May, 16:15–18:00 (CEST) | Room 1.31/32
Posters on site | Attendance Fri, 08 May, 14:00–15:45 (CEST) | Display Fri, 08 May, 14:00–18:00 | Hall X3
Posters virtual | Mon, 04 May, 14:42–15:45 (CEST)
vPoster Discussion | vPoster spot 3, Mon, 04 May, 16:15–18:00 (CEST)
Recent advances in AI and digital technologies are transforming how we assess and manage risks from climate extremes and natural hazards. LLMs, GenAI, and foundation models enable integration of diverse data sources, while XAI ensures transparency in high-stakes decision-making. Digital twins of the Earth system and human–environment interactions provide powerful platforms for simulating hazard cascades, testing adaptation options, and supporting anticipatory action. This session invites contributions on AI-driven knowledge extraction, hazard prediction, risk assessment, disaster response, and multi-hazard simulation. We particularly welcome work that explores synergies—for example, digital twins generating synthetic data for foundation models, or LLMs embedded as reasoning layers within simulation environments. By highlighting these intersections, the session aims to advance cross-disciplinary dialogue on how converging digital technologies can accelerate resilience to climate and natural hazard risks.

Orals: Fri, 8 May, 16:15–18:00 | Room 1.31/32

The oral presentations are given in a hybrid format supported by a Zoom meeting featuring on-site and virtual presentations. The button to access the Zoom meeting appears just before the time block starts.
Chairpersons: Michele Ronco, Oscar J. Pellicer-Valero, Kai-Hendrik Cohrs
16:15–16:20
16:20–16:40 | EGU26-22994 | solicited | Highlight | On-site presentation
Gustau Camps-Valls

As climate extremes intensify, the gap between hazard detection and effective anticipatory action remains a critical bottleneck for resilience. This talk synthesizes two perspective works to outline a roadmap for the next generation of AI models for the analysis, modeling, and understanding of extreme events, and for their integration into Early Warning Systems (EWS). We first examine the role of deep learning and Explainable AI (XAI) in advancing the detection and physical understanding of extreme weather, ensuring transparency in high-stakes risk assessment. We then propose advancing towards an integrated EWS architecture, leveraging meteorological and geospatial foundation models to predict multi-hazard impacts. By embedding causal AI to ensure reliable reasoning and generative methods for long-term adaptation, these digital technologies may provide a robust framework for simulating hazard cascades and delivering equitable, people-centered disaster response.

How to cite: Camps-Valls, G.: Integrating AI for Climate Resilience, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-22994, https://doi.org/10.5194/egusphere-egu26-22994, 2026.

16:40–16:50 | EGU26-5845 | On-site presentation
Guido Pizzini, Bertrand Rukundo, and Patrice Chataigner

Despite major advances in hazard modelling, climate science, and early warning systems, disaster management decision-making remains constrained by fragmented information, time pressure, and high levels of uncertainty. While large language models (LLMs) show promise in synthesising complex information, their operational use in disaster contexts is limited by concerns around reliability, transparency, and trust. This contribution presents an AI Situation Room architecture designed to address these challenges by embedding LLMs within a structured, agentic decision-support system for disaster risk and humanitarian operations.

At the core of this architecture is AISHA, an agentic superforecaster that combines retrieval-augmented generation, probabilistic reasoning, and explicit hypothesis testing to support situational awareness, short-term risk outlooks, and scenario development. Rather than producing single narrative outputs, AISHA operates across a supervised information value chain: scanning heterogeneous data sources, structuring and triangulating evidence, generating alternative interpretations, assigning confidence levels, and making assumptions and uncertainties explicit. Human analysts remain in the loop at critical stages, ensuring contextual judgement, accountability, and quality control.
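The abstract describes triangulating evidence across sources and assigning explicit confidence levels. As a purely illustrative sketch (the abstract does not specify AISHA's aggregation method), independent per-source confidences for a single hypothesis can be pooled in log-odds space:

```python
from math import log, exp

def pool_log_odds(probabilities):
    """Combine independent probability estimates for one hypothesis by
    summing log-odds, a common evidence-pooling heuristic (not AISHA's
    documented method)."""
    total = sum(log(p / (1 - p)) for p in probabilities)
    return 1 / (1 + exp(-total))

# Three sources give differing confidence that a flood will exceed warning level.
combined = pool_log_odds([0.7, 0.6, 0.8])  # pooled confidence, ~0.93
```

Keeping the per-source inputs alongside the pooled value is one simple way to make assumptions and uncertainties explicit, as the supervised information value chain above requires.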

The AI Situation Room has been piloted in disaster and crisis-related settings to support rapid analysis, anticipatory action discussions, and operational prioritisation. Results indicate that agentic AI can reduce cognitive overload, improve traceability of analytical judgements, and strengthen the translation of complex risk information into actionable insights. Crucially, the approach reframes LLMs from autonomous answer-generators to analytical collaborators that augment expert reasoning under uncertainty.

This presentation contributes a practical, operationally grounded framework for the responsible adoption of LLMs and agentic AI in disaster management. By addressing transparency, governance, and trust, it demonstrates how AI Situation Rooms can help bridge the persistent gap between geoscientific risk knowledge and real-world decision-making in increasingly volatile hazard environments.

How to cite: Pizzini, G., Rukundo, B., and Chataigner, P.: From Data to Decisions: An AI Situation Room for Crisis and Disaster Management, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-5845, https://doi.org/10.5194/egusphere-egu26-5845, 2026.

16:50–17:00 | EGU26-4010 | On-site presentation
Sara De La Fuente, Francisco Raga, Javier Espinosa, Javier Ballester, David Echeverry, Ruben Perez Moreno, and Seliman Neikate

The SAFEPLACE project is a space-enabled innovation initiative, developed within the European Space Agency’s Civil Security from Space (CSS) programme, aimed at improving crisis management and emergency response by facilitating the operational use of Earth Observation (EO), satellite communications, and advanced digital technologies. SAFEPLACE focuses on bridging the gap between complex space assets and the practical needs of public authorities, civil protection agencies, and first responders, enabling faster, more informed, and more coordinated decision-making in emergency situations.

Within this framework, the SAFEPLACE Crisis Assistant is an AI-enabled decision-support tool developed and demonstrated in 2025 to support wildfire crisis management through the operational use of advanced artificial intelligence and space-based information services. The assistant is built on Large Language Model (LLM) technology and exploits Retrieval-Augmented Generation (RAG) techniques combined with advanced prompt engineering to deliver reliable, contextualized, and explainable information to emergency responders operating in time- and resource-critical environments.

The Crisis Assistant acts as a unified conversational interface that allows users to interact naturally with complex crisis-management services and datasets. Through dialogue, users can request situational summaries, follow the evolution of wildfire alerts, access relevant operational knowledge, and obtain tailored recommendations of Earth Observation (EO) data and space-based services. The use of RAG ensures that AI-generated responses are grounded in authoritative sources, historical records, and near-real-time data, significantly reducing uncertainty and enhancing trust in AI-assisted decision-making.
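The grounding step described above — retrieving authoritative passages before the LLM answers — can be sketched minimally. This is a toy term-overlap retriever with hypothetical documents, not the SAFEPLACE implementation, which would use proper embeddings:

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two term-count vectors (Counters)."""
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query; in a RAG pipeline the
    top-k texts are placed in the LLM prompt as grounding context."""
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(docs[d].lower().split())),
                    reverse=True)
    return ranked[:k]

docs = {  # illustrative source snippets
    "alert": "wildfire alert issued for the valencia region with high wind",
    "eo": "sentinel-2 imagery shows active fire fronts and burned area",
    "menu": "cafeteria menu for the operations centre",
}
top = retrieve("current wildfire alert valencia", docs)
```

The key property RAG adds is that every generated statement can be traced back to the retrieved snippets rather than to the model's parametric memory.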

The SAFEPLACE Crisis Assistant was validated in a live operational demonstration in November 2025 during SAFEPLACE Demo 2, organized by Starion together with its partners Vodafone Business and Wireless DNA in Valencia's Emergencies Management Centre, Spain. The demonstration involved around 50 in-person participants and an additional online session attended by more than 30 users, including emergency response organizations, public institutions such as the European Space Agency (ESA) and the Spanish Space Agency (AEE), and industry stakeholders. The assistant was tested using real historical wildfire events, demonstrating its ability to support realistic operational workflows through interactive AI-driven exchanges.

A core feature of the Crisis Assistant is its EO Marketplace Space Data Recommender, which enables users to identify, request, and retrieve appropriate satellite imagery and derived products directly through conversational interaction. Building on the successful 2025 demonstrations, the SAFEPLACE Crisis Assistant will be further evolved in 2026 to extend its capabilities to flood crisis management, while also introducing enhanced AI-driven functionalities for wildfires, consolidating SAFEPLACE as a scalable, multi-hazard crisis assistant for emergency management.

How to cite: De La Fuente, S., Raga, F., Espinosa, J., Ballester, J., Echeverry, D., Perez Moreno, R., and Neikate, S.: The SAFEPLACE Crisis Assistant: Bridging Space Data Services & AI for Faster Crisis & Emergency Decision Support, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-4010, https://doi.org/10.5194/egusphere-egu26-4010, 2026.

17:00–17:10 | EGU26-18676 | ECS | On-site presentation
Mounia El baz, Patrick Ebel, Junjue Wang, Weihao Xuan, Heli Qi, Zhuo Zheng, Naoto Yokoya, Junghwan Park, Jaewan Park, Arthur Elskens, Eléonore Charles, Iacopo Modica, Zachary Foltz, Philippe Bally, Christian Bossung, Marco Chini, Nicolas Longépé, and Gabriele Meoni

Earthquakes are a destructive and oftentimes unanticipated force of nature. To facilitate timely disaster relief, very high resolution spaceborne observations can map urban destruction even over remote or inaccessible terrain. Fostering community-driven innovation on AI-based solutions for rapid mapping of building-level damage, ESA Φ-lab and the International Charter 'Space and Major Disasters' jointly organized the AI for Earthquake Response competition. The activity was designed to emulate the needs and urgency of real post-event activations. In its course, over 261 teams participated on the ESA Φ-lab Challenges platform.

The main contribution of this work is to report the key setup and outcomes of the challenge and to share with the community the winning strategies of the most competitive solutions. We will first provide an overview of recent and related work, then detail the core premises of the competition, including the two-phase structure of the challenge as well as its evaluation principles and data. We will then describe the winning strategies of the best-performing teams, covering data preparation, the data-driven modelling approaches, and each team's own recap and discussion of their accomplishments, before reviewing similarities and differences across models and distilling key insights. Finally, we conclude by reviewing key findings and highlighting open challenges and opportunities for future contributions in rapid mapping for building damage assessment.

We expect this work to foster further innovation in the community, working towards data-driven rapid mapping that may in the future support real post-seismic activations and save human lives.

How to cite: El baz, M., Ebel, P., Wang, J., Xuan, W., Qi, H., Zheng, Z., Yokoya, N., Park, J., Park, J., Elskens, A., Charles, E., Modica, I., Foltz, Z., Bally, P., Bossung, C., Chini, M., Longépé, N., and Meoni, G.: AI for Earthquake Response: Outcomes & insights from a global spaceborne rapid mapping challenge, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18676, https://doi.org/10.5194/egusphere-egu26-18676, 2026.

17:10–17:20 | EGU26-8444 | On-site presentation
Hunghwan Choi, Kyounghwan Kim, Myungdal Son, and Jooyong Lee

Recent advances in artificial intelligence have led to the emergence of Physical AI, which interacts with the real world through robots and physical agents. However, conceptual definitions and system architectures for AI that perceive, interpret, and operate large-scale spatial environments—such as cities, national territories, and the Earth—have not yet been clearly established. This paper defines a new paradigm, Geo-Physical AI, which integrates digital twins and artificial intelligence to perceive, predict, and operate real-world spatial environments, and proposes a collaborative framework for its implementation.

In the proposed Geo-Physical AI architecture, the digital twin layer replicates urban and national environments at high resolution and integrates terrain, infrastructure, transportation, environmental, and social data to support real-time visualization and scenario-based simulation. The artificial intelligence layer functions as a cognitive engine that learns from spatial data to recognize urban patterns, predict future risks, and derive optimal strategies across various domains, including traffic control, disaster response, and urban safety. Through the tight integration of these two technologies, the system continuously performs sensing, analysis, simulation, and execution in the real world.

The framework consists of a three-layer collaborative structure: (1) a Digital Twin Layer responsible for spatial modeling and simulation, (2) an Artificial Intelligence Layer that performs pattern analysis, prediction, and decision optimization, and (3) an Execution Layer that connects analytical results to real-world services and policy implementation. Through application cases in transportation, disaster management, and urban safety, this study demonstrates that Geo-Physical AI enables a shift from reactive, post-event urban management to proactive, predictive, and preventive intelligent city operations.

By conceptualizing and structuring Geo-Physical AI for the first time, this research provides the theoretical and technical foundation for realizing Cognitive Digital Twins that can autonomously perceive and respond to real-world conditions.

How to cite: Choi, H., Kim, K., Son, M., and Lee, J.: Geo-Physical AI: A New Paradigm for Cognitive Digital Twins through the Collaboration of Large Language Models and Digital Twins, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-8444, https://doi.org/10.5194/egusphere-egu26-8444, 2026.

17:20–17:30 | EGU26-9051 | ECS | On-site presentation
Li Zeng, Luyu Ju, Limin Zhang, Zongxian Su, and Quanke Su

Managing settlement risks during the Operation and Maintenance (O&M) phase of immersed tunnels is critical for preventing structural hazards, particularly in mega-infrastructures like the Hong Kong-Zhuhai-Macau Bridge (HZMB). However, conventional risk management relies heavily on fragmented data across heterogeneous sources, manual calculations, and implicit expert knowledge. These dependencies create significant inefficiencies and susceptibility to human error, potentially compromising disaster prevention efforts. To address these challenges, this study introduces TunnelSentinel, a novel Multi-Agent System (MAS) powered by Large Language Models (LLMs) capable of executing end-to-end settlement management processes. The framework integrates three core innovations: (1) a robust multi-agent architecture (comprising Orchestrator, Retriever, Simulator, and Reporter agents) that automates collaboration for complex decision-making while ensuring process transparency; (2) a Structure-Guided Retrieval-Augmented Generation (SG-RAG) method designed to accurately extract insights from hierarchical engineering and geological project documents; and (3) an optimized model configuration strategy balancing performance with computational efficiency. Applied to the HZMB, TunnelSentinel reduced average task completion time to under 62 seconds—a 126× speed improvement over manual operations—while maintaining accuracy exceeding 97% in information retrieval, settlement calculation, and scenario planning. This work demonstrates the transformative potential of Agentic AI in geosciences, offering a scalable solution for autonomous infrastructure resilience and safety.
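The Orchestrator–Retriever–Simulator–Reporter collaboration described above can be sketched as a simple agent chain. Names follow the abstract, but the logic below is an invented toy (stub retrieval, linear settlement extrapolation), not the TunnelSentinel implementation:

```python
def retriever(task):
    # In TunnelSentinel this would query project documents via SG-RAG;
    # here it returns a stub settlement record for the requested segment.
    return {"segment": task["segment"], "observed_mm": [3.1, 3.4, 3.9]}

def simulator(record):
    # Stand-in for the settlement calculation: linear extrapolation.
    s = record["observed_mm"]
    rate = (s[-1] - s[0]) / (len(s) - 1)
    return {**record, "predicted_next_mm": round(s[-1] + rate, 2)}

def reporter(result):
    # Turn the numeric result into a human-readable summary.
    return (f"Segment {result['segment']}: next settlement "
            f"~{result['predicted_next_mm']} mm")

def orchestrator(task):
    """Route the task through the agent chain; a real orchestrator would
    also log each intermediate output for process transparency."""
    return reporter(simulator(retriever(task)))

report = orchestrator({"segment": "E15"})
```

Chaining specialized agents this way is what makes each intermediate step auditable, which the abstract highlights as a core design goal.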

How to cite: Zeng, L., Ju, L., Zhang, L., Su, Z., and Su, Q.: TunnelSentinel: An Agentic AI Framework for Geo-Structural Resilience and Settlement Safety in Immersed Tunnels, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-9051, https://doi.org/10.5194/egusphere-egu26-9051, 2026.

17:30–17:40 | EGU26-6458 | On-site presentation
Tommaso Redaelli, Paolo Mazzoli, Valerio Luzzi, Marco Renzi, Francesca Renzi, and Stefano Bagli

Urban areas are increasingly exposed to flood risk due to climate change, land take, and ageing drainage infrastructure. Although large volumes of geospatial data, numerical models, and meteorological information are available, their operational use in disaster risk reduction (DRR) and disaster risk management (DRM) remains limited. Civil protection officers and first responders often rely on static maps or complex GIS workflows that are poorly suited for rapid, scenario-based decision-making during emergencies. A key challenge is the lack of accessible and intuitive tools capable of translating advanced flood modelling into actionable intelligence in real time.

This contribution presents SaferPlaces Agentic AI, an agentic Large Language Model (LLM)-based digital twin framework designed to democratise access to flood risk intelligence and make professional-grade flood simulations usable by non-technical stakeholders. The system is implemented within the SaferPlaces platform and operates at global scale, allowing flood risk analyses to be activated on demand for any Area of Interest (AOI) worldwide through natural language interaction.

The framework is centred on an autonomous LLM agent that interprets user intents and orchestrates heterogeneous geospatial data sources, meteorological observations and forecasts, and hydrological–hydrodynamic modelling services. Users can trigger complex workflows conversationally—such as simulating forecast-driven pluvial flood scenarios, identifying exposed critical infrastructure, or testing mitigation measures—without requiring GIS or modelling expertise. Outputs include flood extent, water depth, flow velocity, and receptor-level impact metrics, fully interoperable with standard GIS environments and enhanced through immersive 3D and virtual reality visualisation. The modular, tool-based design of the agent enables the integration of additional analytical capabilities, external services, and hazard-specific models over time, supporting future multi-hazard applications such as wildfires, heatwaves, droughts, and compound risk scenarios.
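The modular, tool-based agent design described above typically rests on a registry of callable tools that the LLM invokes with structured arguments. The sketch below is a generic dispatch pattern with invented tool names and a toy flood model, not the SaferPlaces API:

```python
def simulate_pluvial_flood(aoi, rainfall_mm):
    # Toy stand-in for a hydrodynamic simulation service.
    return {"aoi": aoi, "max_depth_m": round(rainfall_mm / 100, 2)}

def exposed_assets(aoi, depth_by_asset, threshold_m=0.3):
    # Toy exposure query: assets whose flood depth exceeds the threshold.
    return sorted(a for a, d in depth_by_asset.items() if d >= threshold_m)

TOOLS = {"simulate_flood": simulate_pluvial_flood,
         "exposed_assets": exposed_assets}

def run_tool(call):
    """Dispatch a structured tool call, as an agent would after parsing
    the user's natural-language intent into {"tool": ..., "args": ...}."""
    return TOOLS[call["tool"]](**call["args"])

flood = run_tool({"tool": "simulate_flood",
                  "args": {"aoi": "Demo AOI", "rainfall_mm": 80}})
hit = run_tool({"tool": "exposed_assets",
                "args": {"aoi": "Demo AOI",
                         "depth_by_asset": {"hospital": 0.5, "school": 0.1}}})
```

Because each capability is just a registered function, additional hazard-specific models can be added over time without changing the agent loop, matching the extensibility argument above.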

Persistent project-level memory enables iterative scenario exploration and rapid adaptation of analyses during evolving emergency situations. To ensure reliability, transparency, and trust in operational contexts, the system adopts a configurable human-in-the-loop approach, allowing users to validate assumptions and control the level of automation.

Through urban flood digital twin applications, early-warning support, and mitigation scenario testing, SaferPlaces Agentic AI demonstrates how agentic systems can bridge the gap between complex geoscientific modelling and real-world emergency decision-making. The approach supports more inclusive, scalable, and effective flood DRR and DRM, contributing to improved preparedness and resilience in a changing climate.

How to cite: Redaelli, T., Mazzoli, P., Luzzi, V., Renzi, M., Renzi, F., and Bagli, S.: SaferPlaces Agentic AI: Democratising Global Flood Risk Intelligence for Disaster Risk Reduction and Management, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6458, https://doi.org/10.5194/egusphere-egu26-6458, 2026.

17:40–17:50 | EGU26-560 | ECS | On-site presentation
Simona Cariello, Claudia Corradino, and Ciro Del Negro

Volcanic eruptions emit large quantities of sulfur dioxide (SO₂) and thermal energy, affecting atmospheric chemistry, aerosol formation, and Earth’s radiative balance. Monitoring these emissions is crucial for understanding eruption dynamics, evaluating climatic impacts, and improving early warning systems. Satellite-based Earth observation, particularly with Sentinel-5P and its TROPOspheric Monitoring Instrument (TROPOMI), offers global coverage for detecting volcanic SO₂, but existing methods, often based on thresholding, tend to lack robustness, especially when models must generalize across diverse volcanic contexts.

Here, we introduce a zero-shot scene-segmentation approach for volcanic plume recognition based on the Segment Anything Model 2 (SAM2), a vision Foundation Model (FM) pretrained on a large-scale visual dataset. Without any task-specific retraining, SAM2 accurately segments volcanic SO₂ plumes in Sentinel-5P SO₂ images. A dedicated prompting procedure is adopted to drive the object recognition process.
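Promptable segmenters such as SAM2 take point (or box) prompts indicating the object of interest. One conceivable way to derive such prompts automatically from an SO₂ field is to seed points at high-concentration pixels; the sketch below illustrates that idea only, and may differ from the authors' actual prompting procedure:

```python
def point_prompts(so2, threshold):
    """Return (row, col) seeds at pixels exceeding `threshold`; these would
    be passed as positive point prompts to a promptable segmenter such as
    SAM2 (toy values, not real TROPOMI data)."""
    return [(r, c) for r, row in enumerate(so2)
            for c, v in enumerate(row) if v > threshold]

field = [  # toy 3x3 SO2 column field, Dobson units
    [0.1, 0.2, 0.1],
    [0.2, 3.5, 2.9],
    [0.1, 0.3, 0.2],
]
prompts = point_prompts(field, threshold=1.0)
```

The appeal of the zero-shot setup is that only this lightweight prompting logic is task-specific; the segmentation model itself needs no labeled plume dataset or retraining.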

The method shows strong performance not only for eruptions with compact, well-isolated SO₂ plumes, such as Mount Etna and Shishaldin, but also in events where the plume disperses over several hundred kilometres, as observed for the Hunga Tonga eruption. Preliminary evaluations indicate performance competitive with, and in some cases exceeding, conventional approaches, while maintaining near-real-time processing capability and avoiding the use of large labeled datasets.    

These results demonstrate the potential of general-purpose vision foundation models for scalable, automated analysis of volcanic emissions, highlighting their relevance for operational monitoring systems and pointing toward broader applications of Foundation Models in Earth observation.

How to cite: Cariello, S., Corradino, C., and Del Negro, C.: Advancing Volcanic SO2 Plume monitoring with a Zero-Shot segmentation approach using Sentinel 5P Tropomi and the SAM2 Foundation Model, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-560, https://doi.org/10.5194/egusphere-egu26-560, 2026.

17:50–18:00 | EGU26-8738 | ECS | On-site presentation
Anamitra Saha and Sai Ravela

Extratropical cyclones (ETCs) dominate mid-latitude wind hazards, yet their risk remains poorly quantified. Unlike tropical cyclones, ETCs lack scalable, physics-based downscaling methods because of their multiscale, asymmetric structure. As a result, probabilistic ETC risk assessment relies on computationally intensive numerical weather prediction models, limiting ensemble size and constraining estimates of extreme risk.

Here we introduce a data-driven generative downscaling framework that maps coarse-resolution reanalysis wind fields (ERA5, 25 km) to convection-permitting resolution (WRF, 4 km), resolving mesoscale structures essential for hazard and loss modeling. Across a broad range of ETC events, the downscaled near-surface wind fields reproduce spatial organization, extremes, and kinetic-energy spectra consistent with high-resolution WRF simulations for flooding and energy applications, while reducing computational cost by orders of magnitude. A key element of this success is coupling statistical inference with generative ML models, which ameliorates the data-paucity issues for rare events.
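To see what generative downscaling adds, it helps to contrast it with the trivial baseline of repeat-upsampling a coarse field. The sketch below is a conceptual toy (the noise term is a stand-in for learned mesoscale structure, not the authors' generative model):

```python
import random

def naive_downscale(coarse, factor):
    """Repeat-upsample a coarse wind field: the trivial baseline, which
    adds resolution but no new physical detail."""
    return [[v for v in row for _ in range(factor)]
            for row in coarse for _ in range(factor)]

def add_detail(fine, sigma=0.5, seed=0):
    """Toy stand-in for generated mesoscale variability: additive noise.
    A real generative model would produce physically plausible, spatially
    coherent structure conditioned on the coarse field."""
    rng = random.Random(seed)
    return [[v + rng.gauss(0, sigma) for v in row] for row in fine]

coarse = [[10.0, 12.0], [11.0, 9.0]]      # 2x2 coarse wind speeds (m/s)
upsampled = naive_downscale(coarse, 2)    # 4x4, blocky
fine = add_detail(upsampled)              # 4x4 with fine-scale variability
```

The scientific claim in the abstract is precisely that the learned fine-scale component reproduces the right spatial organization and spectra, which pure upsampling cannot.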

To extend beyond the historical record, we couple this downscaling model with a data-driven sampling and propagation model, which enables large ensembles of physically plausible high-resolution scenarios. This combined framework substantially improves estimation of tail risks, extending well beyond the training data to extremes that are inaccessible to observations and impractical to sample with conventional numerical models.

How to cite: Saha, A. and Ravela, S.: Learning Synthetic Extratropical Cyclone Models for Climate Extreme Risk Assessments Using Generative Models, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-8738, https://doi.org/10.5194/egusphere-egu26-8738, 2026.

Posters on site: Fri, 8 May, 14:00–15:45 | Hall X3

The posters scheduled for on-site presentation are only visible in the poster hall in Vienna. If authors uploaded their presentation files, these files are linked from the abstracts below.
Display time: Fri, 8 May, 14:00–18:00
Chairpersons: Jean-Baptiste Bove, Alessia Matano, Maria Vittoria Gargiulo
X3.28 | EGU26-5722
Stefano Natali, Edoardo Kimani Bellotto, Wassim El Azami El Adli, Florian Widmer, and Joost Van Bemmelen

The combined use of satellite data, model output, and other geospatial information layers requires a wide set of multidisciplinary skills that are hard to find together among scientists whose training lies in, e.g., understanding, managing, or responding to natural and climate change-connected events. On these premises, there is a need to develop tools that enable non-specialists to access and exploit the increasing capabilities emerging from the fusion of different Earth Observation (EO)-based and other geospatial data.

In response to this, the TheDe project aims to create a new type of data dissemination service that enables the automatic generation of thematic datacubes on demand. It integrates Earth Observation (e.g., Copernicus products) and other geospatial environmental data with Large Language Models (LLMs) and semantic interpretation, transforming diverse datasets into accessible, meaningful information for both domain experts and a broader audience.

 

TheDe acts as an AI assistant that, through a chatbot interface, receives a human language query related to a specific EO task and provides the corresponding data, metadata, and descriptions, ready for download in user-specified formats. Specifically, the query is processed by a tailored LLM framework that transforms human language into complex geospatial queries, mapping high-level EO tasks into concrete data requests. The system then identifies the relevant geospatial datasets and calls the appropriate APIs (e.g., Copernicus CDSE/CDS/ADS, NASA FIRMS, ESA Open Access Hub, etc.). Once the datasets are obtained, the LLM uses the metadata to generate context-rich descriptions that offer practical guidance to the user and are delivered together with the corresponding datasets.

During the system architecture design, a detailed study of the state of the art was conducted, focusing on evaluating the performance of open-source LLMs for EO reasoning through dedicated benchmarks. In parallel, different system architectures were explored, with particular attention to agentic frameworks. Specific techniques such as Retrieval-Augmented Generation (RAG), fine-tuning, and prompt engineering were analysed to enhance the specialization of the various components. Therefore, on top of these studies, an innovative model is proposed for EO data discovery and exploitation.

 

The preliminary outcomes show promising alignment with current sector needs and developments. TheDe introduces the capability to access not only widely used EO data but also their combination with other heterogeneous data sources, facilitating interoperability and scalability.

Finally, TheDe aims to bridge the gap between data systems to support advanced data mining activities beyond traditional Earth Observation services. For this reason, new types of use-cases are proposed, representing innovative EO applications that, in the long term, can leverage the potential of TheDe.

How to cite: Natali, S., Bellotto, E. K., El Azami El Adli, W., Widmer, F., and Van Bemmelen, J.: Thematic DataCubes on-Demand (TheDe): Leveraging Large Language Models (LLMs) for Earth Observation Data Discovery and Exploitation, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-5722, https://doi.org/10.5194/egusphere-egu26-5722, 2026.

X3.29 | EGU26-14350 | ECS
Jacopo Grassi, Dmitrii Pantiukhin, Ivan Kuznetsov, Nikolay Koldunov, Massimo Dragan, and Jost von Hardenberg

Large Language Models (LLMs) and agentic AI are increasingly explored as interfaces for geoscience information, risk communication, and decision support in natural hazards and disaster management. However, most LLM-based assistants remain limited in quantitative reasoning and often lack traceability, reproducibility, and robust uncertainty communication. Here we present XCLIM-AI, an agentic system that couples LLM-based interpretation with deterministic computation of climate indicators through the open-source xclim library. XCLIM-AI can compute >200 standardized climate indices from CMIP6 HighResMIP projection ensembles, enabling responses that combine narrative explanations with transparent, auditable quantitative outputs (e.g., heatwave metrics, drought duration, extreme precipitation indices) and explicit provenance of assumptions and processing steps.
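The deterministic indicators XCLIM-AI delegates to xclim are standardized, auditable computations rather than model-generated text. As a minimal illustration of the kind of index involved, here is a longest-spell heatwave metric written in plain Python (for transparency of the example, not using xclim's actual API):

```python
def max_consecutive_above(tasmax, thresh=30.0):
    """Longest run of days with daily maximum temperature above `thresh`
    (deg C) -- the style of deterministic, reproducible index a library
    like xclim standardizes across datasets."""
    best = run = 0
    for t in tasmax:
        run = run + 1 if t > thresh else 0
        best = max(best, run)
    return best

week = [29.5, 31.0, 32.4, 30.2, 28.9, 33.1, 30.5]  # toy daily maxima (deg C)
spell = max_consecutive_above(week)  # longest spell above 30 deg C: 3 days
```

Because the number is computed, not generated, the agent can cite the input series, threshold, and method alongside the answer, which is exactly the provenance property the abstract emphasizes.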

A key aspect of this work is the integration of XCLIM-AI within ClimSight, a multi-agent platform for localized climate information.  In the integrated architecture, general-purpose agents handle retrieval and reasoning over scientific and contextual information, while XCLIM-AI performs on-demand, tool-based computation of indicators requested by the user query.

We evaluate four system configurations: (1) a plain LLM baseline, (2) XCLIM-AI, (3) ClimSight, and (4) an integrated ClimSight–XCLIM architecture, using a hybrid assessment protocol that combines scalable LLM-as-a-judge scoring with blinded human expert evaluation. Performance is assessed across four criteria central to climate- and hazard-relevant services: relevance, credibility, uncertainty communication, and actionability. Results show systematic gains over the baseline, with the strongest improvements in actionability and uncertainty reporting when indicator computation is available and properly integrated. We also observe that simply increasing contextual information does not automatically increase perceived credibility, highlighting the importance of traceable quantitative evidence and evaluation protocols tailored to operational trust. We conclude by discussing implications for the reliable adoption of agentic AI in geosciences and hazard-facing workflows, and by outlining a generalizable evaluation framework for tool-augmented LLM systems.

How to cite: Grassi, J., Pantiukhin, D., Kuznetsov, I., Koldunov, N., Dragan, M., and von Hardenberg, J.: Augmenting Large Language Models with Climate Indicator Computation for Next-Generation Climate Services, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14350, https://doi.org/10.5194/egusphere-egu26-14350, 2026.

X3.30 | EGU26-1400 | ECS
Lichun Wu

Crisis maps and drone imagery are widely produced during humanitarian emergencies, yet their interpretation requires expertise and time - resources that are scarce during response operations. This work presents PromptAid-Vision, a web-based platform that integrates vision–language models (VLMs) to support rapid interpretation of crisis maps and disaster imagery for emergency decision making.

The prototype includes four core functions: image upload, dataset exploration, analytics visualization, and an administration dashboard. It is designed to streamline expert data collection, evaluate VLM performance for humanitarian image interpretation, and enable future model fine-tuning. Experts can upload crisis images and receive VLM-generated descriptions, analyses, and recommended actions. They may edit these outputs, providing high-quality image-text pairs for future training. A built-in survey allows users to score VLM responses across three dimensions - accuracy, context, and usability.

The system currently integrates a range of commercially available VLMs and presents all collected data, user interactions, and model performance metrics through an analytics dashboard. An administrative interface supports model configuration and system-prompt management.

The work contributes: (1) the creation of an expert-reviewed dataset of crisis image-interpretation pairs, and (2) an evaluation framework for assessing VLM performance in humanitarian contexts. Next steps include public deployment for large-scale data collection and fine-tuning of VLMs for crisis-mapping applications.

How to cite: Wu, L.: PromptAid Vision: AI-Assisted Crisis Image Interpretation Performance Evaluation and Expert-Reviewed Data Collection Platform, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1400, https://doi.org/10.5194/egusphere-egu26-1400, 2026.

X3.31
|
EGU26-19288
|
ECS
Ali Gholami, Marlon Vieira Passos, Amir Rezvani, Jonas Althage, and Zahra Kalantari

Climate change and urbanization have increased the frequency and severity of flood events in many cities, highlighting vulnerabilities from the interdependence of critical systems in urban environments. In highly connected cities, flood impacts rarely remain localized but instead propagate across infrastructures, services, and social systems, generating cascading effects that amplify societal, economic, and environmental consequences. Despite growing recognition of compound and cascading risks, most flood risk studies continue to focus on direct impacts or single-sector analyses, with limited capacity to capture how flood-triggered disruptions evolve and interact across interconnected systems in space and time.

To address this gap, this study develops an integrated framework and a web-based tool for analysing flood-driven cascading risks, demonstrated for the city of Stockholm. Long-term and real-time flood risk maps provide geospatial hazard inputs that trigger infrastructure failure propagation across water, electricity, and transport systems.

Applying this framework helps identify spatial patterns of cascading flood impacts, revealing hotspots where interconnected systems exhibit heightened vulnerability to extreme events. These impacts demonstrate how indirect effects can dominate overall risk, often exceeding direct flood damages. By making complex cascade dynamics transparent and explorable, this approach supports improved situational awareness, facilitates cross-sectoral dialogue, and enhances decision-making for flood risk management and adaptation planning. The framework contributes to advancing the assessment of cascading risks from extreme hydrological events and provides a foundation for more resilient and integrated approaches to managing flood impacts in a changing climate.
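The failure-propagation step described above can be sketched as a breadth-first traversal of an infrastructure dependency graph. The asset names and the all-or-nothing failure rule below are illustrative assumptions, not the study's actual model:

```python
from collections import deque

def propagate_cascade(dependents, initially_flooded):
    """Breadth-first propagation of failures over a dependency graph.

    dependents maps each asset to the assets that depend on it; an asset
    fails once any asset it depends on has failed (a deliberately simple
    worst-case rule for illustration).
    """
    failed = set(initially_flooded)
    queue = deque(initially_flooded)
    order = {a: 0 for a in initially_flooded}   # cascade "generation" of each failure
    while queue:
        asset = queue.popleft()
        for dep in dependents.get(asset, []):
            if dep not in failed:
                failed.add(dep)
                order[dep] = order[asset] + 1
                queue.append(dep)
    return failed, order

# Toy network: a flooded substation takes down a pump station and a metro
# line, and the pump station outage in turn floods a road tunnel.
dependents = {
    "substation_A": ["pump_station_1", "metro_line_3"],
    "pump_station_1": ["road_tunnel_7"],
}
failed, order = propagate_cascade(dependents, ["substation_A"])
```

The `order` dictionary records how many propagation steps separate each failure from the triggering flood, which is one simple way to expose the indirect effects the abstract argues can dominate overall risk.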

How to cite: Gholami, A., Vieira Passos, M., Rezvani, A., Althage, J., and Kalantari, Z.: Managing Cascading Impacts on Critical Infrastructure in Stockholm, Sweden, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-19288, https://doi.org/10.5194/egusphere-egu26-19288, 2026.

X3.32
|
EGU26-3942
|
ECS
Kecheng Lu and Ruidong Li

Urban transportation resilience is increasingly threatened by the complex spatiotemporal dynamics of rainfall, which trigger cascading disruptions through pluvial flooding-induced network perturbations. However, the resulting impact patterns of rainfall on transportation network performance remain poorly understood, underscoring the need for systematic assessment across the full spectrum of rainfall conditions. Thus, this work integrates high-resolution pluvial flood modeling with microscopic traffic simulation to investigate traffic performance degradation in the Beijing Municipal Administrative Center across rainfall scenarios drawn from a 22-year high-resolution record. SHapley Additive exPlanations (SHAP) are utilized to attribute variations in network performance to specific spatiotemporal rainfall characteristics, identifying the dominant drivers of traffic congestion. Building on these mechanistic insights from the full-spectrum series, we systematically reveal the critical thresholds that trigger undesirable transitions from stability to failure. These thresholds serve as a vital scientific reference for the development of impact-based early warning systems, facilitating proactive disaster mitigation and enhancing urban resilience.
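The SHAP attribution idea can be illustrated in closed form for a linear surrogate model, where the Shapley value of feature i is exactly phi_i = w_i * (x_i - E[x_i]). The feature names and weights below are invented for illustration; a real workflow would apply a package such as `shap` (e.g. its `TreeExplainer`) to the trained model:

```python
# Closed-form Shapley values for a linear surrogate f(x) = sum_i w_i * x_i:
#   phi_i = w_i * (x_i - E[x_i])
# This demonstrates the additivity property SHAP guarantees: attributions
# sum to f(x) - f(baseline).  All numbers are illustrative.
weights = [2.0, 0.5, -1.0]     # sensitivities: peak intensity, duration, storm distance
baseline = [0.5, 0.5, 0.5]     # expected feature values over all scenarios
x = [0.9, 0.2, 0.7]            # one rainfall scenario

phi = [w * (xi - bi) for w, xi, bi in zip(weights, x, baseline)]

def f(z):
    return sum(w * zi for w, zi in zip(weights, z))

# phi = [0.8, -0.15, -0.2]; their sum equals f(x) - f(baseline) = 0.45
gap = f(x) - f(baseline)
```

The same additivity holds for the tree-ensemble explainers used in practice, which is what lets per-feature attributions be read as contributions to the predicted performance loss.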

How to cite: Lu, K. and Li, R.: Attributing Urban Traffic Performance Loss to Spatiotemporal Storm Patterns: A Full-Spectrum Analysis across a 22-Year Rainfall Record, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3942, https://doi.org/10.5194/egusphere-egu26-3942, 2026.

X3.34
|
EGU26-15632
|
ECS
Yunfan Zhang, Luyu Ju, and Limin Zhang

Landslides are among the most destructive geological hazards, requiring rapid, accurate, and comprehensive risk assessment to minimize loss of life and property. Traditional management systems often struggle to integrate heterogeneous data sources—such as real-time environmental metrics and unstructured historical records—resulting in delayed decision-making. To address this challenge, this paper proposes a novel multi-agent system framework designed for automated landslide risk management and emergency response. The proposed framework orchestrates three specialized agents to achieve a holistic understanding of disaster risks. The first agent, the Data Processing Agent, is responsible for the real-time acquisition of IoT data, specifically rainfall intensity. It utilizes embedded AI algorithms to process this time-series data and compute instantaneous landslide probability. The second agent, the Contextual Retrieval Agent, leverages Retrieval-Augmented Generation (RAG) technology. It retrieves and synthesizes relevant historical landslide documentation and multi-modal geological reports, providing a qualitative context to the quantitative data. The third agent, the Decision and Planning Agent, functions as the central reasoning unit. It fuses the probabilistic outputs from the first agent and the historical context provided by the second agent. Based on this multi-modal synthesis, the agent determines the current disaster risk level and automatically generates targeted evacuation plans for residents in affected areas. Experimental validation demonstrates the efficacy of this multi-modal framework in complex disaster scenarios. The system achieved a 30% improvement in response speed compared to traditional methods. Furthermore, the framework successfully realized a fully automated workflow from data acquisition to strategic planning, significantly enhancing the reliability and timeliness of landslide disaster management.

How to cite: Zhang, Y., Ju, L., and Zhang, L.: A Multimodal Multi-Agent Framework for Automated Landslide Risk Management, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-15632, https://doi.org/10.5194/egusphere-egu26-15632, 2026.

X3.35
|
EGU26-3725
|
ECS
Sen Yang, Yi Zhang, and Chen Gu

Accurate simulation of crowd evacuation processes is essential for evaluating the safety and resilience of communities during disaster emergencies. Conventional agent-based evacuation models effectively capture individual movement and interactions but often rely on predefined behavioral rules, limiting their ability to represent adaptive reasoning, information exchange, and context-dependent decision-making in rapidly changing environments. This study presents an agent-based evacuation simulation framework in which large language models (LLMs) are embedded as the decision-making components of individual agents. Each agent maintains internal states, including personality attributes, environmental perceptions, and decision histories, while the LLM enables adaptive reasoning and communication based on evolving situational context. To ensure scalability for large populations, batch prompting and parallel computation strategies are adopted to mitigate the computational cost introduced by LLM integration. The framework supports both pedestrian and vehicular agents, allowing multimodal evacuation dynamics to be examined within a unified simulation environment. A real-world disaster evacuation scenario is used to evaluate the proposed approach. Results indicate that LLM-enhanced agents exhibit more flexible, context-aware, and realistic behavioral patterns compared with traditional rule-based models. The proposed framework reduces dependence on manually specified behavioral assumptions and provides a scalable foundation for probabilistic evacuation performance assessment and strategy evaluation under diverse hazard conditions.
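A minimal sketch of the batch-prompting pattern follows, with a deterministic stub standing in for the real LLM call; the function `query_llm_batch`, the agent attributes, and the decision strings are all hypothetical, not the study's implementation:

```python
# Sketch of batch-prompted agent decisions in one simulation step.
def query_llm_batch(prompts):
    # Stand-in for a real LLM API: a production system would send all
    # prompts in a single batched request to amortize inference cost.
    return ["move_to_exit" if "smoke" in p else "wait" for p in prompts]

class Agent:
    def __init__(self, name, personality):
        self.name = name
        self.personality = personality
        self.history = []          # decision history, part of the agent's internal state

    def build_prompt(self, perception):
        return f"{self.name} ({self.personality}) sees: {perception}"

agents = [Agent("A1", "cautious"), Agent("A2", "bold")]
perceptions = {"A1": "smoke in corridor", "A2": "clear hallway"}

# One batched call decides for every agent in this step, rather than one
# LLM request per agent.
prompts = [a.build_prompt(perceptions[a.name]) for a in agents]
decisions = query_llm_batch(prompts)
for agent, decision in zip(agents, decisions):
    agent.history.append(decision)
```

The design point is that the per-step cost becomes one (large) request instead of N small ones, which is what makes LLM-driven agents tractable for large populations.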

How to cite: Yang, S., Zhang, Y., and Gu, C.: Large Language Model–Enhanced Agent-Based Modeling for Intelligent Crowd Evacuation under Disaster Scenarios, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3725, https://doi.org/10.5194/egusphere-egu26-3725, 2026.

X3.36
|
EGU26-6856
|
ECS
Zongkui Guan, Daan Buekenhout, Daniel Eduardo Villarreal Jaime, Lukas Sterckx, Ricardo Reinoso-Rondinel, and Patrick Willems

Urban flood modelling faces significant challenges when applied to rainfall events for which observational data are scarce, thereby limiting the reliability of flood forecasts under unseen conditions. Enhancing model transferability is therefore essential for effective flood hazard assessment and emergency response, yet this issue remains insufficiently addressed in current urban flood research. Recent advances in machine learning offer promising opportunities to improve flood model transferability while preserving computational efficiency and interpretability. In particular, ensemble-based methods such as Random Forest (RF) models demonstrate robust performance with limited training data and provide valuable insights into model behaviour.

This study presents a simple and interpretable RF-based framework for transferable urban flood simulation, developed for the city of Antwerp. The model is trained using spatial inundation depth data generated by a detailed hydrodynamic model, relying on a limited set of input variables, including digital elevation, land cover, and radar rainfall information. Training is performed on one historical rainfall event and evaluated on an independent event to assess transferability. To improve adaptation to unseen rainfall conditions, spatial fine-tuning is applied using only 10% of the flood impact data from the target event.

The proposed framework achieves strong predictive skill, with Nash–Sutcliffe efficiency values exceeding 0.77 and Kling–Gupta efficiency above 0.87, while enabling rapid predictions over large urban domains. Comparative analyses further show that the RF-based approach consistently outperforms alternative machine learning models under both transfer and uncertainty scenarios.

Overall, this study demonstrates that a classic RF model can deliver an efficient, transferable, and interpretable solution for rapid urban flood simulation, supporting improved flood risk management and emergency decision-making.
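The two skill scores reported above have standard definitions that can be computed directly. A minimal sketch follows; the inundation depths are invented toy numbers, not the Antwerp results:

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((o-s)^2) / sum((o-mean(o))^2)."""
    mo = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mo) ** 2 for o in obs)
    return 1.0 - num / den

def kge(obs, sim):
    """Kling-Gupta efficiency (2009 form): 1 - sqrt((r-1)^2 + (a-1)^2 + (b-1)^2),
    with r the correlation, a the ratio of std devs, b the ratio of means."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim) / n)
    r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (n * so * ss)
    return 1.0 - math.sqrt((r - 1) ** 2 + (ss / so - 1) ** 2 + (ms / mo - 1) ** 2)

# Hypothetical inundation depths (m) at a few grid cells:
# hydrodynamic reference vs. RF prediction.
obs = [0.10, 0.45, 0.80, 0.30, 0.60]
sim = [0.12, 0.40, 0.75, 0.33, 0.58]
```

Both scores equal 1 for a perfect simulation and degrade as errors in timing, variability, or bias grow, which is why they are reported together.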

How to cite: Guan, Z., Buekenhout, D., Villarreal Jaime, D. E., Sterckx, L., Reinoso-Rondinel, R., and Willems, P.: A Simple and Interpretable Random Forest Framework for Transferable Rapid Urban Flood Simulation, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6856, https://doi.org/10.5194/egusphere-egu26-6856, 2026.

X3.37
|
EGU26-3721
|
ECS
Zheng-Da Jiang and Yuan-Chien Lin

With the intensification of climate change, extreme rainfall events have become more frequent, increasing the risks of urban flooding and river overflow. As a result, real-time water level monitoring has become essential for disaster prevention and water resources management. Conventional monitoring methods mainly rely on water gauges and sensors, which are costly to install and maintain and are often constrained by environmental and terrain conditions. Moreover, most image-based approaches require calibrated staff gauges as reference objects, limiting their flexibility in practical applications.

This study proposes a daytime water level monitoring approach that integrates existing CCTV systems with deep learning techniques. Instance segmentation models based on Mask R-CNN and YOLOv11 are employed to automatically extract water regions from images, and their performance is evaluated in terms of mask quality and inference efficiency. Vertical pixel variations at selected locations within the segmented water regions are further analyzed to estimate water level changes. The results indicate that the proposed method can effectively capture daytime water level variation trends, offering advantages such as low cost, non-contact measurement, and high scalability for multi-station real-time monitoring.


Keywords: Deep learning, image detection, water level monitoring, CCTV
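The level-estimation step, reading the topmost water pixel in a chosen column of the segmentation mask and scaling its frame-to-frame shift, can be sketched as follows. The pixel-to-metre factor is a hypothetical calibration constant; the masks are toy binary arrays standing in for model output:

```python
# Estimate relative water level change from binary segmentation masks.
def surface_row(mask, col):
    """Row index of the topmost water pixel (1) in the given column,
    or None if the column contains no water."""
    for row, line in enumerate(mask):
        if line[col] == 1:
            return row
    return None

def level_change_m(mask_ref, mask_now, col, metres_per_pixel):
    r_ref, r_now = surface_row(mask_ref, col), surface_row(mask_now, col)
    # Smaller row index = surface higher in the image = higher water level.
    return (r_ref - r_now) * metres_per_pixel

mask_t0 = [[0, 0], [0, 0], [1, 1], [1, 1]]   # surface at row 2
mask_t1 = [[0, 0], [1, 1], [1, 1], [1, 1]]   # surface at row 1: level rose
rise = level_change_m(mask_t0, mask_t1, col=0, metres_per_pixel=0.05)
```

This yields only relative variation trends, consistent with the abstract's claim; converting to absolute stage would require an external datum the method deliberately avoids.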

How to cite: Jiang, Z.-D. and Lin, Y.-C.: Development of Real-Time Water Level Detection Technique by CCTV and Deep Learning, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3721, https://doi.org/10.5194/egusphere-egu26-3721, 2026.

X3.38
|
EGU26-21127
Jasmin Lampert, Phillipp Fanta-Jende, Pascal Thiele, Lorenzo Beltrame, Jules Salzinger, Adrián Di Paolo, Ignacio Masari, Felix Geremus, Albin Bjärhall, Benjamin Schumacher, and Diogo Duarte

The EMERALD project addresses critical challenges in enhancing forest resilience to climate-driven natural hazards, with a particular focus on the timely detection and monitoring of forest disturbances such as windthrows. These disturbances are increasingly amplified by climate extremes and pose substantial ecological and economic risks, including biodiversity loss, carbon stock degradation, and cascading impacts on ecosystem services. Despite the growing availability of Earth observation (EO) data, operational forest monitoring remains constrained by cloud cover, terrain-induced shadows, and limited spatial resolution, reducing the reliability of hazard assessment and early response.
To overcome these limitations, EMERALD extends SAFIR’s de-clouding and de-shadowing core capabilities and introduces super-resolution methods to enhance the spatial resolution of Sentinel-2 data. More specifically, EMERALD introduces a latent super-resolution approach, in which high-resolution representations are not generated as an end product but as intermediate feature states optimized for downstream hazard-relevant tasks, such as forest disturbance detection, tree species discrimination, and health assessment. The super-resolution component is therefore task-supervised, coupling image reconstruction objectives with performance metrics from downstream applications to ensure that enhanced spatial detail directly translates into improved hazard assessment capability rather than purely visual fidelity.
A third core component of EMERALD is the rigorous validation of AI-derived products using high-quality image pairs that combine Sentinel-2 observations with very high-resolution Uncrewed Aerial Vehicle (UAV) data. These paired datasets enable quantitative assessment of reconstruction fidelity, uncertainty, and disturbance detectability across spatial scales, strengthening confidence in AI outputs for decision makers. By leveraging datasets from diverse European forest landscapes ranging from Austria to Portugal, EMERALD explicitly addresses geographic transferability and bias, a critical requirement for continental-scale hazard and resilience monitoring.
By improving the accuracy, timeliness, and transparency of forest disturbance detection, EMERALD supports AI-enabled decision-making for forest managers and policymakers, demonstrating how advanced digital technologies can enhance resilience to climate-driven natural hazards.

How to cite: Lampert, J., Fanta-Jende, P., Thiele, P., Beltrame, L., Salzinger, J., Di Paolo, A., Masari, I., Geremus, F., Bjärhall, A., Schumacher, B., and Duarte, D.: Establishing an Earth observation Super-resolution and Validation Framework for Improved Climate Hazard Assessment and Response in Forestry, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21127, https://doi.org/10.5194/egusphere-egu26-21127, 2026.

X3.39
|
EGU26-5249
|
ECS
Antonello Squintu, Mehri Hashemi Devin, Angela Andrigo, Alessandro Tosoni, Eman Shaker, Elena Xoplaki, Alvise Papa, and Enrico Scoccimarro

The city of Venice (Italy) is highly vulnerable to weather-driven Sea Level Height (SLH) surge, which causes serious disruptions to city services (e.g. water-ambulances and water-buses) and damage to commercial and cultural assets. Similarly, due to its orography, Alexandria (Egypt) suffers from coastal floods, which heavily affect infrastructure. Early detection of these events is of paramount importance to increase the preparedness of citizens and stakeholders and to optimize the organization of major events. The increased frequency and intensity of High Water events are linked to the rise in average global SLH and to the combination of astronomical tide and weather-driven SLH surge. While the first two components can be accurately determined via observations and astronomical calculations, the meteorological contribution requires weather forecasts as inputs. The MedEWSa project aims to improve the Early Warning Systems (EWS) of the two case studies by enhancing the forecasts of weather-driven SLH anomalies employing AI algorithms. This work began with the use of the evolutionary algorithm PCRO-SL (Probabilistic Coral Reef with Substrate Layers) on ERA5 reanalysis data to detect, among a set of candidates in the Euro-Mediterranean domain, the relevant lagged drivers of SLH anomaly. These drivers were used to train multiple Neural Networks and Tree-Based models, with in-situ observations as target series. The algorithms were fine-tuned and evaluated with the objective of identifying the most suitable one. The selected model has been implemented for daily application to the latest issued forecasts, providing the Venice Municipality Control Room with predictions of SLH extended to the sub-seasonal time horizon. These forecasts are currently being compared with the output of the standing system, assessing the added value and the improved capability of the EWS.
Concurrently, the experience gained from the Venetian case has been transferred to the Egyptian case, allowing the initialization of a SLH EWS and increasing the preparedness of the city of Alexandria to coastal floods.
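The construction of lagged driver features feeding the downstream models can be sketched as below; the driver names, values, and lags are illustrative placeholders, not the drivers PCRO-SL actually selects:

```python
# Build a lagged design matrix from candidate driver series: each row pairs
# a target day with driver values shifted back by their selected lags.
def lagged_features(series, lags):
    """series: {name: [value at t0, t1, ...]}; lags: {name: lag in days}.
    Rows start at the largest lag so every lagged value exists."""
    max_lag = max(lags.values())
    n = min(len(v) for v in series.values())
    rows = []
    for t in range(max_lag, n):
        rows.append([series[name][t - lag] for name, lag in sorted(lags.items())])
    return rows

# Hypothetical drivers of SLH anomaly with hypothetical lags.
drivers = {
    "msl_pressure": [1012, 1008, 1005, 1010, 1015],
    "wind_sirocco": [3.0, 7.5, 9.0, 4.0, 2.0],
}
lags = {"msl_pressure": 1, "wind_sirocco": 2}
X = lagged_features(drivers, lags)
# Row for day t uses pressure at t-1 and wind at t-2.
```

Each row of `X` can then be paired with the same-day in-situ SLH observation to train the neural-network and tree-based candidates.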

How to cite: Squintu, A., Hashemi Devin, M., Andrigo, A., Tosoni, A., Shaker, E., Xoplaki, E., Papa, A., and Scoccimarro, E.: Improving Sea Level Height warnings in Venice (Italy) and Alexandria (Egypt) with hybrid sub-seasonal forecasts, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-5249, https://doi.org/10.5194/egusphere-egu26-5249, 2026.

Posters virtual: Mon, 4 May, 14:00–18:00 | vPoster spot 3

The posters scheduled for virtual presentation are given in a hybrid format for on-site presentation, followed by virtual discussions on Zoom. Attendees are asked to meet the authors during the scheduled presentation & discussion time for live video chats; onsite attendees are invited to visit the virtual poster sessions at the vPoster spots (equal to PICO spots). If authors uploaded their presentation files, these files are also linked from the abstracts below. The button to access the Zoom meeting appears just before the time block starts.
Discussion time: Mon, 4 May, 16:15–18:00
Display time: Mon, 4 May, 14:00–18:00
Chairpersons: Kasra Rafiezadeh Shahi, Ioanna Triantafyllou

EGU26-6783 | Posters virtual | VPS12

AI-Powered Digital Twin Framework for Windstorm Emergency Management in Interconnected Critical Infrastructures 

Balaji Venkateswaran Venkatasubramanian, Christos Laoudias, and Mathaios Panteli
Mon, 04 May, 14:42–14:45 (CEST)   vPoster spot 3

Extreme windstorms pose significant risks to interconnected critical infrastructures such as power, transportation, and telecommunication systems. Wind-induced damage to physical assets, including overhead lines and roadside vegetation, can trigger cascading failures across interdependent networks, leading to widespread service disruptions and societal impacts. Anticipating these cascading effects under uncertain and evolving windstorm conditions remains a major challenge for emergency and crisis management.

An AI-powered Digital Twin (DT) framework for windstorm emergency management is introduced in this presentation, focusing on interconnected critical infrastructures exposed to extreme wind hazards. The framework integrates physics-based windstorm simulation with cascading impact analysis within a unified digital environment, enabling systematic assessment of the interconnected infrastructure performance across a wide range of plausible windstorm scenarios. Rather than relying solely on historical events, physically informed models are used to generate synthetic windstorm scenarios that support preparedness planning and stress-testing under future extreme conditions.

Building on ensembles of simulated windstorm scenarios, the framework can incorporate Generative AI (GenAI) techniques as a post-simulation analytical layer for vulnerability and risk analysis. GenAI operates on the outputs of physics-based simulations, learning asset-level and system-level operational behaviors and vulnerability patterns from simulated impacts, rather than replacing the underlying hazard or infrastructure models. In this role, GenAI captures complex and nonlinear relationships between wind event characteristics and cascading infrastructure failures, enabling efficient synthesis and generalization across large scenario ensembles. This hybrid physics–AI approach supports rapid and accurate identification of vulnerable assets across interconnected infrastructures, spatial hotspots of risk, and conditions that may lead to cascading disruptions under future windstorm scenarios, while preserving the physical consistency of the Digital Twin.

The applicability of the proposed framework is demonstrated through representative case studies involving national-scale interconnected power, telecommunication, and transportation infrastructures in Cyprus, serving as an example implementation. The results illustrate how the AI-powered Digital Twin can support emergency and crisis management at a national level by enabling stress-testing of infrastructure systems, identification of highly vulnerable and critical assets in the Cyprus interconnected infrastructure, improving situational awareness on critical wind-induced cascading risks, and informing response and recovery strategies under severe windstorm conditions.

Overall, this work highlights the potential of hybrid physics-based and AI-enhanced Digital Twins as decision-support tools for windstorm emergency management in interconnected critical infrastructures, providing a flexible and extensible foundation for improving resilience to climate-driven hazards.

How to cite: Venkatasubramanian, B. V., Laoudias, C., and Panteli, M.: AI-Powered Digital Twin Framework for Windstorm Emergency Management in Interconnected Critical Infrastructures, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6783, https://doi.org/10.5194/egusphere-egu26-6783, 2026.
