NH6.10 | Remote Sensing and Explainable AI for Hazard Assessment and Real-Time, Large-Scale Disaster Monitoring | EDI
Convener: Paraskevas Tsangaratos | Co-conveners: Nina Merkle, Raffaele Albano, Yao Sun, Wei Chen, Ioanna Ilia
Orals | Tue, 05 May, 10:45–12:30 (CEST) | Room 1.14
Posters on site | Attendance Tue, 05 May, 14:00–15:45 (CEST) | Display Tue, 05 May, 14:00–18:00 | Hall X3
In crisis situations, decision-makers rely on timely, accurate, and trustworthy information about hazard extent, exposed assets, and potential impacts to guide response actions and reduce risk. Recent advances in satellite, airborne, and UAV remote sensing—combined with ground-based sensors and IoT—now make near-real-time monitoring possible at regional to global scales, even in highly vulnerable areas. At the same time, AI and machine learning are accelerating the conversion of these data streams into actionable insights. However, key challenges remain, including scalability, robustness across diverse conditions, uncertainty quantification, and transparency in model behavior.

This session invites contributions that integrate multi-sensor observations with AI—particularly explainable and interpretable methods—to support hazard detection, damage and impact assessment, forecasting, and susceptibility/hazard/risk mapping. Relevant topics include rapid mapping and alert systems, multi-platform data fusion, UAV-enabled monitoring, benchmark datasets and standards, and best practices for training, evaluation, and trustworthy deployment in operational and crisis settings.

Orals: Tue, 5 May, 10:45–12:30 | Room 1.14

The oral presentations are given in a hybrid format supported by a Zoom meeting featuring on-site and virtual presentations. The button to access the Zoom meeting appears just before the time block starts.
Chairpersons: Paraskevas Tsangaratos, Nina Merkle, Raffaele Albano
10:45–10:50
10:50–11:00 | EGU26-3096 | ECS | On-site presentation
Govinda Anantha Padmanabha and Konstantinos Karapiperis

Geophysical mass movements such as landslides and snow avalanches represent major natural hazards, particularly in mountainous regions like the European Alps. Their dynamics arise from heterogeneous material compositions interacting with complex topography, rendering reliable prediction extremely challenging. Although remote sensing techniques provide detailed measurements of terrain shape and ground motion, these observations alone cannot predict mass movements. High-fidelity numerical approaches, such as the Material Point Method (MPM), offer valuable mechanistic insight but are too computationally demanding for real-time or large-scale forecasting. This work introduces a three-dimensional geometric foundation model designed to efficiently learn and predict the spatiotemporal evolution of mass movement events. The framework is trained on high-fidelity MPM simulations validated against high-resolution remote sensing data to construct a dataset spanning diverse topographies and flow behaviours. Leveraging recent advances in operator-based neural networks and Transformer architectures, the model learns geometric and physical attributes directly on three-dimensional manifolds, enabling resolution-invariant prediction and generalization across heterogeneous terrains. The resulting surrogate model rapidly predicts the full evolution of topography, capturing key features such as flow trajectories, runout, and deposition patterns while significantly reducing computational cost compared to conventional high-fidelity numerical solvers. This efficiency allows extensive scenario exploration and broad spatial coverage, making the approach suitable for operational hazard-assessment pipelines and future digital-twin environments. In summary, the proposed framework offers a fast and robust tool for modeling geophysical mass movements, with the potential to significantly enhance large-scale hazard analysis and support next-generation monitoring systems.

How to cite: Anantha Padmanabha, G. and Karapiperis, K.: Data-Driven Forecasting of Geophysical Mass Movements, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3096, https://doi.org/10.5194/egusphere-egu26-3096, 2026.

11:00–11:10 | EGU26-3350 | On-site presentation
Johanna Wahbe, Jascha Muller, Rania Sahnoun, Kim Feuerbacher, Lukas Liesenhoff, Martin Langer, and Julia Gottfriedsen

Short-term fire hazard forecasting is a critical component of wildfire preparedness, yet widely used operational indices such as the Fire Weather Index (FWI) primarily represent meteorological fire danger and do not explicitly model ignition likelihood. We present a two-step, data-driven fire hazard modelling approach that combines machine learning with expert-based refinement. In the first step, a machine learning model learns the relationship between environmental fire drivers and observed wildfire ignitions to generate probabilistic fire hazard maps at a coarse spatial scale. In the second step, these base-level hazard maps are upsampled to 1 km resolution using an expert system that incorporates high-resolution susceptibility information, enabling operationally relevant fire hazard forecasts.

The machine learning component is trained on OroraTech’s proprietary six-year global active wildfire dataset, which provides a best-in-class trade-off between spatial resolution and revisit frequency. This dataset enables robust learning of ignition-relevant patterns across diverse fire regimes. Input features combine environmental variables derived from climate reanalysis, remote sensing products such as digital elevation models, and large-scale spatio-temporal dynamics capturing seasonal and regional fire behaviour. The model integrates spatial and temporal information to produce fire hazard estimates at 0.1° spatial resolution.

To support operational use, the hazard estimates are refined to 1 km spatial resolution using an expert system that applies susceptibility masks derived from aggregated vegetation indicators, infrastructure information, and additional static and dynamic constraints. This allows the generation of high-resolution fire hazard maps with lead times of up to one week.
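As an editorial illustration of the two-step refinement just described (not OroraTech's actual system), the coarse-to-fine step can be sketched with invented grids and an assumed multiplicative susceptibility mask:

```python
# Minimal sketch of the two-step refinement: a coarse hazard field is
# upsampled and modulated by a high-resolution susceptibility mask.
# All grids and values below are invented for illustration.
import numpy as np

coarse_hazard = np.array([[0.2, 0.6],
                          [0.1, 0.9]])              # coarse ML hazard output

upsampled = np.kron(coarse_hazard, np.ones((2, 2)))  # naive block upsampling

susceptibility = np.array([[1.0, 0.5, 1.0, 1.0],
                           [0.0, 0.5, 1.0, 1.0],
                           [1.0, 1.0, 0.0, 0.0],
                           [1.0, 1.0, 1.0, 0.2]])    # expert mask (0 = non-burnable)

refined = upsampled * susceptibility                 # high-resolution hazard
print(refined)
```

A real refinement step would operate on geo-referenced rasters and richer expert rules; the multiplicative mask is only one plausible choice.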

Across the study regions, the proposed model correctly predicts up to 30 times more fire ignitions than the Fire Weather Index under comparable conditions. The model is currently being rolled out for selected users within OroraTech’s wildfire solution platform to support short-term preparedness and operational planning.

How to cite: Wahbe, J., Muller, J., Sahnoun, R., Feuerbacher, K., Liesenhoff, L., Langer, M., and Gottfriedsen, J.: Forecasting Wildfire Ignitions: A Two-Step Machine Learning and Expert-Based Fire Hazard Model, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3350, https://doi.org/10.5194/egusphere-egu26-3350, 2026.

11:10–11:20 | EGU26-3576 | ECS | On-site presentation
Ali Pourzangbar, Preethi Lakshmipathy, Siao Sun, and Mário J. Franca

Floods are among the most disruptive hazards in Mediterranean regions, and their severity is likely to intensify under climate change. Conventional flood assessments often emphasize either inundation extent or occurrence, overlooking how spatial footprint, duration, and intensity interact to shape impacts. To bridge this gap, this study integrates these three dimensions, derived from satellite, reanalysis, hydrological, and environmental datasets, into a unified severity metric.

The modeling framework employs a machine learning approach, trained on a dataset comprising more than 7,500 Flood Severity Index (FSI) observations, derived from 14 documented flood events (2015–2024) detected using Sentinel-1 SAR imagery across 542 municipalities in the Valencian Community, Spain. The dataset was constructed by pairing each flood event with each municipality, so that each observation represents one municipality during one specific flood event. The output variable is the FSI, while input predictors were drawn from topographical, environmental, and hydrological data sources and were harmonized to municipal boundaries. Following preprocessing and multicollinearity screening, the refined dataset was normalized and partitioned into 70% for training and 30% for independent testing. Model performance was evaluated using cross-validation and standard error metrics.

A stacked ensemble combining Gradient Boosting and a multilayer perceptron achieved the best performance, outperforming Random Forest, SVR, and standalone neural networks. The model effectively captured nonlinear relationships, spatial heterogeneity, and the underlying structure of the observed data. It accurately predicted municipal FSI values, including statistically identified clusters of municipalities with exceptionally high FSI compared to others. Model explainability analyses showed that topography (elevation and slope), land use, and vegetation (NDVI) are the primary drivers of flood severity, with vegetated and permeable landscapes mitigating impacts by promoting water infiltration.
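A minimal sketch of the stacked-ensemble idea described above (Gradient Boosting plus a multilayer perceptron combined by a simple meta-learner, with a 70/30 split), using synthetic data rather than the study's FSI observations; all hyperparameters are assumptions:

```python
# Sketch of a Gradient Boosting + MLP stacked ensemble on synthetic
# regression data; this is not the authors' trained model.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the municipality-event FSI observations
X, y = make_regression(n_samples=1000, n_features=8, noise=0.3, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.7, random_state=0)  # 70% training / 30% testing

stack = StackingRegressor(
    estimators=[
        ("gbr", GradientBoostingRegressor(random_state=0)),
        ("mlp", make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(64, 32),
                                           max_iter=2000, random_state=0))),
    ],
    final_estimator=RidgeCV(),  # meta-learner combining base predictions
)
stack.fit(X_train, y_train)
r2 = stack.score(X_test, y_test)
print(f"held-out R^2: {r2:.3f}")
```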

The calibrated model was applied to estimate future flood severity under the RCP2.6 and RCP8.5 scenarios. The projections reveal that most municipalities are expected to maintain their current severity class, while a smaller but notable subset is projected to experience an upward shift. Only a limited fraction shows indications of reduced severity. Overall, the results indicate a regional shift toward higher severity classes and highlight locations where climate-driven pressures on flood risk are likely to increase. These results demonstrate that the machine-learning framework developed herein provides a decision-support tool for municipal authorities, enabling prioritization of investments in flood mitigation and climate adaptation.

How to cite: Pourzangbar, A., Lakshmipathy, P., Sun, S., and J. Franca, M.: An explainable machine-learning framework for mapping municipal flood severity: a case study in the Valencian community, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3576, https://doi.org/10.5194/egusphere-egu26-3576, 2026.

11:20–11:30 | EGU26-4014 | On-site presentation
Juan Ardila, Annett Anders, Kat Jensen, Henri Riihimäki, and Dafni Sidiropoulou-Velidou

Floods are among the most widespread and destructive natural hazards globally. They cause loss of life, damage buildings and critical infrastructure, disrupt transportation and supply chains, and impact agricultural productivity, with cascading consequences for food and water security. Near-real-time flood information is essential for emergency response and coordination of relief operations, while retrospective flood observations are needed by governments, humanitarian organizations, and the insurance sector to evaluate event severity, quantify damages, and improve preparedness and risk reduction. Despite major progress in flood remote sensing, a persistent limitation is that satellite-based flood products often provide an opportunistic view of inundation that does not coincide with maximum impacts. Peak flood conditions are transient, spatially heterogeneous in timing, and frequently asynchronous across a single event, making them unlikely to be observed in any single image acquisition.

Earth-observing satellite missions provide broad spatial coverage, but publicly available systems typically undersample flood evolution due to revisit constraints and the availability of usable observations. Optical missions can be severely constrained by clouds and precipitation during storms, while single-platform SAR missions, though all-weather, can still have multi-day revisit times depending on acquisition planning and orbit geometry. In practice, time gaps of days to weeks can occur between observations at a given location, limiting the ability to characterize peak inundation extent and the duration of near-peak conditions.

Here we present a data-driven assessment of observational requirements and remaining gaps for capturing near-peak flood conditions, and we evaluate how different satellite constellations perform against these requirements. The analysis is based on global flood map products generated by ICEYE during 2023–2025, complemented by a rich archive of multi-sensor satellite imagery, social media observations, river gauge records, and field measurements for event validation and timing constraints. We discretize flood evolution into a hexagonal (H3) grid and intersect time-stamped extents with grid cells to derive cell-scale inundation time series. Peak timing is constrained using hydrographs from multiple gauges per event. For each H3 cell, we estimate (i) maximum inundation extent, (ii) the timing of peak inundation, and (iii) the duration of near-peak conditions, yielding a spatially explicit “observability window” for peak impacts.
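The cell-scale bookkeeping described above can be illustrated with a toy example (plain Python, with short strings standing in for H3 indices and invented inundation fractions; the 80%-of-maximum near-peak threshold is an assumption, not ICEYE's definition):

```python
# Toy illustration of deriving per-cell inundation time series and
# peak metrics from time-stamped flood extents. Cell ids and fractions
# are invented; this is not the authors' pipeline.
from collections import defaultdict

# acquisition hour -> {cell_id: inundated fraction of cell}
acquisitions = {
    0:  {"8a2a": 0.1, "8a2b": 0.0},
    12: {"8a2a": 0.6, "8a2b": 0.4},
    24: {"8a2a": 0.9, "8a2b": 0.7},  # peak for both cells
    48: {"8a2a": 0.8, "8a2b": 0.2},
    72: {"8a2a": 0.3, "8a2b": 0.1},
}

series = defaultdict(list)           # cell -> [(time, fraction), ...]
for t in sorted(acquisitions):
    for cell, frac in acquisitions[t].items():
        series[cell].append((t, frac))

def peak_metrics(ts, near_peak=0.8):
    """Max extent, peak timing, and duration of near-peak conditions
    (here: fraction >= 80% of the cell's maximum)."""
    peak_t, peak_f = max(ts, key=lambda p: p[1])
    near = [t for t, f in ts if f >= near_peak * peak_f]
    return peak_f, peak_t, max(near) - min(near)  # crude observability window

for cell, ts in series.items():
    print(cell, peak_metrics(ts))
```

In the real analysis the time series would come from intersecting time-stamped extents with H3 cells and would be constrained by gauge hydrographs; the sketch only shows the per-cell metric extraction.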

Using these empirically derived near-peak windows, we quantify the revisit cadence required to observe peak conditions with high likelihood and compare the resulting requirements with observation opportunities from public missions (Sentinel-1/2 and Landsat-8/9). We then assess the extent to which a large constellation of small imaging SAR satellites, exemplified by ICEYE, can close the remaining gaps in near-peak observability across diverse flood regimes, landscapes, and event dynamics. The resulting framework provides a transferable approach for evaluating current and planned satellite constellations for flood response and risk assessment, with direct implications for acquisition strategies and the design of future observing systems.


How to cite: Ardila, J., Anders, A., Jensen, K., Riihimäki, H., and Sidiropoulou-Velidou, D.: Capturing Peak Flood Conditions from Space: Empirical Revisit Requirements and Observational Gaps, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-4014, https://doi.org/10.5194/egusphere-egu26-4014, 2026.

11:30–11:40 | EGU26-7216 | ECS | Virtual presentation
Hongruixuan Chen, Jian Song, Junshi Xia, and Naoto Yokoya

Rapid, reliable building damage mapping (BDM) is essential for effective humanitarian response and disaster management. Although Earth Observation (EO) data availability and AI model design have advanced rapidly, systematic and standardized comparisons of methods for multimodal BDM remain scarce. As new architectures emerge at a fast pace, understanding their relative strengths and limitations on common benchmarks is crucial for operational deployment.

In this work, we leverage the BRIGHT dataset, a recent large-scale benchmark for multimodal BDM, to conduct a comprehensive evaluation of representative strategies spanning traditional machine learning, Convolutional Neural Networks (CNNs), Transformers, Mamba, and emerging foundation models. Our benchmarking shows that, despite their scale, general-purpose foundation models are still outperformed by specialized architectures in complex multimodal BDM settings. In particular, ChangeMamba, a state-of-the-art Mamba-based model, achieves the strongest overall performance on BRIGHT. 

To further assess robustness and transferability beyond the benchmark, we perform a cross-event transfer evaluation on a recent wildfire in Oita, Japan. The results demonstrate ChangeMamba’s superior generalization in real-world conditions compared with other baselines. Finally, our analysis reveals a key sensitivity in multimodal fusion: the choice of pre-event optical imagery substantially affects performance when transferring to unseen events, highlighting an important practical consideration for operational damage mapping.

How to cite: Chen, H., Song, J., Xia, J., and Yokoya, N.: ChangeMamba Meets BRIGHT: Benchmarking Multimodal Damage Mapping and Cross-Event Transfer to a Japanese Wildfire, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7216, https://doi.org/10.5194/egusphere-egu26-7216, 2026.

11:40–11:50 | EGU26-19597 | ECS | On-site presentation
Philipp Barthelme, Corey Scher, Myscon Truong, He Yin, and Jamon Van Den Hoek

Reliable and timely damage assessments are critical in humanitarian and conflict settings. Event-based analysis of very high-resolution (VHR) optical imagery has been the predominant remote-sensing-based method for this, but remains constrained by limited temporal revisit, cloud cover, cost, and restricted spatial scalability. Interferometric Synthetic Aperture Radar (InSAR) coherence derived from Sentinel-1 offers a complementary, medium-resolution approach by enabling frequent, weather-independent observations over large areas, making it particularly suitable for near-real-time and retrospective damage monitoring. However, the potential of InSAR coherence time series remains underexplored, particularly in how it can be complemented by other sensors (e.g., optical imagery) and how it is affected by different built-up environment characteristics.
 
This study investigates large-scale conflict-related damage mapping across Gaza during 2023–2024 using Sentinel-1 InSAR coherence time series. We also integrate multiple data sources, including Sentinel-2 optical imagery, gridded weather re-analysis data, and built-up environment characteristics. Moreover, we generate embeddings of the Sentinel imagery using geospatial foundation models which we use as additional model inputs. Damage reference data are derived from UNOSAT damage assessments, which report damage at irregular intervals (~2-3 months) based on visual assessments of VHR optical imagery. To exploit the higher temporal frequency of Sentinel-1 acquisitions while accounting for the coarser temporal resolution of the reference data, we adopt a weakly supervised multiple instance learning framework and compare the predictive performance of our model across various combinations of input modalities.
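For readers unfamiliar with the weakly supervised setup, the core multiple-instance idea can be sketched as follows (max-pooling aggregation is one common choice and is assumed here; the scores are invented, not model outputs):

```python
# Minimal numpy sketch of multiple instance learning for interval-based
# labels: each reporting interval is a "bag" of per-acquisition damage
# scores, and the bag-level prediction is the maximum instance score,
# so a bag is positive if any acquisition in it shows damage.
import numpy as np

def bag_prediction(instance_scores):
    """Max-pooling MIL aggregation over one interval's acquisitions."""
    return float(np.max(instance_scores))

# invented per-acquisition damage probabilities within one ~2-month interval
interval_scores = np.array([0.05, 0.10, 0.85, 0.90, 0.80])
print(bag_prediction(interval_scores))  # compared against the interval label
```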
 
The analysis aims to quantify the relative importance of different input modalities for damage detection, assess the added value of self-supervised representation learning, and identify inherent limitations related to site-specific, sensor-specific and damage-specific factors in Gaza. We further evaluate the utility of interval-based learning approaches for conflict damage monitoring, where precise damage timing is often unavailable.
 
By combining dense SAR time series, multimodal data fusion, and interval-aware learning, this work contributes a novel methodological perspective on large-scale damage assessment. The findings inform both the potential and limitations of InSAR-based damage mapping in humanitarian contexts, supporting future operational monitoring and post-event re-analysis workflows.

How to cite: Barthelme, P., Scher, C., Truong, M., Yin, H., and Van Den Hoek, J.: InSAR+: Exploring the utility of complementary data sources for mapping conflict damage using InSAR coherence time series, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-19597, https://doi.org/10.5194/egusphere-egu26-19597, 2026.

11:50–12:00 | EGU26-21798 | Highlight | On-site presentation
Ekbal Hussain, Rahul Chahel, Sophie Dorward, and Alessandro Novellino

In the first few hours of responding to natural hazards, it is crucial to understand the size of the hazard event and the scale of the potential humanitarian emergency. This is important for the timely activation of appropriate aid and support mechanisms. For earthquakes, the most reliable source of immediate scientific information is the United States Geological Survey (USGS). Through its PAGER system, the USGS provides rough estimates of the potential fatalities and economic impact of major earthquake events (Jaiswal et al., 2010). However, these estimates lack spatial granularity as well as age and gender disaggregation. We know that children, the elderly, and women are more prone to negative impacts in a disaster (e.g. Neumayer & Plümper 2007). Therefore, it is important to have a sense of these numbers as soon as possible to understand the potential scale of the emergency.

Additionally, a map of the potentially affected areas is important to understand the spatial distribution of the potential humanitarian need (e.g. isolated communities, road connectivity etc.). For example, following the 2015 Nepal earthquake the immediate acute needs of remote communities of western Nepal were initially overlooked. These communities faced severe isolation due to destroyed infrastructure, making aid delivery and access to basic supplies like food, water, and shelter challenging (The Asia Foundation, 2015). Mapping the potential exposed populations and their spatial distribution rapidly can help target appropriate emergency interventions sooner.

Here we present a tool developed by the British Geological Survey that automatically extracts earthquake shaking information from the USGS in real time and derives statistics on the populations, disaggregated by age and gender, exposed to given levels of shaking. We test how well rapid exposure estimates, produced within 3 hours of an event, capture the final losses in major earthquakes, using the 2023 Türkiye earthquakes as a case study.
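A hedged sketch of this kind of exposure tallying, with a synthetic intensity grid, population layer, and assumed demographic fractions (not USGS or BGS data):

```python
# Illustrative exposure calculation: overlay a ShakeMap-style intensity
# grid with a co-registered population grid and tally people per
# intensity band, split by assumed demographic shares. All numbers
# below are synthetic.
import numpy as np

mmi = np.array([[5.2, 6.1], [7.4, 8.0]])        # shaking intensity grid
pop = np.array([[1000, 2000], [1500, 500]])     # co-registered population
frac = {"children": 0.25, "women": 0.40, "elderly": 0.10}  # assumed shares

bands = [5, 6, 7, 8, 9]                         # MMI exposure bands
for lo, hi in zip(bands[:-1], bands[1:]):
    exposed = pop[(mmi >= lo) & (mmi < hi)].sum()
    detail = {g: round(exposed * f) for g, f in frac.items()}
    print(f"MMI {lo}-{hi}: {exposed} people exposed, {detail}")
```

A production tool would use gridded demographic data rather than uniform fractions, but the band-wise masking is the core operation.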

We also demonstrate how we can estimate exposures and potentially compounding impacts of multiple hazards on populations following major earthquakes.

How to cite: Hussain, E., Chahel, R., Dorward, S., and Novellino, A.: Rapid age and gender disaggregated exposure assessment for earthquake emergencies, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21798, https://doi.org/10.5194/egusphere-egu26-21798, 2026.

12:00–12:10 | EGU26-9224 | On-site presentation
Karolina Korzeniowska, Chiara Di Ciollo, Véronique Amans, Michael Vollmar, and Markus Probeck

Implemented by the European Space Agency (ESA) on behalf of the European Commission, the Rapid Response Desk (RRD) service provides harmonized, reliable, and efficient 24/7 access to a variety of commercial satellite data at unprecedented speed, serving the demanding information and timing requirements of the Copernicus Services and other authorized EU entities and research projects. This presentation will showcase the RRD system, which seamlessly connects to the latest API-based ordering interfaces of 10 Copernicus Contributing Mission Entities (CCMEs), providing access to 19 active satellite missions and constellations, with further on-boardings planned as new missions become available. Among the main users, the Copernicus Emergency Management Service (CEMS) Rapid Mapping utilizes the RRD infrastructure and services to obtain up-to-date satellite data acquisitions for its worldwide disaster response activities over areas affected by natural and man-made hazards, such as floods, wildfires, and earthquakes. The RRD enables users to quickly access the extensive archives of already acquired very-high-resolution optical, radar, and atmospheric-composition data, as well as to request tailored new acquisitions anywhere in the world at very high spatial, spectral, and temporal resolution, in cooperation with key European, US-, and Canada-based commercial satellite data providers. In this way, the RRD provides an essential complement to the systematic data offer of the Copernicus high-resolution Sentinel missions. Making use of these satellite images, CEMS performs near-real-time event monitoring and disaster impact assessments, generating accurate time-critical maps and value-added products that are critical to emergency response coordination on the ground.
The RRD offers various flexible sensing scenarios for new imagery acquisitions: from real-time tasking of instant single-image capture, to multiple contiguous acquisitions covering large areas, to systematic area monitoring over long time periods. Likewise, RRD users can quickly access the full range of satellite data stored in the CCMEs' archives, offering valuable references for pre-event situation validation and post-event damage detection and impact assessment. All ordered satellite images are delivered in a standardized data package format together with harmonized metadata, substantially facilitating the integrated use of multi-sensor, multi-platform data. Users can retrieve the data from a single RRD access point, overcoming the diversity of the individual CCMEs' ordering systems and saving time in time-critical disaster analyses. In addition, an archive of all non-sensitive satellite imagery ordered by RRD users is maintained, allowing cost-efficient data re-use by eligible Copernicus users. In summary, the RRD constitutes a major step forward in near-real-time access to satellite remote sensing data for worldwide large-scale disaster monitoring and response operations.

How to cite: Korzeniowska, K., Di Ciollo, C., Amans, V., Vollmar, M., and Probeck, M.: Rapid Response Desk – Near-real time access to multi-mission satellite data for emergency response, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-9224, https://doi.org/10.5194/egusphere-egu26-9224, 2026.

12:10–12:20 | EGU26-9888 | ECS | On-site presentation
Sima Shakiba, Reza Taherdangkoo, Jörn Wichert, and Christoph Butscher

Wildfire hazard assessment increasingly relies on machine learning models trained on large-scale remote sensing and geospatial datasets. However, the limited transparency and uncertainty awareness of many data-driven approaches hinder their operational use and trustworthiness for decision-making. In this study, we propose an interpretable and uncertainty-aware wildfire hazard assessment framework that integrates fuzzy logic preprocessing, histogram-based gradient boosting (HGB), and explainable AI techniques.
Multiple environmental, climatic, topographic, vegetation, geological, and anthropogenic variables derived from remote sensing and GIS sources are transformed into continuous fuzzy membership functions to explicitly represent gradual transitions and inherent uncertainties in wildfire-related drivers. The HGB model is employed to efficiently handle high-dimensional raster data and to produce probabilistic wildfire susceptibility estimates. Model interpretability is ensured using SHAP, which quantifies the contribution and direction of each predictor to wildfire probability, enabling transparent interpretation of model behaviour. In addition, predictive uncertainty is quantified through an ensemble approach, highlighting spatial patterns of confidence and disagreement among model predictions.
Results demonstrate strong discriminative performance while revealing physically meaningful relationships, with precipitation acting as the dominant suppressor of wildfire probability, and fuel availability, temperature, and wind emerging as key amplifying factors. The proposed framework enhances model transparency, interpretability, and reliability, supporting trustworthy wildfire hazard assessment and decision-making for risk mitigation and resource allocation.

How to cite: Shakiba, S., Taherdangkoo, R., Wichert, J., and Butscher, C.: Uncertainty-Aware Wildfire Hazard Assessment Using Machine Learning, Fuzzy Logic, and Remote Sensing Data, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-9888, https://doi.org/10.5194/egusphere-egu26-9888, 2026.

12:20–12:30 | EGU26-11485 | ECS | On-site presentation
Shao-Ming Lu and Szu-Yun Lin

Global disasters are becoming increasingly frequent, leading to persistent and widespread impacts on human safety, critical infrastructure, and economic activities. Therefore, emergency response and recovery decisions urgently require rapid, large-area, and reliable situational awareness. Owing to its wide coverage and timely availability, satellite-based remote sensing has become an important data source for post-disaster assessment. However, post-event observations are often missing or degraded due to harsh on-site conditions, particularly weather- and cloud-related interference, which introduces substantial uncertainty in damage interpretation. In addition, approaches that rely solely on a single data source or manual interpretation are constrained by limited timeliness and scalability, making it difficult to provide consistent and stable damage information when it is most needed. Meanwhile, damage is not only reflected by visible appearance changes. Visual evidence alone may be insufficient to capture building-level vulnerability, construction characteristics, and damage mechanisms that are not directly observable from imagery. In practice, building-level metadata are often scarce, heterogeneous, and unevenly available across regions and events. As a result, such information is rarely incorporated into existing damage assessment pipelines, which can limit the interpretability of model outputs and reduce confidence in their use for decision support.

This study proposes a Transformer-based multimodal framework for building damage assessment that integrates post-disaster optical imagery, SAR imagery, and building metadata to generate timely and explainable damage information. To strengthen operational applicability, the proposed approach is further evaluated on real-world cases from major disasters worldwide. Experimental results indicate that tokenizing heterogeneous multimodal inputs into a unified sequence representation substantially enhances architectural flexibility for cross-modality integration. Compared with conventional approaches that typically cascade or couple multiple modality-specific models to handle different data sources, our framework performs multi-source fusion within a consistent representation space and enables a simpler end-to-end design. Through multi-source data fusion and explainable analysis, the proposed framework improves the transparency and traceability of post-disaster building damage assessment, provides a more comprehensive characterization of damage conditions, and supports more robust, evidence-based response and recovery decision-making.

How to cite: Lu, S.-M. and Lin, S.-Y.: Enhancing Post-Disaster Building Damage Interpretation with Multisource Data Fusion, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-11485, https://doi.org/10.5194/egusphere-egu26-11485, 2026.

Posters on site: Tue, 5 May, 14:00–15:45 | Hall X3

The posters scheduled for on-site presentation are only visible in the poster hall in Vienna. If authors uploaded their presentation files, these files are linked from the abstracts below.
Display time: Tue, 5 May, 14:00–18:00
Chairpersons: Yao Sun, Wei Chen, Ioanna Ilia
X3.100 | EGU26-7601
Christian Geiß, Elias Andersch, Manuel Huber, and Hannes Taubenböck

We investigate the spatial and temporal dynamics of destruction across the Gaza Strip during the Middle East conflict that escalated sharply after the Hamas incursion into southern Israel on 7 October 2023 and subsequent Israeli airstrikes. Leveraging Synthetic Aperture Radar time series compiled from Sentinel-1 imagery, we derive data-driven assessments of conflict-related damage in an exceptionally hostile and data-scarce environment. Our primary objectives are to map the distribution of destroyed structures and reconstruct the timeline of damage progression. We employ coherence loss analysis to identify structural damage based on satellite-derived temporal signatures. The workflow encompasses systematic data preprocessing, spatial analysis, and result validation against UNOSAT datasets to ensure reliability.
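The coherence-loss principle can be illustrated in a few lines (toy values; the 0.3 drop threshold is an assumption, not the authors' calibration):

```python
# Toy sketch of coherence-loss damage flagging: pixels whose
# interferometric coherence drops sharply between pre-event pairs and
# the co-event pair are marked as potentially damaged. Values invented.
import numpy as np

pre_coherence = np.array([0.85, 0.80, 0.78, 0.90])   # stable urban pixels
co_coherence  = np.array([0.82, 0.30, 0.25, 0.88])   # after the event

loss = pre_coherence - co_coherence
damaged = loss > 0.3          # assumed damage threshold; flags the collapses
print(damaged)
```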

Pre-conflict analysis indicated that more than half of all structures were undamaged or only lightly affected, with 31% showing major damage. By late 2023, this distribution had shifted markedly: the proportion of undamaged or lightly affected buildings dropped to 22%, while severely damaged structures rose to 32% and completely destroyed buildings accounted for 10%. The damage further intensified through mid-2025, with severely damaged and destroyed buildings collectively representing over 80% of all assessed structures—highlighting a sustained and accelerating pattern of devastation.

The analysis reveals that the entire Gaza Strip experienced extensive structural loss, with densely populated urban areas emerging as persistent damage hotspots. By May 2025, all five districts displayed comparable destruction levels, though with distinct temporal trajectories. The near-total absence of intact or lightly damaged structures in multiple urban cores underscores the systematic and prolonged nature of bombardments, reflecting a transformation of the urban fabric unprecedented in recent conflict-driven damage assessments.

How to cite: Geiß, C., Andersch, E., Huber, M., and Taubenböck, H.: Sentinel-1 coherence loss analysis for damage assessment in conflict areas: Evidence from the Gaza strip following October 7th 2023, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7601, https://doi.org/10.5194/egusphere-egu26-7601, 2026.

X3.101
|
EGU26-9181
|
ECS
Chun-Jia Huang and Yu-Chih Cho

Rapid situational awareness is essential for seismic resilience in tectonically active regions such as Taiwan. For critical maritime infrastructure, traditional post-earthquake reconnaissance is often constrained by limited accessibility and safety concerns, leading to delays in disaster response. This study presents an automated disaster monitoring framework that integrates UAV remote sensing and Geospatial Artificial Intelligence (GeoAI) to quantify seismic impacts on wharf facilities. High-resolution aerial imagery and multi-temporal geospatial data are combined to establish a processing pipeline for identifying disaster footprints, with particular attention to the spatial distribution of structural fissures and surface deformations. A YOLOv11-based deep learning model is employed for automated damage detection and segmentation. To enable quantitative assessment, morphological skeletonization and three-dimensional spatial analysis are applied to derive geometric characteristics of damage features. The extracted information is further used to compute the Pavement Condition Index (PCI) as an indicator of facility serviceability. Experimental results show that the proposed framework achieves mAP and Recall values exceeding 90%, with a spatial localization accuracy of ±2 cm. The results demonstrate the capability of the proposed approach to reduce the time required for post-earthquake damage assessment and to support disaster monitoring and infrastructure management in seismically active maritime environments.

How to cite: Huang, C.-J. and Cho, Y.-C.: Integration of UAV remote sensing and GeoAI for rapid post-earthquake disaster monitoring, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-9181, https://doi.org/10.5194/egusphere-egu26-9181, 2026.

X3.102
|
EGU26-13368
|
ECS
Nikolaos Madonis, Athanassios Ganas, and Paraskevas Tsangaratos

Landslides in the Western Corinth Rift reflect a mix of long-term “set-up” conditions, such as terrain, rock type, and fault-related structure, and short-term triggers linked to transient deformation and changing rainfall patterns. To represent these interacting processes in a clear and interpretable way, we propose a two-phase, multi-scale landslide susceptibility workflow based on explainable XGBoost. In Phase 1 (watershed scale) we develop a baseline susceptibility model using a standardized set of conditioning factors. These include (i) terrain and geomorphometric variables (elevation, slope, aspect, profile curvature, plan curvature, and topographic wetness index (TWI)) and (ii) lithological and structural controls (lithology and hydrolithology classes, distance from the river network, and fault-influence proxies such as distance to faults). The model is trained on historical landslide inventories, and interpretability is built in through explainable AI tools such as SHAP, allowing us to quantify both global and site-specific contributions of conditioning factors, including key interactions. The result is a set of susceptibility maps paired with readable diagnostics that explain why certain areas are critical. Phase 2 (local refinement and activity confirmation) focuses on the Krini–Gkrekas–Pititsa sector, where observations are denser and more reliable. Here, we evaluate whether susceptibility hotspots from Phase 1 align with evidence of ongoing or emerging instability. We add dynamic indicators and independent validation using European Ground Motion Service (EGMS) InSAR ground motion, historical SBAS InSAR data, GNSS trend metrics, and antecedent precipitation indices from station data. The goal is not just to refine local interpretation, but to test whether predicted patterns make physical sense, by checking consistency between (a) areas predisposed by lithology and structure and (b) present-day deformation signals and rainfall forcing.
The workflow aims to produce decision-ready, interpretable outputs at two complementary scales: (1) watershed-scale susceptibility that highlights where failures are more likely based on relatively stable controls, and (2) a localized assessment that strengthens confidence where susceptibility coincides with measured deformation and hydrometeorological conditions. This improves trust and usability of AI-assisted landslide hazard assessment in tectonically active landscapes.

Keywords

Landslide susceptibility; XGBoost; explainable AI; SHAP; multi-scale modeling; watershed analysis; lithology; active faults; EGMS; InSAR ground motion; GNSS; antecedent precipitation index; Western Corinth Rift.

How to cite: Madonis, N., Ganas, A., and Tsangaratos, P.: Multi-Scale, Explainable XGBoost Landslide Susceptibility Mapping: From Watershed-Scale Controls to EGMS–GNSS–Rainfall Validation of Active Instabilities in the Western Corinth Rift, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-13368, https://doi.org/10.5194/egusphere-egu26-13368, 2026.

X3.103
|
EGU26-9970
|
ECS
Beatriz Ferreira, Camila Viana, Rebeca Coelho, Carlos Henrique Grohmann, Alexander Brenning, and Florian Strohmaier

Understanding the influence of topographic parameters on landslide susceptibility is crucial for risk management in regions where landslides are recurrent and potentially catastrophic. Although landslides are often triggered by short-term external forcings such as intense rainfall, the expansion of human settlements onto steep slopes greatly amplifies their impacts, making prediction and mitigation increasingly urgent and challenging.

The Serra do Mar is a mountain chain extending over 1,500 km along the southeastern coast of Brazil, separating the inland plateau from the coastal plain and characterized by rugged relief strongly controlled by geological structures, including faults and steep escarpments. High seasonal rainfall combined with intense weathering makes this region naturally prone to landslides, as dramatically illustrated in February 2023, when extreme rainfall triggered widespread slope failures in the municipality of São Sebastião (São Paulo State), causing severe damage and loss of life.

Despite the importance of such events, traditional landslide susceptibility mapping approaches, largely based on field surveys and geotechnical analyses, are costly and time-consuming. Remote sensing combined with explainable machine learning offers a powerful alternative for large-scale spatial hazard assessment.

This study investigates how different Digital Elevation Model (DEM) resolutions affect predictive landslide susceptibility modeling using machine learning and explainable artificial intelligence (XAI) techniques. A multiscale set of topographic predictors was derived from airborne lidar and Copernicus DEMs. These predictors were integrated with a landslide inventory from the February 2023 event (1,070 mapped scars), which served as the reference dataset for training and spatially validating Random Forest susceptibility models, enabling a direct comparison of how different DEM resolutions reproduce observed landslide patterns. Model interpretability was then assessed using SHAP (Shapley Additive Explanations) to quantify scale effects and the relative contribution of topographic controls on landslide susceptibility.

How to cite: Ferreira, B., Viana, C., Coelho, R., Grohmann, C. H., Brenning, A., and Strohmaier, F.: Influence of topographic parameters on landslide susceptibility using machine learning: A case study in the municipality of São Sebastião, São Paulo, Brazil., EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-9970, https://doi.org/10.5194/egusphere-egu26-9970, 2026.

X3.104
|
EGU26-13408
|
ECS
Carla Mae Arellano, Daniel Hölbling, Elena Nafieva, Jachin Jonathan van Ek, Stéphane Henriod, Yann Rebois, Albert Schwingshandl, Sarah Forcieri, Raimund Heidrich, Isabella Hörbe, and Lorena Abad

Machine learning approaches are increasingly applied to landslide susceptibility mapping. Despite their growing use, limited insight into model behavior and variable influence remains a major challenge, particularly in data-scarce settings where inventories are incomplete and input data are heterogeneous. 

This study explores how explainability methods can be used to analyze and interpret machine learning-based landslide susceptibility models. First, a landslide susceptibility dataset is constructed by combining an available landslide inventory with commonly used environmental conditioning factors. These include topographic data (e.g. elevation, slope, curvature, flow accumulation), proximity variables (e.g. distance to rivers and roads), and land cover or vegetation proxies derived from Earth Observation (EO) data, such as the Normalized Difference Vegetation Index (NDVI). Our focus is on understanding how different input variables influence model predictions and how these influences vary spatially.  

For this, explainability techniques are applied to assess variable importance and spatial patterns in model responses. Feature attribution methods such as SHapley Additive exPlanations (SHAP) are used to quantify the contribution of individual conditioning factors at both the global model level and locally in space. The results are examined for consistency with established geomorphological understanding, and sensitivities related to data limitations, inventory characteristics, and sampling strategies are identified. 
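For readers new to feature attribution, the quantity SHAP approximates can be computed exactly for tiny models by enumerating feature coalitions. The sketch below uses an invented three-factor toy model (not the study's trained classifier) to show the brute-force Shapley computation that libraries such as shap approximate efficiently for tree ensembles:

```python
from itertools import combinations
from math import factorial

# Toy susceptibility model over three conditioning factors
# (hypothetical functional form, for illustration only).
def model(slope, ndvi, dist_river):
    return 0.6 * slope - 0.3 * ndvi + 0.2 * slope * (1 - dist_river)

features = {"slope": 0.8, "ndvi": 0.2, "dist_river": 0.1}   # one pixel
baseline = {"slope": 0.4, "ndvi": 0.5, "dist_river": 0.5}   # "average" pixel

def value(subset):
    # Evaluate the model with features in `subset` at the pixel's values
    # and all remaining features held at the baseline.
    args = {k: (features[k] if k in subset else baseline[k]) for k in features}
    return model(**args)

def shapley(feature):
    # Exact Shapley value: weighted average marginal contribution of
    # `feature` over all coalitions of the other features.
    others = [f for f in features if f != feature]
    n = len(features)
    total = 0.0
    for r in range(len(others) + 1):
        for subset in combinations(others, r):
            w = factorial(len(subset)) * factorial(n - len(subset) - 1) / factorial(n)
            total += w * (value(set(subset) | {feature}) - value(set(subset)))
    return total

phi = {f: shapley(f) for f in features}
print(phi)
```

By the efficiency property, the attributions sum exactly to the difference between the pixel's prediction and the baseline prediction, which is what makes local SHAP values directly interpretable as per-factor contributions.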

This study provides insight into the strengths and limitations of machine learning-based landslide susceptibility modelling in data-scarce contexts and demonstrates how explainability can support more transparent and critically assessed susceptibility analyses. This work contributes to the development of interpretable susceptibility mapping approaches suited to preparedness and decision-support applications. 

How to cite: Arellano, C. M., Hölbling, D., Nafieva, E., van Ek, J. J., Henriod, S., Rebois, Y., Schwingshandl, A., Forcieri, S., Heidrich, R., Hörbe, I., and Abad, L.: Interpreting landslide susceptibility models using explainable machine learning, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-13408, https://doi.org/10.5194/egusphere-egu26-13408, 2026.

X3.105
|
EGU26-21505
Paraskevas Tsangaratos, Ioannis Matiatos, Ioanna Ilia, and Konstantinos Markantonis

Groundwater pollution is a persistent, largely hidden risk in Mediterranean farming basins such as Western Thessaly (Greece), where heavy irrigation, seasonal recharge pulses, and highly variable geology can accelerate the movement of contaminants from the land surface into aquifers. Intrinsic vulnerability maps are therefore essential for early warning, land-use decisions, and risk-aware governance. However, the widely used DRASTIC index, despite its practicality, relies on fixed weights and linear scoring, which limits its ability to capture nonlinear relationships and changing, time-dependent exposure. To overcome these constraints, we present a hybrid, explainable framework that strengthens the classic DRASTIC structure by introducing an eighth factor, Transit Time (TT), and pairing the resulting parameter set with a tree-based machine learning approach centered on Random Forest (RF) to improve predictive skill, spatial detail, and interpretability. We build and compare four configurations: a baseline 7-parameter DRASTIC map (DRASTIC A), an extended DRASTIC map with TT (DRASTIC B), an RF model trained on the original seven DRASTIC layers (RF A), and an RF model trained on the seven layers plus TT (RF B). The models draw on thematic raster layers (e.g., depth to groundwater, recharge, soil, aquifer media, vadose zone characteristics) sampled at nitrate monitoring locations, with TT included as a practical proxy for travel-time delay and attenuation processes that influence when and how strongly pollution signals reach the aquifer. Because spatial autocorrelation can inflate performance under ordinary random splits, we adopt spatial cross-validation (block- and buffer-based schemes) to better test real-world transferability. We address class imbalance with SMOTE and evaluate outcomes using accuracy, F1-score, class-wise precision/recall, ROC-AUC, and confusion matrices, with special attention to correctly identifying high and very-high vulnerability areas.
Among all approaches, RF B performs best (accuracy 0.8214; F1 0.8788), indicating that the combination of nonlinear learning and transit-time information yields clearer, more reliable discrimination of vulnerable zones than either index mapping alone or RF without TT. To make the models transparent and defensible for stakeholders, we apply explainable AI methods (permutation importance and SHAP) to reveal both overall driver rankings and local, pixel-level contributions. Depth to groundwater, vadose zone influence, and recharge consistently stand out as the strongest controls, while TT, although not always dominant in global importance, meaningfully sharpens the spatial tracing of vulnerable corridors and pathways. Finally, to support risk-informed planning under uncertainty, we produce confidence maps based on the maximum predicted class probability and normalized entropy maps that summarize ambiguity across classes, clearly separating areas where the model is both confident and vulnerable from areas where predictions are uncertain and additional monitoring or field verification is justified. These layers are masked for nodata regions and designed for direct integration into management workflows. Overall, the proposed Random Forest–DRASTIC–Transit Time framework demonstrates how a spatially validated, explainable ML extension of DRASTIC can deliver more detailed, decision-ready vulnerability maps by blending static hydrogeologic controls with dynamic travel-time behavior, offering a scalable pathway for more sustainable groundwater protection as environmental pressures intensify.
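The two uncertainty products described here, maximum-probability confidence and normalized entropy, reduce to a few array operations on the classifier's predicted probabilities. A minimal numpy sketch with made-up values for a 2×2 raster and four vulnerability classes:

```python
import numpy as np

# Predicted class probabilities for a tiny 2x2 raster with 4 classes
# (hypothetical values; each pixel's probabilities sum to 1).
proba = np.array([
    [[0.70, 0.20, 0.05, 0.05],
     [0.25, 0.25, 0.25, 0.25]],
    [[0.10, 0.10, 0.10, 0.70],
     [0.40, 0.30, 0.20, 0.10]],
])  # shape: (rows, cols, classes)

n_classes = proba.shape[-1]

# Confidence map: maximum predicted class probability per pixel.
confidence = proba.max(axis=-1)

# Normalized Shannon entropy: 0 = fully decisive, 1 = maximally ambiguous.
eps = 1e-12  # guard against log(0)
entropy = -np.sum(proba * np.log(proba + eps), axis=-1) / np.log(n_classes)

print(confidence)
print(entropy.round(3))
```

A pixel with a uniform probability vector gets entropy 1 (maximum ambiguity) regardless of the number of classes, which is why the normalization by log of the class count makes maps comparable across models.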

How to cite: Tsangaratos, P., Matiatos, I., Ilia, I., and Markantonis, K.: Explainable Machine Learning for Spatio-Temporal Groundwater Vulnerability Mapping: A Random Forest-DRASTIC-Transit Time Framework for Western Thessaly, Greece, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21505, https://doi.org/10.5194/egusphere-egu26-21505, 2026.

X3.106
|
EGU26-23030
Yao Sun, Ahmed Abdelsalama, Xizhe Xue, Patrick Aravena Pelizari, and Christian Geiß

Detailed information on building attributes, such as construction materials and structural types, is a fundamental prerequisite for accurate natural hazard risk assessment. Recent deep learning approaches based on convolutional neural networks (CNNs) have demonstrated the effectiveness of extracting such exposure-related information from street-level imagery, establishing a solid foundation for data-driven building characterization.

This study is motivated by the emerging capabilities of vision language models (VLMs), which leverage large-scale pretraining and generalized visual semantic reasoning to provide a unified framework for interpreting complex urban scenes. To assess their effectiveness in structural exposure modeling, we conducted comparative experiments using zero-shot inference and fine-tuning strategies. The dataset consists of over 29,000 annotated street-level façade images from the earthquake-prone region of Santiago, Chile.

The zero-shot results indicate that general-purpose off-the-shelf VLMs (e.g., InternVL2-8B) struggle to accurately infer complex structural engineering attributes due to insufficient domain-specific knowledge. In contrast, fine-tuning based on InternVL3-2B yields a substantial performance improvement: the model achieves high accuracy in building height estimation (90.6%) and roof shape classification (87.0%), and demonstrates strong performance in predicting lateral load-resisting system materials (78.8%) and complex seismic building structural types (SBST, 72.6%). These results suggest that fine-tuned VLMs can effectively acquire domain expertise, enabling scalable and low-cost exposure modeling. Future work will further investigate the potential of VLMs to infer latent structural characteristics through semantic reasoning.

How to cite: Sun, Y., Abdelsalama, A., Xue, X., Aravena Pelizari, P., and Geiß, C.: Vision-Language Models for Structural Exposure Modeling from Street-Level Imagery, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-23030, https://doi.org/10.5194/egusphere-egu26-23030, 2026.

X3.107
|
EGU26-9306
|
ECS
Chenzuo Ye and Takashi Oguchi

The Noto Peninsula, Japan, experienced two strong earthquakes within a short interval of approximately eight months in 2023 and 2024; the first event triggered only a limited number of landslides (28), whereas the second event resulted in widespread slope failures, with more than 2,300 landslides identified. This rare sequence provides a unique opportunity to investigate how landslide susceptibility and triggering mechanisms evolve under repeated seismic loading within the same tectonic and geomorphological setting. However, conventional landslide susceptibility studies typically treat successive earthquakes as independent events, overlooking the potential influence of prior seismic damage on subsequent slope failures.

In this study, we propose an interpretable, SHAP-based machine learning framework to analyze the temporal evolution of earthquake-induced landslide susceptibility during the 2023–2024 Noto earthquake sequence. An XGBoost model was first trained using landslide data from the 2023 event, during which landslide occurrences were sparse, and transfer learning was employed to enhance model robustness under small-sample conditions. SHAP-based interpretation indicates that landslide susceptibility in 2023 was primarily controlled by topographic and long-period seismic factors, with the top five contributors being elevation, surface roughness, slope gradient, long-period spectral acceleration (PSA at 3.0 s), and the topographic position index (TPI), reflecting a preconditioning process that brought slopes close to instability. The resulting susceptibility map was then compared with the spatial distribution of landslides triggered by the 2024 earthquake, revealing a pronounced spatial overlap between the 2023 high-susceptibility (potentially unstable) zones and the 2024 observed landslide locations. In contrast, SHAP analysis for the 2024 event shows a shift in dominant controlling factors toward roughness, peak ground velocity (PGV), TPI, mid-period spectral acceleration (PSA at 1.0 s), and slope gradient, indicating a release process in which pre-weakened slopes were driven beyond their stability thresholds by stronger and more velocity-dominated ground motion.

The results indicate a pronounced spatial correspondence between high-susceptibility areas identified after the 2023 earthquake and landslide occurrences in 2024, with a lift value of 2.80 for the top 5% susceptibility class. SHAP-based interpretation reveals a clear transition in dominant triggering factors between the two events. In 2023, landslide susceptibility was primarily controlled by long-period ground motion and topographic framework, reflecting a preconditioning process that brought slopes close to failure. In contrast, the 2024 earthquake activated widespread landslides through velocity-related and mid-period seismic components, representing a release process that pushed pre-weakened slopes beyond their stability thresholds.
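The lift metric quoted above has a simple definition: the landslide density within the top susceptibility fraction divided by the overall landslide density. A short numpy sketch on synthetic scores and labels (illustrative only, not the Noto data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical susceptibility scores and observed landslide labels for
# 10,000 pixels; the occurrence rate is made to grow with the score.
score = rng.random(10_000)
label = rng.random(10_000) < 0.02 * (1 + 9 * score)

# Lift of the top-5% susceptibility class: landslide density inside the
# top 5% of scores divided by the overall landslide density.
threshold = np.quantile(score, 0.95)
top = score >= threshold
lift = label[top].mean() / label.mean()
print(round(lift, 2))
```

A lift of 1 means the high-susceptibility class performs no better than chance; a value of 2.80 for the top 5%, as reported above, means landslides were almost three times denser there than across the study area as a whole.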

These findings demonstrate that earthquake-induced landslides in the Noto Peninsula may follow a slope-state–controlled evolutionary pattern, in which earlier seismic events systematically modify slope conditions and strongly influence the spatial and mechanistic characteristics of subsequent failures. This study highlights the importance of incorporating inter-event interactions into landslide susceptibility modeling and provides new insights for post-earthquake hazard assessment in regions affected by sequential seismic events.

How to cite: Ye, C. and Oguchi, T.: Legacy Effects of Earthquake-Induced Landslides under Sequential Seismic Events: SHAP-Based Interpretation of Preconditioning and Release Processes during the 2023–2024 Noto Earthquakes, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-9306, https://doi.org/10.5194/egusphere-egu26-9306, 2026.

X3.108
|
EGU26-13884
|
ECS
Mohamed Abdelkader, Dávid Abriha, and Árpád Csámer

Landslides are one of the most destructive natural hazards, causing significant loss of life, extensive damage to infrastructure, and long-term disruption to socioeconomic development, particularly in rapidly urbanizing regions. Consequently, accurate landslide susceptibility mapping is a critical tool for effective hazard assessment and risk management. Although machine learning algorithms are used extensively for landslide susceptibility mapping, their black-box nature often limits the acceptance of model results by decision-makers. This study presents an explainable artificial intelligence framework for landslide susceptibility mapping that integrates SHapley Additive exPlanations (SHAP) with Recursive Feature Elimination (RFE) to optimize ensemble machine learning models. The proposed framework was tested on an arid and rapidly developing region in East Cairo, Egypt. A landslide inventory of more than 180 events was compiled from field surveys and satellite imagery, and fourteen conditioning factors representing topographic, geological, and anthropogenic controls were initially considered. Unlike traditional feature selection approaches that rely mainly on statistical importance, the proposed framework selects predictors based on their physical and geological contribution to slope instability. The results show that SHAP-based feature selection significantly reduces model complexity while maintaining high predictive performance, retaining only five predictors for Random Forest and nine for XGBoost. Beyond predictive performance, the framework provides clear physical and geological explanations for slope failure processes. SHAP interaction analysis identified two dominant instability mechanisms: human-induced factors within a 200 m buffer around road cuts, and structural instability on slopes with orientations ranging from 225° to 320°, as expected from kinematic conditions for daylighting within the study area.
These findings demonstrate that explainable AI can move beyond black-box prediction by linking machine learning outputs to geological ground truth. Overall, this proposed framework offers a practical and interpretable tool for landslide hazard assessment and sustainable land-use planning, particularly in data-scarce and rapidly developing environments.

Keywords: Explainable AI, SHAP, feature selection, landslide susceptibility, Hazard assessment

How to cite: Abdelkader, M., Abriha, D., and Csámer, Á.: From Black-Box Predictions to Trustworthy Landslide Susceptibility Mapping Using Explainable AI, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-13884, https://doi.org/10.5194/egusphere-egu26-13884, 2026.

X3.109
|
EGU26-22979
|
ECS
|
Virtual presentation
Aikaterini-Alexandra Chrysafi

Landslide susceptibility mapping is widely used for risk reduction, yet many high-performing deep models remain hard to interpret and rarely communicate where predictions are reliable. We present an explainable, confidence-mapped workflow that combines remote sensing/GIS-derived conditioning layers with modern deep tabular architectures (FT-Transformer, ResMLP, and TabNet). To test the developed methodology, a case-study area in the Regional Unit of Magnesia (Zagora–Mouresi, Greece) was selected. Conditioning factors describing terrain, hydrology, proximity, and geology are Frequency Ratio–weighted, then used to train probabilistic susceptibility models evaluated with discrimination and calibration metrics. Spatial confidence is mapped using normalized predictive entropy to identify zones where susceptibility estimates are less decisive. Explainability is achieved with SHapley Additive exPlanations (SHAP), consistently highlighting elevation as the dominant control, followed by aspect, with lithology and slope also exerting strong influence; proximity to the river network and faults and curvature-related metrics contribute secondarily. The resulting susceptibility and confidence products improve transparency for decision support and provide a scalable template for large-area hazard assessment.

How to cite: Chrysafi, A.-A.: Explainable, Confidence-Mapped Deep Learning for Remote-Sensing–Driven Landslide Susceptibility Mapping, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-22979, https://doi.org/10.5194/egusphere-egu26-22979, 2026.

X3.110
|
EGU26-13768
|
ECS
|
Virtual presentation
Dimitrios Papadomarkakis, Maria-Sotiria Frousiou, and Alexandros Notas

Flooding remains among the most damaging hydro-meteorological hazards in the Mediterranean. On Euboea Island (Greece), steep terrain, ongoing land-use change, and highly connected transport corridors can intensify both flood occurrence and potential consequences. This study presents an integrated, decision-oriented framework that jointly maps (a) flood susceptibility and (b) flood impact potential (exposure), combining Google Earth Engine (GEE) for predictor generation with Python-based machine learning, explainability, and uncertainty analytics.

A multi-source predictor database is assembled in GEE from satellite and ancillary datasets to represent key topographic, climatic, geological, and pedological controls on flooding. Terrain and morphometric predictors are derived from the ALOS 12.5 m DEM, including elevation, slope angle, plan and profile curvature, Topographic Wetness Index (TWI), and Topographic Position Index (TPI). Hydrologic connectivity is captured through distance to the river/stream network. Climatic forcing is represented using the Modified Fournier Index (MFI) from WorldClim v2.0 as a predictor variable for rainfall influence. Subsurface controls are incorporated via lithology (geological map) and topsoil texture (LUCAS database; sand, silt, and clay content), which modulate infiltration, storage, and runoff generation. Land-surface conditions affecting runoff are characterized using CORINE Land Cover 2018, reflecting vegetation cover and imperviousness patterns. In parallel, exposure is quantified using land-use intensity, building footprint/coverage metrics, and road-network descriptors (density, proximity, connectivity) to identify areas where flood impacts are likely to be most severe.
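For reference, the Modified Fournier Index used here as the rainfall predictor is computed from monthly precipitation totals p_i as MFI = (Σ p_i²) / P, where P is the annual total. A minimal sketch with hypothetical monthly values:

```python
# Modified Fournier Index from twelve monthly precipitation totals (mm).
# The squaring weights wet months more heavily, so two climates with the
# same annual total but different seasonality get different MFI values.
monthly_mm = [95, 80, 70, 50, 30, 15, 8, 10, 35, 70, 110, 120]  # hypothetical
annual = sum(monthly_mm)
mfi = sum(p ** 2 for p in monthly_mm) / annual
print(round(mfi, 1))
```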

Flood occurrence labels are derived from an event inventory, and spatially explicit sampling and partitioning are applied to reduce spatial autocorrelation and improve generalization. Susceptibility is modeled using tree-based ensembles (Random Forest and XGBoost), trained and evaluated in Python with spatial cross-validation and metrics capturing both discrimination and reliability (AUC, F1/TSS, Brier score, and calibration diagnostics). To explicitly communicate confidence and reveal spatial weaknesses, we generate uncertainty and entropy maps: (a) predictive uncertainty estimated from ensemble dispersion and calibrated probabilities, and (b) Shannon entropy of class probabilities to highlight ambiguous transition zones, data-sparse areas, and geomorphologically heterogeneous corridors. Explainability is delivered via SHAP (global and local), supported by interaction and partial dependence analyses to identify dominant controls and to attribute exposure hotspots to drivers such as building and road concentration.
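The spatially explicit partitioning mentioned above is commonly implemented by gridding the study area into blocks and holding out whole blocks per fold, so spatially autocorrelated neighbours never straddle a train/test split. A numpy-only sketch with synthetic coordinates (block size and fold count are illustrative, not the authors' settings):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sample coordinates (metres) for 1,000 flood/no-flood points
# in a 50 km x 30 km study area.
x = rng.uniform(0, 50_000, 1_000)
y = rng.uniform(0, 30_000, 1_000)

# Assign each sample to a 10 km x 10 km spatial block; folds then hold
# out whole blocks, not individual points.
block = (x // 10_000).astype(int) * 100 + (y // 10_000).astype(int)
unique_blocks = np.unique(block)
rng.shuffle(unique_blocks)

n_folds = 5
folds = np.array_split(unique_blocks, n_folds)

for i, test_blocks in enumerate(folds):
    test_mask = np.isin(block, test_blocks)
    print(f"fold {i}: {test_mask.sum()} test / {(~test_mask).sum()} train")
```

Because whole blocks move between train and test, performance estimated this way is a fairer proxy for transferability to unseen terrain than an ordinary random split.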

The resulting susceptibility, exposure/impact, and uncertainty–entropy maps provide transparent, decision-relevant information to support mitigation prioritization and strengthen trustworthy flood-risk screening on Euboea Island.

Keywords: flood susceptibility; exposure; impact mapping; Google Earth Engine; Python; tree-based ensembles; uncertainty; predictive entropy; SHAP; explainable AI; Euboea; Greece

How to cite: Papadomarkakis, D., Frousiou, M.-S., and Notas, A.: Explainable, Uncertainty-Aware Flood Susceptibility and Impact Mapping on Euboea Island (Greece) Using Google Earth Engine and Tree-Based Ensemble Models, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-13768, https://doi.org/10.5194/egusphere-egu26-13768, 2026.

X3.111
|
EGU26-14058
|
ECS
Aniseh Saber, Claudia De Luca, Ali Pourzangbar, and Michelle L. Bell

Heatwaves represent one of the most severe climate-related threats to European cities, where their impacts are intensified by urban heat island effects, aging populations, and uneven access to cooling resources and green infrastructure. Although heat-related risks are increasingly acknowledged in urban policy, many existing assessment frameworks continue to rely on conventional formulations that combine hazard, exposure, and vulnerability, grounded in the Intergovernmental Panel on Climate Change (IPCC) risk framework. Such approaches inadequately capture the complex and dynamic interactions among climate processes, urban morphology, and socio-demographic vulnerability, thereby limiting their usefulness for designing locally targeted and context-specific adaptation strategies.

This study presents a spatiotemporal machine-learning framework for assessing heatwave risk in Bologna, Italy, following the IPCC risk concept. High-resolution environmental, infrastructural, and socio-demographic datasets covering the period 2014–2023 were compiled at the census-tract level. A Long Short-Term Memory (LSTM) neural network was developed to capture temporal dependencies in heatwave risk and optimized using the Hippopotamus Optimization Algorithm to improve predictive performance. The model integrates a diverse set of 14 climatological, demographic, economic, and environmental indicators.

Examination of the results indicates a strong spatial agreement between observed and predicted heatwave risk patterns, with classification accuracies exceeding 77% for both low- and high-risk categories. Explainability analysis based on Partial Dependence Plots identifies temperature, vegetation cover, proximity to cooling and healthcare facilities, and the density of elderly female populations as the most influential determinants of heatwave risk. Future projections under RCP 4.5, 6.0, and 8.5 scenarios suggest a substantial expansion of high and very high heatwave risk classes by 2050. This expansion is most pronounced under the RCP 8.5 scenario, where areas classified as very high risk increase from approximately one-third of the urban area to nearly two-thirds.

The findings further highlight the mitigating role of urban green infrastructure, showing that higher vegetation density and improved proximity to green spaces can substantially reduce heatwave risk, albeit with spatially uneven benefits. By combining predictive capability with transparent interpretation, this framework offers practical, fine-scale evidence to support climate adaptation, nature-based solutions, and more equitable heat-resilient urban planning.

How to cite: Saber, A., De Luca, C., Pourzangbar, A., and L. Bell, M.: Mapping Urban Heatwave Risk with Explainable Spatiotemporal AI: Evidence from Bologna under Climate Change Scenarios, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14058, https://doi.org/10.5194/egusphere-egu26-14058, 2026.

X3.112
|
EGU26-6499
|
ECS
Alexandros Notas, Maria-Sotiria Frousiou, and Dimitrios Papadomarkakis

Wildfires in Mediterranean ecosystems are increasing in frequency, extent, and severity under the combined influence of climate change and human pressure. This trend is intensifying the need for operational hazard products that are not only accurate, but also transparent, auditable, and easy to justify to decision-makers. Here we present an open-access, fully reproducible remote-sensing workflow for (i) pre-fire danger mapping and (ii) post-fire burn severity assessment, explicitly designed around explainability rather than black-box prediction. The workflow is implemented in Google Earth Engine using only freely available data sources: Sentinel-2 and Landsat 8 optical imagery, ERA5-Land meteorological reanalysis, and OpenStreetMap ancillary layers.

Post-fire impacts are standardized through NBR (Normalized Burn Ratio) and dNBR (difference Normalized Burn Ratio), converted into burn-severity classes using established USGS-style thresholds. Pre-fire danger is mapped using a physically interpretable, rule-based score derived from six binary, pixel-level indicators representing necessary conditions for elevated danger: (1) fuel availability (vegetation presence), (2) fuel dryness (SWIR-based moisture proxies), (3) heat (2 m temperature and/or LST), (4) atmospheric dryness (relative humidity), (5) wind speed, and (6) antecedent moisture deficit (recent precipitation/soil moisture). This structure provides built-in explainability, because each pixel’s class is directly traceable to the specific conditions that triggered it.
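The two products described above, dNBR-based severity classes and a count of co-occurring binary danger conditions, can be sketched in a few lines of NumPy. The threshold values follow the commonly cited USGS dNBR classification; the toy reflectance arrays and indicator names are illustrative assumptions rather than the authors' Earth Engine code.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    return (nir - swir) / (nir + swir)

# USGS-style dNBR severity thresholds (in dNBR units, not dNBR * 1000)
DNBR_BINS = np.array([-0.25, -0.10, 0.10, 0.27, 0.44, 0.66])
DNBR_LABELS = np.array(["regrowth-high", "regrowth-low", "unburned",
                        "low", "moderate-low", "moderate-high", "high"])

def burn_severity(nir_pre, swir_pre, nir_post, swir_post):
    """dNBR = pre-fire NBR minus post-fire NBR, binned into severity classes."""
    dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
    return DNBR_LABELS[np.digitize(dnbr, DNBR_BINS)]

def danger_score(fuel, fuel_dry, heat, air_dry, wind, antecedent_deficit):
    """Count of co-occurring binary danger conditions per pixel (0-6).

    Each pixel's score is directly traceable to the specific boolean
    indicators that are True, which is what makes the map explainable.
    """
    stack = np.stack([fuel, fuel_dry, heat, air_dry, wind, antecedent_deficit])
    return stack.sum(axis=0)

# Toy pixels: one severely burned, one unchanged between dates
nir_pre, swir_pre = np.array([0.5, 0.5]), np.array([0.2, 0.2])
nir_post, swir_post = np.array([0.2, 0.5]), np.array([0.5, 0.2])
severity = burn_severity(nir_pre, swir_pre, nir_post, swir_post)
# severity -> ["high", "unburned"]
```

The same thresholding translates directly to `ee.Image` expressions in Google Earth Engine, where each boolean indicator becomes a per-pixel mask and the score is their sum.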

We demonstrate the workflow through a comparative analysis across four major Greek wildfire contexts (Attica, Euboea, Rhodes, and Evros) spanning different seasons and synoptic regimes. Using consistent pre-fire (multi-week) and post-fire compositing windows, we quantify how danger conditions co-occur prior to ignition, assess concordance between high-danger classes and observed fire perimeters, and relate pre-fire signatures to subsequent dNBR patterns, including differences associated with fuel structure, topography, and human exposure (proxied by proximity to roads and settlements from OpenStreetMap).

To move beyond qualitative map interpretation, we complement the rule-based danger score with two lightweight, fully explainable modeling layers that quantify driver effects and test cross-region generalization. First, we fit generalized additive models (GAMs) using continuous satellite- and reanalysis-derived predictors to recover nonlinear response curves and threshold-like behavior. Second, we use a hierarchical ordinal logistic regression in which baseline levels and selected driver effects can differ by region, enabling us to identify which driver–severity relationships are consistent across Mediterranean landscapes and which are site-specific. We keep the models fully interpretable by reporting GAM response curves and logistic-regression odds ratios (with uncertainty), so predicted danger can be directly linked to physical drivers rather than opaque feature-importance scores.

We generate all satellite/reanalysis-derived layers and danger/severity maps in Google Earth Engine, then export pixel-level predictor and outcome samples to fit the GAM and hierarchical logistic models in open-source Python, enabling transparent estimation of driver effects with uncertainty. Finally, we evaluate transferability using leave-one-region-out validation to identify where learned driver–danger relationships remain robust under differing regimes and where localized recalibration may be required for operational deployment.
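The leave-one-region-out protocol mentioned above can be sketched as a simple split generator: each region is held out in turn as the test set while the remaining regions form the training set. The region names below match the study areas; the sample-to-region assignment is a toy assumption.

```python
import numpy as np

def leave_one_region_out(regions):
    """Yield (region, train_idx, test_idx), holding out one region at a time.

    Unlike a random split, this keeps entire regions out of training, so
    performance on the held-out region measures spatial transferability.
    """
    regions = np.asarray(regions)
    for r in np.unique(regions):
        test = np.flatnonzero(regions == r)
        train = np.flatnonzero(regions != r)
        yield r, train, test

# Toy per-sample region labels for the four study areas
regions = ["Attica", "Attica", "Euboea", "Rhodes", "Evros", "Evros"]
splits = {r: (train, test) for r, train, test in leave_one_region_out(regions)}
# e.g. holding out Attica tests on samples 0-1 and trains on the rest
```

With real data, `scikit-learn`'s `LeaveOneGroupOut` implements the same idea and plugs directly into its cross-validation utilities.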

Keywords

wildfire danger; burn severity; Google Earth Engine; Sentinel-2; Landsat 8; ERA5-Land; dNBR; generalized additive models; hierarchical logistic regression; explainable AI; transparent hazard mapping; Mediterranean ecosystems

How to cite: Notas, A., Frousiou, M.-S., and Papadomarkakis, D.: An Open and Explainable Google Earth Engine Workflow for Wildfire Danger and Burn Severity Mapping in Mediterranean Ecosystems, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6499, https://doi.org/10.5194/egusphere-egu26-6499, 2026.

X3.113
|
EGU26-16199
|
ECS
Shivani Joshi and Srikrishnan Siva Subramanian

Landslide dams represent a major geomorphic hazard in the seismically active Himalayan belt, where temporary river blockages can lead to catastrophic outburst floods that impact downstream communities and infrastructure. Despite their importance, landslide dam susceptibility remains underexplored compared to conventional landslide hazard assessment. This study addresses this gap by developing a machine learning-based susceptibility model specifically targeting landslide dam formation, and by evaluating its spatial transferability between adjacent river basins. The following fifteen conditioning variables were compiled from diverse geospatial datasets: slope, aspect, elevation, plan curvature, relative relief, Topographic Wetness Index (TWI), distance to stream, distance to fault, distance to lineament, lithology, geomorphology, land use land cover (LULC), and median values of Normalised Difference Vegetation Index (NDVI), Normalised Difference Moisture Index (NDMI), and Normalised Difference Water Index (NDWI). A Random Forest (RF) classifier was implemented and trained exclusively on the Alaknanda basin and then applied to the neighbouring Bhagirathi basin for external validation, ensuring strict spatial separation between the training and test domains. The RF model achieved strong internal performance in the Alaknanda basin, and external validation in the Bhagirathi basin demonstrated robust transferability, with only modest performance degradation. Feature importance analysis revealed that elevation, NDMI, aspect and relative relief were the primary controls on dam formation. Susceptibility maps identified high-risk zones concentrated along deeply incised river valley segments, fault intersections, and areas underlain by high-grade metamorphic rocks.
This susceptibility map may provide actionable information for disaster risk assessment, infrastructure planning, and the development of early warning systems in the Alaknanda–Bhagirathi river system and similar mountain regions worldwide.
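A minimal sketch of the cross-catchment protocol described above: train a Random Forest exclusively on one basin, validate it externally on the other, and inspect feature importances. The synthetic stand-in data, the four-feature layout, and the elevation-driven label are illustrative assumptions, not the study's Himalayan dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

def make_basin(n, shift):
    """Synthetic basin: n samples x 4 conditioning variables.

    The binary label (dam-forming vs not) is driven by the first
    feature, standing in for elevation; `shift` mimics a modest
    distribution shift between neighbouring catchments.
    """
    X = rng.normal(loc=shift, scale=1.0, size=(n, 4))
    y = (X[:, 0] > shift).astype(int)
    return X, y

X_train_basin, y_train_basin = make_basin(300, shift=0.0)   # training basin
X_test_basin, y_test_basin = make_basin(300, shift=0.3)     # external basin

# Train on one basin only; the other basin is never seen during fitting
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train_basin, y_train_basin)

external_acc = rf.score(X_test_basin, y_test_basin)  # external validation
importances = rf.feature_importances_                # dominated by feature 0
```

Because the held-out basin has a shifted feature distribution, the external accuracy is lower than the internal one, which is exactly the "modest performance degradation" this validation design is meant to expose.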

How to cite: Joshi, S. and Siva Subramanian, S.: Landslide Dam Susceptibility Mapping in the Indian Himalayas: A Random Forest Approach with Cross-Catchment Validation, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-16199, https://doi.org/10.5194/egusphere-egu26-16199, 2026.

X3.114
|
EGU26-2691
Susceptibility modeling of hydro-morphological processes considering river topology
(withdrawn)
Nan Wang