HS1.2.2 | Advances in Model Inference, Diagnostics, Sensitivity, Uncertainty Quantification and Bayesian Approaches in Environmental Systems Models
Convener: Thomas Wöhling | Co-conveners: Cristina PrietoECSECS, Jeremy Bennett, Anneli Guthke, Uwe Ehret, Cécile CoulonECSECS, Wolfgang Nowak
Orals
| Tue, 05 May, 08:30–12:30 (CEST)
 
Room 3.16/17
Posters on site
| Attendance Tue, 05 May, 14:00–15:45 (CEST) | Display Tue, 05 May, 14:00–18:00
 
Hall A
Posters virtual
| Fri, 08 May, 14:00–15:45 (CEST)
 
vPoster spot A, Fri, 08 May, 16:15–18:00 (CEST)
 
vPoster Discussion
Properly characterizing uncertainty remains a major research and operational challenge in the Environmental Sciences. Uncertainty is inherent to many aspects of modelling: it affects model structure development; parameter estimation; the appropriate representation of data (for model input, calibration and evaluation); initial and boundary conditions; and hypothesis testing. Addressing these uncertainties is particularly important for predictive models used to support water management and decision making.

To address this challenge, two families of methods have proved particularly helpful: a) sensitivity analysis (SA), which evaluates the role and significance of uncertain factors in the functioning of systems/models, and b) the closely related methods of uncertainty analysis (UA), which seek to identify, quantify and reduce the different sources of uncertainty, as well as to propagate them through a system/model.

This session invites contributions that discuss advances, in theory and/or application, in Bayesian or frequentist methods and in SA/UA methods applicable to all Earth and Environmental Systems Models (EESMs), which embrace all areas of hydrology, such as classical hydrology, subsurface hydrology and soil science.

Topics of interest include (but are not limited to):
1) Novel methods for effective characterization of sensitivity and uncertainty including robust quantification of predictive uncertainty for model surrogates and machine learning (ML) models
2) Approaches to define meaningful priors for ML techniques in hydro(geo)logy
3) Novel methods for spatial and temporal evaluation/analysis of models
4) The role of information and error on SA/UA as well as in evaluating model consistency and reliability
5) Novel approaches and benchmarking efforts for parameter estimation
6) Improving the computational efficiency of SA/UA (efficient sampling, surrogate modelling, parallel computing, model pre-emption, model ensembles, etc.)
7) Methods for detecting and characterizing model inadequacy
8) Problem formulation/decomposition and scripted workflows for prediction-focused modelling design
9) Case studies on applied predictive modelling for decision support, management optimization under uncertainty and tools for communicating model results to stakeholders

Orals: Tue, 5 May, 08:30–12:30 | Room 3.16/17

The oral presentations are given in a hybrid format supported by a Zoom meeting featuring on-site and virtual presentations. The button to access the Zoom meeting appears just before the time block starts.
Chairpersons: Cristina Prieto, Cécile Coulon, Uwe Ehret
08:30–08:35
08:35–08:55
|
EGU26-22244
|
solicited
|
On-site presentation
Svenja Fischer

How process knowledge can improve flood forecasting: A stochastic and deterministic perspective (Invited)

Svenja Fischer

Hydrology and Environmental Hydraulics, Wageningen University & Research, Wageningen, the Netherlands

 

Flood prediction remains challenging. With a changing climate and environment, predictions are becoming more important because the impacts are increasing, and at the same time more difficult because flood-generating mechanisms change. Floods can be triggered by different processes, such as heavy rain, long-duration rain or melting snow, and these processes are expected to change in frequency and magnitude. However, current flood prediction models do not explicitly consider the different flood-generating mechanisms and treat all flood events equally. While process knowledge has been shown to improve flood estimation and reduce uncertainty in stochastic hydrology, this is less well studied for physical models; neglecting process differences can therefore introduce uncertainty into the estimation.

The first step is to identify the relationship between atmospheric and catchment characteristics, flood-generation processes and the flood hydrograph. The identified relations are then integrated in the hydrological models by directly tailoring the physical relations to each flood type. In combination with a dynamic weighting approach, this enables a non-stationary and flexible flood prediction that can capture the changing frequency and magnitude of flood types and provide different flood scenarios with assigned probabilities. This approach not only reduces the error in flood peak prediction but also improves the link of the model parameters to physical processes and thus increases our understanding of flood processes. Moreover, the uncertainty of the considered process can be directly quantified by a probability-based evaluation.

How to cite: Fischer, S.: How process knowledge can improve flood forecasting: A stochastic and deterministic perspective (Invited), EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-22244, https://doi.org/10.5194/egusphere-egu26-22244, 2026.
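
The dynamic weighting idea described above, combining flood-type-specific predictions with assigned process probabilities, can be illustrated as a simple probabilistic mixture. The flood types, peak distributions, and weights below are entirely hypothetical and serve only to show the mechanics:

```python
import numpy as np

# hypothetical flood-type-specific peak predictions (m3/s): mean and spread
types = {
    "heavy_rain": (450.0, 60.0),
    "long_rain":  (380.0, 40.0),
    "snowmelt":   (300.0, 30.0),
}
# time-varying process probabilities, e.g. derived from atmospheric indicators
w = {"heavy_rain": 0.6, "long_rain": 0.3, "snowmelt": 0.1}

rng = np.random.default_rng(0)
# sample the mixture: each flood type contributes in proportion to its weight
samples = np.concatenate([
    rng.normal(mu, sd, size=int(100_000 * w[k]))
    for k, (mu, sd) in types.items()
])
# mixture mean and a 90% predictive interval for the flood peak
q = np.percentile(samples, [5, 95])
print(samples.mean(), q)
```

Because the weights can be updated as conditions change, the same machinery yields a non-stationary predictive distribution rather than a single deterministic peak.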

08:55–09:05
|
EGU26-18152
|
ECS
|
On-site presentation
Muhammad Hammad, Rajeshwar Mehrotra, and Ashish Sharma

Hydrological models often fail to fully capture catchment responses, which leaves systematic residuals between modelled and observed flow. Here, we present a comprehensive state-dependent, two-stage residual modelling framework that separates residuals into modelling error, arising from structural and parametric limitations, and observation error, arising from measurement and forcing data uncertainty. The residuals are estimated conditional on hydrological states, allowing the error dynamics to vary across flow regimes rather than assuming stationarity. The residual model is trained on 124 CAMELS-AU catchments from diverse climatic regions across Australia and is tested on independent catchments from the continent. The results demonstrate improved correction across all flow regimes, specifically for high (>95th percentile) and extremely high flows (>99th percentile). To enhance generalizability, Minimum Redundancy Maximum Relevance (mRMR) feature selection is employed to identify the most important catchment attributes, which are used as static inputs alongside the dynamic model states and hydrological forcings. The framework is applicable to fully calibrated, partially calibrated, and uncalibrated hydrological models, and remains effective under limited or absent streamflow data. By explicitly modelling residuals as separable state-dependent processes, the proposed framework provides a robust method for improved streamflow correction, with particular relevance for peak flow estimation and applications in data-scarce environments.

How to cite: Hammad, M., Mehrotra, R., and Sharma, A.: Modelling Hydrologically-Conditioned Residuals For Improved Error Correction Across Different Flow Regimes, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18152, https://doi.org/10.5194/egusphere-egu26-18152, 2026.
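
The core idea of estimating residuals conditional on hydrological state, rather than assuming a stationary error model, can be shown in a minimal, entirely synthetic sketch. Here the "state" is simply a low/high-flow split and the correction is the state-conditional mean residual (the actual framework is far richer):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic truth: the model under-predicts high flows, over-predicts low flows
q_sim = rng.gamma(2.0, 20.0, size=5_000)                     # simulated flow
resid = np.where(q_sim > 60, 0.15 * q_sim, -2.0)             # state-dependent error
q_obs = q_sim + resid + rng.normal(0, 1.0, size=q_sim.size)  # plus observation noise

# stage 1: estimate the mean residual conditional on the flow state
high = q_sim > 60
corr = {True: (q_obs - q_sim)[high].mean(),
        False: (q_obs - q_sim)[~high].mean()}

# stage 2: apply the state-conditional correction
q_corrected = q_sim + np.where(high, corr[True], corr[False])

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(rmse(q_obs, q_sim), rmse(q_obs, q_corrected))
```

A stationary (single global) correction would average the opposing low-flow and high-flow biases away; conditioning on state preserves them and reduces the error in both regimes.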

09:05–09:15
|
EGU26-12725
|
ECS
|
On-site presentation
Omar Cenobio-Cruz and Giuliano Di Baldassarre

Understanding how uncertainty propagates in hydrological modelling, from precipitation inputs to streamflow simulations, is essential for improving model sensitivity and strengthening the interpretation of model outputs. While gridded precipitation products are increasingly developed and widely used in different applications worldwide, their impacts on the uncertainty of simulated streamflows remain largely unexplored. In this work, we tested a variety of state-of-the-art precipitation datasets to explore how their uncertainty cascades through a process-based hydrological model and influences streamflow predictions, using the Reno River basin in Italy as a case study. Our results indicate that, although precipitation patterns are broadly consistent across datasets, substantial differences emerge at seasonal and annual scales, especially in complex terrain. Moreover, precipitation uncertainties are not only propagated but also amplified in the simulated streamflow, by a factor of 3.5 on average in the dry season. The opposite occurs in the wet season, where uncertainty slightly decreases. The subsequent analysis reveals that the influence of precipitation uncertainty differs among subbasins. As such, our work emphasises the substantial impact of precipitation forcing in hydrological modelling and the significance of evaluating and quantifying uncertainty propagation.

How to cite: Cenobio-Cruz, O. and Di Baldassarre, G.: From precipitation datasets to streamflow simulations: Tracing the propagation of uncertainty in hydrological modelling, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-12725, https://doi.org/10.5194/egusphere-egu26-12725, 2026.
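
One simple way to express the amplification reported above is the ratio of the coefficient of variation across the streamflow ensemble to that across the precipitation ensemble. The numbers below are made up for illustration and are not the study's data:

```python
import numpy as np

# hypothetical dry-season totals from five precipitation products (mm)
P_dry = np.array([55.0, 70.0, 48.0, 62.0, 80.0])
# streamflow simulated by the same model forced with each product (mm)
Q_dry = np.array([4.0, 9.5, 2.1, 6.8, 14.0])

def cv(x):
    # coefficient of variation: spread relative to the ensemble mean
    return x.std(ddof=1) / x.mean()

# >1 means input uncertainty is amplified on its way to streamflow
amplification = cv(Q_dry) / cv(P_dry)
print(round(amplification, 2))
```

In dry conditions, nonlinear runoff generation makes streamflow disproportionately sensitive to precipitation differences, which is why such a ratio can substantially exceed one.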

09:15–09:25
|
EGU26-18127
|
ECS
|
On-site presentation
Mara Ruf and Daniel Straub

The optimization of flood risk mitigation measures necessitates estimating flood risk both without mitigation and under a wide range of combinations of mitigation measures. This flood risk estimation is a complex task, involving climate, hydrological, hydraulic and economic process components. Moreover, these components do not form a linear process chain but interact, for example through local protection measures and potential flood protection failures. Additionally, flood risk assessment is accompanied by significant natural and model uncertainties.

We developed a probabilistic model capable of efficiently estimating the flood risk at river scale [1]. It explicitly models the interplay among flood process components and mitigation measures, making it well suited to estimate the benefit of individual mitigation measures or combinations thereof. In the latter case, the model captures the joint effect of mitigation measures, rather than summing their independent benefits.

Decision making in flood risk management involves numerous possible mitigation measures, multiple conflicting objectives (such as minimizing costs versus maximizing risk reduction), and generally very high uncertainties. In this contribution, we present how the flood risk model can support decision making for the selection of flood mitigation measures. Identifying Pareto-optimal combinations of mitigation measures at river scale poses several challenges: a combinatorially large design space, partly discrete optimization variables, substantial natural and model uncertainties, and large variance in the sample-based risk estimates. We present an optimization framework tailored for these settings, which balances the robustness of sample-based risk estimates against the convergence behavior of the optimization under computational constraints.

How to cite: Ruf, M. and Straub, D.: Optimizing flood risk mitigation measures at river scale, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18127, https://doi.org/10.5194/egusphere-egu26-18127, 2026.

09:25–09:35
09:35–09:45
|
EGU26-4054
|
ECS
|
On-site presentation
Seth N. Linga, Carmen Aguiló-Rivera, Olivia Richards, Samuel Flinders, Joshua Larsen, Michela Massimi, Giovanni De Grandis, and Arnald Puy

Estimates of irrigation water withdrawals (IWW) by global irrigation models (GIM) depend on assumptions about which features (irrigated lands, crop types, schedules and water sources) are represented and how they are formalised. While uncertainty and sensitivity analyses (UA/SA) routinely interrogate parameter uncertainty, many assumptions are qualitative or pragmatic, resist numerical characterisation and thus lie beyond conventional quantitative approaches.

We addressed this gap by subjecting 100 irrigation modelling assumptions, drawn from c. 50 papers, to an expert elicitation process involving eleven scientists and five irrigators. Experts were asked to rank the assumptions in terms of influence and then assess the first ten based on their situational limitations, plausibility, choice space, peer agreement and influence on model outputs.

Scientists identified irrigated area, irrigation efficiency and water availability, often represented by single datasets, as primary drivers of global IWW estimates. Both scientists and irrigators judged these assumptions to be highly influential with weak pedigree (quality of the knowledge base), exhibiting limited empirical support, derivation under practical constraints, multiple plausible alternatives, and low peer agreement, and thus placing them in the NUSAP "danger zone". 

By linking assumption influence with epistemic strength, this study extends conventional UA/SA, demonstrating that extended peer engagement can reveal overlooked uncertainties, enhance transparency and strengthen the robustness of global IWW assessments under deep uncertainty.

How to cite: Linga, S. N., Aguiló-Rivera, C., Richards, O., Flinders, S., Larsen, J., Massimi, M., De Grandis, G., and Puy, A.: Extended peer community finds limited epistemic justification for key assumptions in global irrigation models, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-4054, https://doi.org/10.5194/egusphere-egu26-4054, 2026.

09:45–09:55
|
EGU26-8668
|
ECS
|
On-site presentation
Joo Ho Lee, Dae Hee Cho, Dong Gun Kim, June Wee, Sang Chul Lee, and Jung A Lee

Climate change and rapid urbanization are reshaping insect phenology and spatial occurrence patterns, leading to increasingly frequent nuisance insect outbreaks in urban environments. In dense metropolitan areas, sudden mass emergence of nuisance insects, such as mayflies, red-backed lovebugs, and non-biting midges, can cause sanitation concerns, disruption of urban infrastructure, and surges in public complaints, placing growing pressure on local environmental management. Despite these challenges, most current management practices remain reactive, relying on complaint-driven responses after outbreaks occur, which limits timely and efficient allocation of monitoring and management resources.

This study presents a predictive environmental modelling framework designed to support municipal decision-making by forecasting monthly nuisance insect outbreak risk at the district (gu) level in Seoul, South Korea. Rather than pursuing nationwide prediction, the study focuses on a single metropolitan system where environmental heterogeneity, administrative demand, and operational feasibility are closely aligned. By fixing the spatial analysis unit at the municipal district level, the framework delivers risk information directly compatible with urban monitoring plans, prioritization of management efforts, and allocation of limited resources.

A GIS-based spatial database was constructed by integrating nuisance insect occurrence history derived from citizen-science platforms and open biodiversity databases, monthly climate variables (mean temperature and cumulative precipitation), and district-level land cover composition. Occurrence records were subjected to quality control procedures, including coordinate validation and spatial de-duplication, and aggregated into monthly district-level counts as a proxy for outbreak intensity. Climate predictors were selected for interpretability and relevance to insect life cycles, while land cover metrics emphasized water and wetland areas, green spaces, and urbanized land.

To characterize baseline spatial tendencies, species distribution modelling was applied to derive habitat suitability indices for each target taxon. These indices were incorporated as auxiliary predictors to support interpretation of spatial risk patterns rather than serving as standalone forecasts. A predictive model integrating climate variables, land cover composition, occurrence history, and habitat suitability indices was then developed to estimate one-month-ahead outbreak risk scores for each district. Continuous risk scores were translated into ordinal risk classes using objective threshold rules to facilitate interpretation and identification of priority districts.

The predicted results indicate clear seasonal and spatial heterogeneity in outbreak risk across Seoul. Elevated risk tends to concentrate within specific seasonal windows, while district-level patterns vary according to local environmental conditions. Species-specific differences suggest that the relative importance of spatial drivers differs among taxa, with some showing stronger associations with water-related land cover and others responding more strongly to urban–green space configurations.

By delivering interpretable, one-month-ahead risk information at an administrative scale, the proposed framework provides a practical basis for shifting nuisance insect management from reactive responses toward anticipatory, risk-informed planning. The workflow is implemented as a reproducible, GIS-based pipeline that can be updated as new climate or occurrence data become available, demonstrating how predictive environmental modelling can function as an operational decision-support tool for urban environmental management under increasing climate and ecological uncertainty.

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (RS-2021-NR060142).

How to cite: Lee, J. H., Cho, D. H., Kim, D. G., Wee, J., Lee, S. C., and Lee, J. A.: A Predictive Environmental Modelling Framework for Decision Support: Monthly Municipal-Level Forecasting of Nuisance Insect Outbreak Risk in Seoul, South Korea, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-8668, https://doi.org/10.5194/egusphere-egu26-8668, 2026.

09:55–10:05
|
EGU26-15704
|
On-site presentation
Darri Eythorsson and the Comphyd team

Key challenges in hydrologic modelling include characterizing and propagating uncertainty, diagnosing model sensitivity, and conducting robust hypothesis testing. Traditionally, hydrologic model workflows rely on fragmented and bespoke scripts that obscure key sources of uncertainty, confound model performance evaluation, and impede reproducibility. We present SYMFLUENCE, an open-source framework that operationalizes end-to-end hydrological simulations, from conceptualisation, through model compilation and data processing, to calibration and visualisation, by integrating models, data, and computation into a coherent, reproducible system architecture. 

SYMFLUENCE provides a modular pipeline spanning the full hydrological modelling lifecycle: domain definition and watershed delineation, sub-basin and hydrological response unit (HRU) discretization, multi-source data acquisition and preprocessing, model input preparation, model instantiation, parameter estimation, multi-objective evaluation, and visualization. Each stage offers interchangeable components: users can select among delineation tools (TauDEM, pysheds), existing geofabric products such as MERIT-Basins and TDX Hydro, forcing datasets (e.g., ERA5, CERRA, CARRA, AORC), discretization schemes, model structures, calibration algorithms, and evaluation metrics, while the framework manages technical execution. This separation of concerns allows researchers to express scientific choices (e.g., spatial resolution, process representation, objective functions) independently from their computational implementation, reducing the cognitive burden of workflow orchestration and enabling systematic comparison of modelling decisions.

At its core, SYMFLUENCE employs a declarative YAML specification to define entire modelling experiments—including parameter bounds, sampling strategies, objective functions, and multi-criteria evaluation metrics. This design directly addresses reproducibility challenges by ensuring that modelling decisions are consistent and transparent across computing environments. The framework supports model-agnostic ensemble generation for uncertainty quantification and model optimisation across diverse model structures and model ecosystems (SUMMA, FUSE, GR4J, HYPE, NextGen, and others), automated provenance capture, and efficient parallel execution for large-domain sensitivity experiments. 

We present applications spanning single-basin calibration to continental-scale ensemble analyses. These cases illustrate how SYMFLUENCE enables rigorous benchmarking of model performance by holding computational infrastructure constant, allowing structural uncertainty to be isolated from implementation artifacts. By bridging technical infrastructure and scientific inference, SYMFLUENCE enables systematic and transparent exploration of alternative modelling options.  

 

How to cite: Eythorsson, D. and the Comphyd team: On the Architecture of Integration in Hydrological Modelling — Orchestrating Reproducible, Scalable, and Transparent Workflows with SYMFLUENCE, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-15704, https://doi.org/10.5194/egusphere-egu26-15704, 2026.
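
The declarative-specification idea, scientific choices expressed as data while the framework handles execution and validation, can be mimicked in a few lines. The keys, option names, and bounds below are invented for illustration and are not SYMFLUENCE's actual schema:

```python
# a YAML-like experiment specification, expressed here as a Python dict
spec = {
    "domain": {"delineation": "pysheds", "geofabric": "MERIT-Basins"},
    "forcing": "ERA5",
    "model": "GR4J",
    "calibration": {
        "algorithm": "DDS",
        "bounds": {"x1": [10, 1200], "x2": [-5, 5]},
        "objective": "KGE",
        "iterations": 500,
    },
}

def validate(spec):
    # the framework, not the user, checks that the declared choices are coherent
    required = {"domain", "forcing", "model", "calibration"}
    missing = required - spec.keys()
    if missing:
        raise ValueError(f"missing sections: {sorted(missing)}")
    for name, (lo, hi) in spec["calibration"]["bounds"].items():
        if lo >= hi:
            raise ValueError(f"empty parameter bound for {name}")
    return True

print(validate(spec))
```

Because the whole experiment lives in one serializable object, it can be versioned, diffed, and re-run unchanged on another machine, which is the reproducibility argument the abstract makes.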

10:05–10:15
Coffee break
Chairpersons: Thomas Wöhling, Anneli Guthke, Wolfgang Nowak
10:45–10:50
10:50–11:10
|
EGU26-14866
|
solicited
|
Highlight
|
On-site presentation
Saman Razavi, Banamali Panigrahi, and Hamed Abbasnezhad

The recent passing of Ilya M. Sobol’ marks the loss of one of the most influential figures in the development of global sensitivity analysis (GSA). Sobol’s work fundamentally shaped how uncertainty in model outputs is attributed to uncertain inputs, providing a rigorous and widely adopted framework that has become a cornerstone of uncertainty and sensitivity analysis across the Earth, environmental, and hydrological sciences.

This contribution first offers a brief tribute to Sobol’s scientific legacy and a concise review of the conceptual foundations of GSA. We revisit the primary question that motivated Sobol’s work—How much of the uncertainty in the model output is caused by each uncertain input?—and discuss why this question remains central for the analysis of complex, nonlinear, and high-dimensional models. We also emphasize that the principles underpinning GSA are increasingly relevant in the context of artificial intelligence (AI), where complex and high-dimensional models demand robust and transparent methods for attributing influence and uncertainty.

Building on this foundation, we highlight recent developments around variogram analysis of response surfaces (VARS), and in particular X-VARS, which extend GSA concepts to settings relevant for explainable AI (XAI). By leveraging paired perturbations and scale-explicit analysis, X-VARS enables efficient and robust attribution of uncertainty and influence in complex models, making GSA practical for modern AI-driven applications. Compared to established explainability methods such as SHAP, X-VARS offers substantial gains in computational efficiency while providing diagnostically richer insight into nonlinearity, interactions, and scale dependence.

We conclude by highlighting some key challenges and opportunities for the next generation of GSA methods in complex modelling and AI applications.

How to cite: Razavi, S., Panigrahi, B., and Abbasnezhad, H.: Ilya M. Sobol’ (1926–2025): A Tribute and Overview of the Foundations of Global Sensitivity Analysis, Recent Advances, and Extensions toward Explainable Artificial Intelligence, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14866, https://doi.org/10.5194/egusphere-egu26-14866, 2026.
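
As a reminder of the quantity Sobol' introduced, the following minimal sketch estimates first-order indices with the pick-freeze (Saltelli-style) estimator on a toy additive function; the function, coefficients, and sample size are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([4.0, 1.0])   # coefficients of the additive test function

def f(x):
    # f(x) = 4*x1 + x2 with x_i ~ U(0, 1); analytic first-order indices
    # are a_i^2 / sum(a^2) = [16/17, 1/17] since Var(x_i) is equal
    return x @ a

n, d = 200_000, 2
A = rng.uniform(size=(n, d))   # two independent sample matrices
B = rng.uniform(size=(n, d))
fA, fB = f(A), f(B)
var = np.concatenate([fA, fB]).var()

S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]        # "freeze" all inputs except x_i
    # pick-freeze estimator of the first-order Sobol' index
    S[i] = np.mean(fB * (f(ABi) - fA)) / var
print(S.round(3))
```

Each S[i] answers Sobol's question directly: the fraction of output variance attributable to input i alone, which is also the quantity scale-aware extensions such as VARS generalize.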

11:10–11:20
|
EGU26-5846
|
ECS
|
Virtual presentation
Zhonghao Zhang and Caterina Valeo

Infiltration is a quantitative expression of water loss in the urban hydrological cycle, but current hydrological models often use empirical or semi-empirical equations. The inherent uncertainties in these equations (which often simplify boundary conditions or the expression of water content) are not accurately conveyed to end users via hydrological models. As well, these empirical equations introduce model-form uncertainty that is often ignored before model calibration. This research focuses on analyzing the uncertainty in the infiltration process by constructing a quantitative framework based on uncertainty propagation from a physically based model (the Richards equation) to conceptually simpler models (the Green-Ampt and Horton models) that uses entropy to track the magnitude change of the uncertainty. Firstly, we conducted sensitivity analyses using various designed rainfalls (time series as well as IDF curves) in a watershed over varying spatial-temporal scales to isolate the uncertainty propagation in the infiltration equations arising from different spatial-temporal scales. This uncertainty propagation framework for infiltration answers the question of how changes in the structural assumptions of the infiltration equation affect peak flowrate errors or volume estimation errors. It adopts entropy as a quantitative index to describe the amount of information loss in the infiltration process, as well as how the uncertainty propagates over time and space. Furthermore, this entropy uncertainty framework can help in decision making related to when a more physically-based approach must be used, or when a simplified equation is still acceptable.

How to cite: Zhang, Z. and Valeo, C.: Entropy-Based Quantification of Infiltration Model-Form Uncertainty, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-5846, https://doi.org/10.5194/egusphere-egu26-5846, 2026.
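
Using entropy as an index of predictive uncertainty can be sketched by propagating an uncertain parameter through one of the simplified infiltration equations the abstract names. Horton's equation is used below; all parameter values and the lognormal uncertainty model are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def horton_cum(t, f0, fc, k):
    # cumulative infiltration under Horton's equation:
    # F(t) = fc*t + (f0 - fc)/k * (1 - exp(-k*t))
    return fc * t + (f0 - fc) / k * (1.0 - np.exp(-k * t))

def shannon_entropy(samples, bins=30):
    # Shannon entropy (bits) of a histogram of the predictive ensemble
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

# propagate uncertainty in the decay constant k through the equation
k = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=10_000)  # 1/h, hypothetical
F1 = horton_cum(0.5, f0=60.0, fc=10.0, k=k)   # mm after 0.5 h
F4 = horton_cum(4.0, f0=60.0, fc=10.0, k=k)   # mm after 4 h
h1, h4 = shannon_entropy(F1), shannon_entropy(F4)
print(h1, h4)
```

Comparing such entropies across model structures, durations, or spatial scales is one concrete way an entropy index can flag when a simplified equation is losing too much information.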

11:20–11:30
|
EGU26-20968
|
ECS
|
On-site presentation
Pablo De Weerdt, Stijn Luca, and Ellen Van De Vijver

Practical constraints often force modellers to rely on legacy data rather than targeted new data collection with a tailored sampling design for subsurface modelling. While these pre-existing datasets enable model development and gap identification, their spatial density and distribution may not always meet the desired resolution or precision. Consequently, strategic subsampling for calibration and validation is essential to ensure a robust and accurate performance assessment of the resulting models. While cross-validation techniques are commonly applied to maximize data utility, their application in spatial modelling yields overoptimistic performance estimates with high variance, particularly when data are clustered. Probability-based sampling is known to tackle bias, but its effectiveness remains poorly understood for spatially sparse and clustered legacy data.
This research evaluates the impact of subsampling methods on the validation of spatial interpolation techniques. Conditional versus random subsampling is compared for different subsample sizes in terms of actual model performance, with particular attention to geostatistical concepts that additionally account for spatial autocorrelation within subsurface data. Legacy boreholes spanning over a century, with a sparse and clustered spatial distribution, were queried to model peat content in 3D. Conditioning relied on 2D legacy attributes such as age, spatial coordinates, and target feature statistics. We also investigated how the complexity of spatial variation (represented in different models with varying anisotropic autocorrelation) influenced performance by populating the existing borehole configuration with three 3D target features: two more spatially continuous synthetic datasets and one heterogeneous, real field dataset. First results suggest that the variance of validation results was reduced exclusively in the heterogeneous case, provided the validation subset was large enough (35%) to incorporate the cumulative peat content within a borehole as a 2D attribute. These results underscore the resilience of conditioned probabilistic subsampling over alternative validation methods for legacy-based modelling.

How to cite: De Weerdt, P., Luca, S., and Van De Vijver, E.: Conditional Subsampling of Legacy Boreholes for Subsurface Model Validation, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-20968, https://doi.org/10.5194/egusphere-egu26-20968, 2026.
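
The warning that random cross-validation over clustered spatial data yields optimistic error estimates can be demonstrated with a toy 1-nearest-neighbour interpolator evaluated on clustered versus spatially held-out points. The data, clustering, and target function are entirely synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# clustered "legacy boreholes": two tight clusters of 1D coordinates
x = np.concatenate([rng.normal(0.2, 0.02, 50), rng.normal(0.8, 0.02, 50)])
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.05, x.size)   # smooth target + noise

def nn_predict(x_train, y_train, x_test):
    # 1-nearest-neighbour interpolation
    idx = np.abs(x_test[:, None] - x_train[None, :]).argmin(axis=1)
    return y_train[idx]

# random hold-out: test points fall inside the clusters, so errors look tiny
mask = rng.random(x.size) < 0.3
e_random = np.abs(nn_predict(x[~mask], y[~mask], x[mask]) - y[mask]).mean()

# spatial hold-out: evaluate at new locations between the clusters
x_new = rng.uniform(0.3, 0.7, 30)
y_new = np.sin(2 * np.pi * x_new)
e_spatial = np.abs(nn_predict(x, y, x_new) - y_new).mean()
print(e_random, e_spatial)
```

The gap between the two error estimates is exactly the optimism the abstract attributes to naive cross-validation on clustered legacy data.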

11:30–11:40
|
EGU26-12048
|
ECS
|
On-site presentation
Zhaokai Dong, Pradeep Goel, and Clare Robinson

Event Mean Concentration (EMC) is widely used to estimate stormwater pollutant loads due to its simplicity and low data requirements. However, conventional deterministic approaches typically use a single representative EMC for a catchment to estimate loads, thereby neglecting temporal variability across seasons and storm events, and potentially biasing event-level load estimates. To address this limitation, we present a Bayesian hierarchical linear mixed model that explicitly quantifies seasonal and inter-event variability in EMCs by estimating full posterior distributions, rather than a single deterministic EMC value, while retaining the operational simplicity of the traditional EMC approach. The model decomposes EMC into three hierarchical components: a global fixed effect, a seasonal random effect, and an event-level random effect. This structure enables EMC variability to be partitioned across multiple temporal scales and propagated into predictive uncertainty. The approach is demonstrated using soluble reactive phosphorus (SRP) data from a mixed urban catchment in London, Canada, comprising 18 monitored storm events across summer and fall seasons. A suite of models is developed to systematically evaluate methodological choices, including models with increasing levels of EMC variability representation (from global-only to full hierarchical structures) and models considering alternative land-use representations (lumped versus distributed). Results indicate that the full hierarchical model consistently outperforms simplified structures that exclude key variability components, as evaluated using leave-one-out cross-validation (LOO). Models that explicitly represent distinct land-use types demonstrate improved predictive performance compared to lumped representations; however, spatial disaggregation increases marginal variance, reflecting additional uncertainty. 
For the full hierarchical, land-use-distributed model, seasonal-level effects account for the largest share of marginal variability (median 63%), indicating that EMC variability for SRP manifests mainly as seasonal changes. Overall, these findings demonstrate that a single representative EMC is insufficient to characterize intrinsic temporal variability. By explicitly propagating uncertainty across hierarchical levels, the proposed Bayesian framework improves the reliability of stormwater load predictions and provides a more robust basis for management decisions.

How to cite: Dong, Z., Goel, P., and Robinson, C.: A Bayesian Hierarchical Model to Simulate Temporal Variability in Urban Stormwater Event Mean Concentrations, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-12048, https://doi.org/10.5194/egusphere-egu26-12048, 2026.
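
The hierarchical decomposition of EMC variability into seasonal and event-level components can be illustrated with a tiny synthetic generative model and a method-of-moments variance partition (a simple stand-in for the posterior partition the Bayesian model provides). All numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical hierarchical generative model for log-EMC (illustrative values):
mu = np.log(0.05)                      # global mean SRP EMC, mg/L
seasons = rng.normal(0, 0.8, size=2)   # seasonal random effects (summer, fall)
n_events = 9                           # monitored events per season
y = np.concatenate([
    mu + s + rng.normal(0, 0.5, size=n_events)   # event-level random effects
    for s in seasons
])
labels = np.repeat([0, 1], n_events)

# method-of-moments partition of marginal variance across the two levels
season_means = np.array([y[labels == s].mean() for s in (0, 1)])
var_event = np.mean([y[labels == s].var(ddof=1) for s in (0, 1)])
var_season = max(season_means.var(ddof=1) - var_event / n_events, 0.0)
share_season = var_season / (var_season + var_event)
print(f"seasonal share of marginal variance: {share_season:.0%}")
```

A full Bayesian treatment (e.g. via MCMC) replaces these point estimates with posterior distributions, which is what allows the uncertainty to be propagated into load predictions.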

11:40–11:50
11:50–12:00
|
EGU26-1947
|
ECS
|
On-site presentation
José Augusto Zevallos Ruiz

Two-dimensional (2D) hydraulic models are essential tools for flood hazard assessment, yet their calibration remains computationally demanding and strongly constrained by data availability. This study presents a Bayesian calibration framework that integrates a convolutional neural network (CNN) surrogate model to efficiently infer spatially distributed Manning’s roughness coefficients while explicitly accounting for model structural uncertainty.

The approach is applied to a reach of the Lower Piura River (Peru), a flood-prone basin characterized by limited in situ observations. An ensemble of TELEMAC-2D simulations is generated using Latin Hypercube Sampling over multiple roughness configurations, and a CNN is trained to emulate spatial water depth fields with high fidelity. To focus learning on hydraulically relevant regions, a weighted loss function based on roughness–depth sensitivity is employed.

The trained emulator is embedded within a Bayesian inference scheme that incorporates a Gaussian Process discrepancy term to represent systematic model–reality deviations. Posterior distributions of Manning’s coefficients and uncertainty parameters are estimated using Markov Chain Monte Carlo sampling. Synthetic experiments demonstrate accurate parameter recovery in hydraulically sensitive areas, while a real-case application based on optical satellite imagery confirms the method’s ability to reproduce observed flood depth patterns under data scarcity.

The proposed framework significantly reduces computational cost compared to conventional calibration approaches and provides a probabilistic characterization of parameter uncertainty. These results highlight the potential of CNN-based surrogate models as scalable tools for Bayesian inference in large-scale hydraulic modeling and flood risk assessment.
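
As a sketch of the inference step, the snippet below embeds a cheap stand-in emulator (a one-parameter analytic function, not a CNN) in a random-walk Metropolis sampler, with an inflated error variance as a crude placeholder for the Gaussian Process discrepancy term; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the trained CNN emulator: maps a roughness value to water depths
# (a real surrogate would map spatially distributed Manning's n to 2D depth fields)
def emulator(n_manning):
    x = np.linspace(0.0, 1.0, 50)
    return 2.0 + 3.0 * n_manning * np.exp(-x)

n_true = 0.035
obs = emulator(n_true) + rng.normal(0.0, 0.05, 50)  # synthetic depth observations

def log_post(n):
    if not 0.01 < n < 0.10:            # uniform prior bounds on Manning's n
        return -np.inf
    resid = obs - emulator(n)
    var = 0.05**2 + 0.03**2            # obs noise + (simplified) discrepancy variance
    return -0.5 * np.sum(resid**2) / var

# Random-walk Metropolis over Manning's n
chain, n_cur, lp_cur = [], 0.05, log_post(0.05)
for _ in range(5000):
    n_prop = n_cur + rng.normal(0.0, 0.002)
    lp_prop = log_post(n_prop)
    if np.log(rng.uniform()) < lp_prop - lp_cur:
        n_cur, lp_cur = n_prop, lp_prop
    chain.append(n_cur)

post = np.array(chain[1000:])
print(f"posterior mean n = {post.mean():.3f}")
```

The study's GP discrepancy term would replace the fixed inflated variance with a full covariance over the depth field, estimated jointly with the roughness parameters.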

How to cite: Zevallos Ruiz, J. A.: Bayesian calibration of a 2D hydraulic model using a CNN-based surrogate emulator under data-scarce conditions, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1947, https://doi.org/10.5194/egusphere-egu26-1947, 2026.

12:00–12:10
|
EGU26-3699
|
ECS
|
On-site presentation
Hu Haoxin

Alternative models constituting the model space of Bayesian model averaging (BMA) in groundwater/surface water modeling are rarely independent. Using uniform prior weights can therefore overweight models with similar structures and bias the posterior model weights and BMA predictions. This study applied a correlation matrix R to measure the correlations among alternative models, and two weighting schemes based on R, namely cos-square (CS) and capped eigenvalue (CE), were used to dilute the models' prior weights. Additionally, an effective model number (Neff) metric derived from R was proposed to measure the effectiveness of the BMA model set. Based on two real-world cases (snowmelt runoff modeling and groundwater modeling) and a synthetic groundwater case, we validated the importance of prior weight dilution and the value of the R-based methods in improving BMA prediction. The results demonstrate that the prior weight dilution schemes redistribute the models' prior weights by penalizing highly correlated models while rewarding those with relatively independent structures. BMA predictive performance improves under the weight dilution schemes, with the CS scheme outperforming the CE scheme. In addition, the correlation matrix provides insight into the rationality of the model structures in the BMA model set, and Neff can serve as an effective tool for quantifying the effectiveness of the model set, providing an important reference for updating the model set and improving BMA predictions with prior weight dilution schemes.
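
The role of R can be illustrated with a toy model set. The dilution rule and the Neff formula below (an eigenvalue participation ratio) are simple stand-ins chosen for illustration only, not the CS/CE schemes or the Neff metric of the study:

```python
import numpy as np

# Illustrative correlation matrix R for four alternative models:
# models 1-3 share similar structures; model 4 is relatively independent
R = np.array([[1.0, 0.9, 0.9, 0.2],
              [0.9, 1.0, 0.9, 0.2],
              [0.9, 0.9, 1.0, 0.2],
              [0.2, 0.2, 0.2, 1.0]])

# One simple dilution rule (not the paper's CS/CE schemes): down-weight each
# model by its total correlation with the whole set, then renormalize
w = 1.0 / np.abs(R).sum(axis=1)
w /= w.sum()

# Effective model number via the eigenvalue participation ratio (an assumed
# proxy here; the study derives Neff from R differently)
lam = np.linalg.eigvalsh(R)
n_eff = lam.sum()**2 / np.sum(lam**2)

print(w.round(3), round(n_eff, 2))
```

The diluted weights give the structurally independent model 4 a larger prior weight than each of the three near-duplicates, and Neff falls well below the nominal count of four models.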

How to cite: Haoxin, H.: Prior weight dilution in Bayesian model averaging for groundwater modeling, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3699, https://doi.org/10.5194/egusphere-egu26-3699, 2026.

12:10–12:20
|
EGU26-1851
|
ECS
|
On-site presentation
Antoine Di Ciacca, Hugo Delottier, Marie Coudène, Tom Narth, Landon Halloran, Géraldine Bullinger, and Philip Brunner

Mechanistic soil water and heat transport models are important tools for quantifying and predicting exchange fluxes between the atmosphere, vegetation, soils, and aquifers. For instance, these models are essential for assessing how soils help mitigate flood and heatwave risks, particularly in urban environments. Mechanistic soil water and heat transport models rely on parameters characterising the soils' hydraulic and thermal properties. The estimation of these parameters through inverse modelling and the quantification of associated uncertainties is challenging due to the non-linear nature of the processes and computational demands of the simulations. However, recent advances in inverse modelling algorithms such as Iterative Ensemble Smoothers (IES) allow us to handle highly parameterised, non-linear models while keeping the number of model runs relatively small (~1000).  These advances not only keep the computational demand for inverse modelling and parameter estimation tractable, but in addition allow for the quantitative assessment of data worth of available or planned observations, which can be used to increase the efficiency of experimental designs. In this work, we tested a Levenberg-Marquardt form of IES, a relatively novel method increasingly used in reservoir and groundwater modelling, with a mechanistic soil water and heat transport model. We firstly generated reference values of soil moisture, temperature and fluxes using three synthetic models representing different characteristic soil profiles with contrasting parameter values. We simulated a calibration period representing our planned field experiments, consisting of two infiltration tests with warm and cold water, and a prediction period including a heat wave and an extreme rainfall event. Secondly, we used the IES algorithm to history-match the model-generated “observations” of soil moisture and temperature at six depths for the calibration period. 
We finally evaluated the algorithm's ability to estimate the reference parameters, as well as predict soil moisture, temperature, and fluxes. The results show that the posterior distributions obtained with the IES algorithm are consistent with the reference values for all parameters and predictions considered. Furthermore, the relatively small number of runs required (< 10,000) allowed us to perform parameter estimation and uncertainty quantification across different experimental scenarios, thereby quantifying their data worth and optimising our experimental design. The synthetic approaches formed the basis for simulating water and heat transport in three real-world urban soils and assessing the worth of temperature measurements in tracing water and heat fluxes. IES algorithms have strong potential to become standard tools for vadose zone modelling, and the insights gained from our study offer a solid foundation for their effective application.
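
The core of an ensemble smoother update can be sketched in a few lines. The snippet uses a toy linear forward model and a single plain ES update; the Levenberg-Marquardt IES adds iteration and damping, and the study's forward model is a soil water and heat transport simulator rather than this stand-in:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model standing in for the soil water/heat simulator:
# two parameters -> three "observations"
def forward(theta):
    return np.array([theta[0] + theta[1], theta[0] * 2.0, theta[1] * 3.0])

theta_true = np.array([1.0, 2.0])
sd_obs = 0.1
d_obs = forward(theta_true) + rng.normal(0.0, sd_obs, 3)

# Prior ensemble (the study uses ~1000 members for a highly parameterised model)
ne = 500
Theta = rng.normal(0.0, 1.0, (ne, 2)) + np.array([0.5, 1.5])
D = np.array([forward(t) for t in Theta])

# One smoother update: K = C_md (C_dd + R)^-1, theta_a = theta + K (d_pert - d)
C_md = np.cov(Theta.T, D.T)[:2, 2:]
C_dd = np.cov(D.T)
K = C_md @ np.linalg.inv(C_dd + sd_obs**2 * np.eye(3))
D_pert = d_obs + rng.normal(0.0, sd_obs, (ne, 3))
Theta_a = Theta + (D_pert - D) @ K.T

print("posterior mean:", Theta_a.mean(axis=0).round(2))
```

Because the ensemble covariances are estimated from the members themselves, no adjoint of the forward model is needed, which is what keeps the run count tractable.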

How to cite: Di Ciacca, A., Delottier, H., Coudène, M., Narth, T., Halloran, L., Bullinger, G., and Brunner, P.: Parameter estimation, uncertainty analysis and data worth assessment of soil water and heat transport models using an Iterative Ensemble Smoother, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1851, https://doi.org/10.5194/egusphere-egu26-1851, 2026.

12:20–12:30

Posters on site: Tue, 5 May, 14:00–15:45 | Hall A

The posters scheduled for on-site presentation are only visible in the poster hall in Vienna. If authors uploaded their presentation files, these files are linked from the abstracts below.
Display time: Tue, 5 May, 14:00–18:00
Chairpersons: Thomas Wöhling, Cristina Prieto, Cécile Coulon
A.1
|
EGU26-14178
Jeremy White

The traditional groundwater modeling approach of manual calibration with a handful of parameters and ad hoc one-at-a-time sensitivity analysis is giving way to formal data assimilation and uncertainty estimation, where the natural very-high dimensionality of the inverse problem is embraced.  In theory, this is an improvement for applied groundwater modeling, and, more importantly, the management of groundwater resources.  However, this transition is not without hardship.  Many new concepts, skills, and techniques must be learned to effectively and efficiently assimilate many kinds of information and to ultimately provide robust estimates of predictive uncertainty in an applied groundwater modeling setting, where time and budget pressures are real. 

This talk will present some foundational concepts surrounding predictive groundwater modeling, including the roles of model complexity, data, and uncertainty. The talk will include discussion of apparent trends in the groundwater modeling industry, with a few examples of modern applied predictive groundwater modeling.

How to cite: White, J.: Predictive groundwater modeling and uncertainty estimation in practice, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14178, https://doi.org/10.5194/egusphere-egu26-14178, 2026.

A.2
|
EGU26-3076
|
ECS
Stefania Scheurer, Riccardo Frenner, Tim Brünnette, Sergey Oladyshkin, and Wolfgang Nowak

The Finite‑Volume Neural Network (FINN) merges the rigor of classical numerical discretizations with the flexibility of artificial neural networks (ANNs) to uncover unknown terms or parameters in partially unknown partial differential equations (PDEs). While this hybrid framework enhances flexibility and interpretability, the highly parameterized ANN makes uncertainty quantification (UQ) of the identified PDE components both demanding and computationally expensive, especially when conventional Bayesian approaches rely on costly Markov Chain Monte Carlo sampling. To address this, we introduce a computationally efficient, Machine Learning (ML)‑assisted inference‑with‑UQ scheme that yields confidence intervals for the PDE components learned by FINN. The procedure consists of data‑driven bootstrapping of the available observations and repeated training of FINN on each resampled set. We illustrate the method on the retardation factor of a diffusion‑sorption PDE, showing that it produces trustworthy interval estimates while markedly lowering the computational burden.
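
The bootstrap procedure can be illustrated independently of FINN. Below, each "retraining" is replaced by a least-squares fit of a linear sorption-like relation so that the sketch stays runnable; the resampling and percentile-interval logic is the same in spirit, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "observations" of a sorption-like relation y = 1 + k*c
c = np.linspace(0.1, 1.0, 40)
k_true = 2.5
y = 1.0 + k_true * c + rng.normal(0.0, 0.1, c.size)

# Stand-in for retraining FINN: each bootstrap fit is a least-squares fit
def fit_slope(ci, yi):
    A = np.column_stack([np.ones_like(ci), ci])
    return np.linalg.lstsq(A, yi, rcond=None)[0][1]  # slope estimate of k

# Data-driven bootstrap: resample observation pairs, refit, collect estimates
ks = []
for _ in range(500):
    idx = rng.integers(0, c.size, c.size)
    ks.append(fit_slope(c[idx], y[idx]))

lo, hi = np.percentile(ks, [2.5, 97.5])
print(f"95% bootstrap CI for k: [{lo:.2f}, {hi:.2f}]")
```

The computational saving in the actual scheme comes from amortising the repeated trainings with the ML-assisted inference step rather than running full MCMC.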

How to cite: Scheurer, S., Frenner, R., Brünnette, T., Oladyshkin, S., and Nowak, W.: Efficient Uncertainty Quantification for Physics-Aware Machine Learning of Diffusion-Sorption Models, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3076, https://doi.org/10.5194/egusphere-egu26-3076, 2026.

A.3
|
EGU26-4306
|
ECS
Mert Çetin Ekiz, Maria Clementina Caputo, Lorenzo De Carlo, Antonietta Celeste Turturro, Manuel Sapiano, Luke Galea, and Michael Schembri

In coastal aquifers, a main objective of groundwater management is often to determine sustainable pumping rates that avoid seawater intrusion and well salinization. Models can be used to understand and forecast the behavior of such aquifers. A necessary step in modeling is calibration, and this process carries uncertainty. Understanding how uncertainties affect water management is crucial for providing water utilities with the necessary information about well salinization risk. Model uncertainty can be reduced by new observation types and locations. One of the most common observation types is the groundwater head observation in an observation well. However, the number of observation wells is limited by constraints such as budget. A first-order second-moment (FOSM) approach was chosen for data worth analysis, considering parameter and observation uncertainty. It was implemented for a real-world island aquifer in the Pwales CA (Malta). The flow model was developed using MODFLOW6, and linear uncertainty analysis was carried out via PEST. FOSM was conducted to find optimum groundwater head observation locations that reduce forecast uncertainty. The workflow used scripts for reproducibility and was adapted to the high-dimensional model.
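
A minimal FOSM data worth calculation looks as follows; the Jacobians, prior covariance, and candidate well sensitivity are invented for illustration and bear no relation to the Pwales model:

```python
import numpy as np

# Linearised sensitivities of two existing head observations (J) and of the
# forecast (j_f) with respect to two parameters; all numbers are invented
J = np.array([[1.0, 0.2],
              [0.8, 0.4]])
j_f = np.array([0.5, 1.0])
C_p = np.eye(2)        # prior parameter covariance
sd = 0.1               # head observation noise (standard deviation)

def forecast_var(J_obs):
    # FOSM: C_post = (J^T R^-1 J + C_p^-1)^-1, forecast var = j_f C_post j_f^T
    R_inv = np.eye(J_obs.shape[0]) / sd**2
    C_post = np.linalg.inv(J_obs.T @ R_inv @ J_obs + np.linalg.inv(C_p))
    return float(j_f @ C_post @ j_f)

base = forecast_var(J)
# Data worth of a candidate well with (assumed) sensitivity [0.1, 0.9]
with_new = forecast_var(np.vstack([J, [0.1, 0.9]]))
print(f"forecast variance reduction: {1 - with_new / base:.1%}")
```

Ranking candidate locations by this variance reduction is exactly the kind of loop PEST-style linear analysis automates over every potential observation.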

How to cite: Ekiz, M. Ç., Caputo, M. C., De Carlo, L., Turturro, A. C., Sapiano, M., Galea, L., and Schembri, M.: Utilizing Linear Uncertainty Analysis for Potential Observation Well Locations in the Pwales Aquifer, Malta, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-4306, https://doi.org/10.5194/egusphere-egu26-4306, 2026.

A.4
|
EGU26-5697
|
ECS
Zoé Petitjean, Alexandre Pryet, Olivier Atteia, Marc Kham, Mathieu Couplet, Raphaël Lamouroux, and Frédéric Lalbat

Connected, highly permeable subsurface features act as preferential flow paths and strongly influence solute transport. Transport models are frequently used for risk assessment and mitigation and should properly account for these connected structures to provide unbiased and informative predictions.

The parameterization of hydraulic properties typically relies on multi-Gaussian distributions, which poorly integrate geological knowledge into the prior. These distributions allow for flexible and efficient data assimilation of process variables (heads, concentrations), but may lead to unrealistic geology and an insufficient description of connectivity. Alternatively, detailed descriptions of heterogeneities can be obtained with advanced geostatistical methods, such as multiple-point statistics. These methods account for geological knowledge and can be conditioned to geological or geophysical data. Unfortunately, they are hardly compatible with a data assimilation process and therefore usually fail to match observed data.

To address this issue, we compared several approaches capable of integrating both geological knowledge and observation data. We employed different parameterization strategies by using pilot points (de Marsily, 1978), adopting a facies-like representation of the subsurface with truncated pluri-Gaussian simulations (Matheron et al., 1987), or inserting structures whose positions can be adjusted during parameter estimation (Khambhammettu et al., 2020). We also implemented data space inversion (Delottier et al., 2023), which bypasses parameter estimation and focuses on the link between observations and forecasts.

The approaches are tested with a transport model considering a contaminant migration scenario in a synthetic alluvial aquifer with permeable channels. The predictions of interest are the mass flow, peak time, and total mass of contaminant reaching the river. Results show that a compromise must be found between a simple but effective parameterization, which can struggle to represent connectivity, and a more detailed one, which is more difficult to make consistent with observations.
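
As an illustration of the pilot-point idea referenced above, the sketch below interpolates log-conductivity from a handful of adjustable points to a grid; inverse-distance weighting is used here in place of the kriging normally applied, and all values are hypothetical:

```python
import numpy as np

# Pilot-point parameterization sketch (after de Marsily, 1978): hydraulic
# conductivity is estimated at a few points and interpolated to the grid.
# Simple inverse-distance weighting stands in for kriging here.
pp_xy = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.8]])  # pilot-point locations
pp_logk = np.array([-4.0, -2.0, -3.0])                  # adjustable log10-K values

def interp_logk(x, y, power=2.0):
    d = np.hypot(pp_xy[:, 0] - x, pp_xy[:, 1] - y)
    if d.min() < 1e-12:
        return pp_logk[d.argmin()]
    w = 1.0 / d**power
    return np.sum(w * pp_logk) / w.sum()

# Fill a coarse grid; a parameter-estimation loop would adjust pp_logk
grid = np.array([[interp_logk(x, y)
                  for x in np.linspace(0, 1, 5)]
                 for y in np.linspace(0, 1, 5)])
print(grid.round(2))
```

The smoothness of such fields is precisely why they struggle with connected channels, motivating the facies-based and structure-insertion alternatives compared in the study.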

How to cite: Petitjean, Z., Pryet, A., Atteia, O., Kham, M., Couplet, M., Lamouroux, R., and Lalbat, F.: Adding geological knowledge in transport models to improve their predictive capacity for risk mitigation in heterogeneous aquifers, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-5697, https://doi.org/10.5194/egusphere-egu26-5697, 2026.

A.5
|
EGU26-7856
Mariaines Di Dato, Maria Grazia Zanoni, Diego Avesani, Filippo Di Marco, and Alberto Bellin

Hydrological models are a fundamental tool for supporting water resources management; yet, model predictions are often affected by high uncertainty. In addition to other sources of uncertainty, models of snow-dominated catchments must also cope with representing snow accumulation and melt. Snow controls discharge by storing winter precipitation and releasing it during melt periods, thereby regulating the timing and magnitude of streamflow. 

A widely used approach for representing snow processes is the degree-day model, which estimates snow accumulation and melt based on air temperature thresholds and a melting factor. Due to the scarcity of in situ snow observations, the degree-day parameters are commonly inferred by calibrating the discharge within a hydrological model, thereby exacerbating parameter equifinality.

In this study, we quantify the impact of constraining degree-day model parameters with MODIS, a multispectral satellite sensor that provides near-daily global observations of snow cover extent. The constrained calibration framework results in an overall improvement in discharge performance and a significant reduction in parameter uncertainty, including for the non-snow-related parameters of the hydrological model. These results underscore the importance of integrating satellite-based snow information to mitigate equifinality and enhance the robustness of hydrological modeling in alpine environments.
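
The degree-day scheme mentioned above is compact enough to sketch directly; the threshold and melt-factor values are illustrative, not the calibrated ones:

```python
import numpy as np

def degree_day(precip, temp, t_thresh=0.0, ddf=3.0):
    """Degree-day snow model: precipitation accumulates as snow when
    T <= t_thresh; melt proceeds at ddf (mm/degC/day) above the threshold,
    limited by the available snow water equivalent (SWE)."""
    swe, melt = 0.0, np.zeros_like(temp)
    for i, (p, t) in enumerate(zip(precip, temp)):
        if t <= t_thresh:
            swe += p                                 # snowfall accumulates
        potential = ddf * max(t - t_thresh, 0.0)     # degree-day melt
        melt[i] = min(potential, swe)
        swe -= melt[i]
    return melt

# Cold, snowy days followed by a warm spell: melt release lags the input
precip = np.array([10.0, 5.0, 0.0, 0.0, 0.0, 0.0, 0.0])
temp = np.array([-3.0, -1.0, 2.0, 4.0, 5.0, 6.0, 7.0])
print(degree_day(precip, temp))
```

With only discharge to calibrate against, t_thresh and ddf trade off against each other; a MODIS snow-cover time series constrains the simulated SWE evolution directly, which is what breaks the equifinality.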

How to cite: Di Dato, M., Zanoni, M. G., Avesani, D., Di Marco, F., and Bellin, A.: Reducing parameter uncertainty in hydrological modeling using MODIS-derived constraints, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7856, https://doi.org/10.5194/egusphere-egu26-7856, 2026.

A.6
|
EGU26-8930
|
ECS
Miyu Kajita, Mariko Saito, Yasuhiro Tawara, Shun Kurihara, and Hidenori Okamoto

For beverage manufacturers that utilize local water resources, it is crucial to understand future physical water risks caused by climate change to ensure business continuity and appropriate disclosure to investors, customers, and other stakeholders. Recently, free and widely used water risk screening tools have become available, such as Aqueduct 4.0 (WRI, 2023) and Water Risk Filter (WWF, 2024). However, because of their global-scale design, they may not accurately represent site-specific hydrological water cycle processes, including local water use conditions and surface water-groundwater interactions. Thus, their outputs may not be consistent with historical experience or local perceptions.

Watershed modeling is a useful tool for representing local hydrological water cycle processes and quantitatively evaluating future water risks. Nevertheless, watershed model parameters are inherently uncertain, and future prediction simulations with a single parameter set may lead to either underestimation or overestimation of water risk metrics. Therefore, in order to assess water risk more effectively, it is necessary to develop a framework that can identify feasible parameter combinations (multiple solutions) while taking into account parameter uncertainty.

In this study, we developed a watershed model that considers parameter uncertainty to quantify future physical water risks in the Phu Sai River Basin, Rayong Province, Thailand (approximately 280 km2), where Aqueduct 4.0 indicates increasing water risks. The watershed modeling tool GETFLOWS, which can simulate surface water and groundwater flow simultaneously, was applied.

The required data for watershed modeling was classified into hydrological observations, meteorology, land use/land cover, topography, geology, and water use. Our primary source of data was public data, including global datasets for meteorology, land use/land cover, and topography, as well as Thai government datasets for geology and water use. Regarding hydrological observation data used for model validation, in addition to existing data released by the Thai government, field measurements of river discharge and groundwater levels were conducted to improve model accuracy. Model performance for 2015–2025 was evaluated using the Nash–Sutcliffe efficiency (NSE), root mean square error (RMSE), and correlation coefficient.

The first step was to identify a parameter set that can accurately reproduce the hydrological observations through manual calibration. The next step was to create realistic parameter ranges and conduct sensitivity analyses to extract parameters that have a significant impact on simulated river discharge and groundwater levels. For the selected parameters, we generated 100 parameter combinations using Latin hypercube sampling and ultimately identified two parameter sets that showed high agreement with hydrological observations based on the NSE, RMSE, and correlation coefficient. Obtaining multiple solutions could help us evaluate the spread of future water risk predictions caused by parameter uncertainty.

In future work, we plan to conduct future prediction simulations using climate projection datasets such as NEX-GDDP-CMIP6 v2.0 (NASA, 2025) and to quantitatively evaluate future physical water risks based on risk assessment metrics such as required river discharge and groundwater level thresholds.
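
The sampling and scoring steps can be sketched as follows, with a trivial linear "watershed model" standing in for GETFLOWS; the hand-rolled Latin hypercube and NSE implementations are generic, and all parameter ranges are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

def latin_hypercube(n, dims):
    # One stratified sample per interval in each dimension, independently
    # permuted per column
    u = np.empty((n, dims))
    for j in range(dims):
        u[:, j] = (rng.permutation(n) + rng.random(n)) / n
    return u

def nse(obs, sim):
    # Nash-Sutcliffe efficiency: 1 - SSE / variance of observations
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

# Toy "watershed model": discharge = a * rain + b (stand-in for GETFLOWS)
rain = rng.gamma(2.0, 2.0, 100)
q_obs = 1.5 * rain + 0.8 + rng.normal(0.0, 0.5, 100)

samples = latin_hypercube(100, 2)
a = 0.5 + samples[:, 0] * 2.0     # a in [0.5, 2.5]
b = samples[:, 1] * 2.0           # b in [0.0, 2.0]
scores = np.array([nse(q_obs, ai * rain + bi) for ai, bi in zip(a, b)])
best = scores.argsort()[-2:]      # retain the best-performing parameter sets
print("best NSE:", scores[best].round(3))
```

Retaining several near-equivalent parameter sets, as done in the study, turns a single deterministic projection into a small ensemble that spans the parameter-induced spread of future risk estimates.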

How to cite: Kajita, M., Saito, M., Tawara, Y., Kurihara, S., and Okamoto, H.: Watershed modeling considering parameter uncertainty to evaluate future physical water risks, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-8930, https://doi.org/10.5194/egusphere-egu26-8930, 2026.

A.7
|
EGU26-16031
|
ECS
Sanggon Jeong, Hyunho Jeon, Wanyub Kim, Junhyuk Jeong, and Minha Choi

Rapid urbanization and the intensification of extreme rainfall have increased urban flood risk, thereby strengthening the demand for hydrological modeling to support the design and operation of urban drainage systems. The U.S. EPA Storm Water Management Model (SWMM) has been widely adopted for urban rainfall–runoff simulation. However, reliable application has been hindered by the need to calibrate numerous site-specific parameters that represent catchment and drainage-network characteristics. Trial-and-error calibration has been time-consuming and difficult to reproduce, whereas evolutionary algorithm-based auto calibration has often required thousands of model evaluations and can be computationally prohibitive. Although machine learning-based surrogate calibration and Bayesian optimization (BO) have been explored to reduce computational burden, SWMM auto calibration that incorporates dimensionality reduction for multi-site, multi-event water-level time series has remained limited. This study proposes a hybrid auto-calibration framework integrating Principal Component Analysis (PCA), Light Gradient Boosting Machine (LightGBM), and Gaussian process-based BO for multi-site, multi-event water-level calibration. Key parameters were identified through Latin Hypercube Sampling (LHS) and Partial Rank Correlation Coefficient analysis (PRCC), and water-level time series were projected onto a low-dimensional principal-component space. Three strategies were compared: inverse LightGBM mapping (PCs → θ), direct GP-BO (θ → J), and a Hybrid approach combining both. The Hybrid strategy achieved performance comparable to direct BO while reducing SWMM evaluations by approximately 40%, demonstrating improved computational efficiency for identifying influential parameters in urban drainage networks.
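
The dimensionality-reduction step can be illustrated with plain SVD-based PCA; the synthetic water-level matrix below stands in for SWMM output, and its two-mode structure is an assumption made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stacked multi-site water-level time series (50 events x 120 time steps),
# generated here from two latent modes as a stand-in for SWMM simulations
t = np.linspace(0, 1, 120)
modes = np.vstack([np.sin(2 * np.pi * t), np.exp(-5 * t)])
X = rng.normal(0.0, 1.0, (50, 2)) @ modes + rng.normal(0.0, 0.05, (50, 120))

# PCA by SVD: project each series onto a low-dimensional PC space, so the
# calibration target becomes a few scores instead of the full time series
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
pcs = Xc @ Vt[:2].T                     # two scores summarise each series

print(f"variance captured by 2 PCs: {explained[:2].sum():.3f}")
```

Working in the score space is what allows both the inverse LightGBM mapping (PCs to parameters) and the GP-BO objective to stay low-dimensional despite multi-site, multi-event targets.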

Keywords: SWMM, GP-BO, Automatic calibration, LightGBM, PCA, Urban drainage

Acknowledgment

This research was supported by the BK21 FOUR (Fostering Outstanding Universities for Research) funded by the Ministry of Education (MOE, Korea) and National Research Foundation of Korea (NRF). This work is financially supported by Korea Ministry of Land, Infrastructure and Transport (MOLIT) as 「Innovative Talent Education Program for Smart City」. This work was supported by Korea Environment Industry & Technology Institute (KEITI) through Research and Development on the Technology for Securing the Water Resources Stability in Response to Future Change Project, funded by Korea Ministry of Climate, Energy and Environment(MCEE)(RS-2024-00332300). This work was supported by Korea Environment Industry & Technology Institute(KEITI) through Technology development project to optimize planning, operation, and maintenance of urban flood control facilities, funded by Korea Ministry of Climate, Energy and Environment(MCEE)(RS-2024-00398012).

How to cite: Jeong, S., Jeon, H., Kim, W., Jeong, J., and Choi, M.: PCA-guided Automatic Calibration of SWMM with Inverse LightGBM and Gaussian Process-based Bayesian Optimization, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-16031, https://doi.org/10.5194/egusphere-egu26-16031, 2026.

A.8
|
EGU26-18396
|
ECS
Axel Giboulot and Christophe Ancey
The most accurate way of measuring water discharge in a mountain stream is to use a concrete structure (like a Parshall flume or a weir), so that the bed elevation is known and hyporheic flow is substantially reduced. This technique is costly, provides only a point measurement, and may be overtopped or damaged during floods. An alternative is offered by non-intrusive monitoring techniques, which do not come into contact with the flow.

Non-intrusive monitoring techniques do not measure a river's discharge directly; instead, they infer it from observable features of the free surface.
While empirical relations (e.g. rating curves or velocity profiles) for flows over impermeable walls are widely accepted to infer the discharge from the surface conditions, they fall short in mountain streams where the bed is coarse and permeable. As a result, discharge monitoring in mountain streams, both under normal conditions and during floods, remains inaccurate and calls for a new modeling framework.

Given that free-surface velocity measurements can be noisy and flow parameters (depth, eddy viscosity, porosity) are known with poor precision, we suggest using Bayesian inference to estimate the discharge from the surface velocity while explicitly accounting for parameter uncertainty through prior information.

The proposed approach will be tested using a laboratory flume experiment conducted at the bed roughness scale, based on a refractive index matched scanning (RIMS) setup. This experimental configuration enables direct access to bed porosity and the full velocity field by using glass beads as a sediment analogue and matching the refractive indices of the solid and fluid phases to eliminate optical distortion.

How to cite: Giboulot, A. and Ancey, C.: Relating free-surface flow and discharge in mountain streams, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18396, https://doi.org/10.5194/egusphere-egu26-18396, 2026.

A.9
|
EGU26-18621
Tobias Houska

Uncertainty in hydrological modeling has an important effect on the reliability of predictions, for example of droughts and floods. Uncertainty quantification can expose parameter sensitivities and structural flaws, enabling better calibration and robust risk assessments. However, the difficulty of selecting and implementing suitable methods often stands in the way of good research and thus good model results.

Building on a decade of community work, SPOTPY has evolved into a widely used tool for covering a wide range of peer-reviewed hydrological calibration, uncertainty, and sensitivity analysis techniques. Here, the latest advances in research and practice are presented, featuring an expanded set of optimization algorithms, hydrological performance metrics, and high-throughput workflows that make rigorous parameter exploration accessible, ranging from desktop studies to large computing clusters.

The new release strengthens SPOTPY’s role as a “single entry point” for testing alternative calibration strategies for any hydrological or ecohydrological model. A redesigned model interface simplifies the coupling of external models (from simple conceptual bucket models to fully distributed land‑surface models), while improved I/O handling and database backends streamline storage of millions of simulations for posterior analysis. The availability of global and local optimization methods has been extended and harmonized: alongside classic algorithms such as SCE‑UA, DREAM, ROPE and Monte Carlo sampling, users can now flexibly switch between multi‑objective and single‑objective formulations and customize stopping criteria to balance convergence and computational cost.

For performance evaluation, SPOTPY now offers an enriched library of objective functions and hydrological signatures tailored to discharge and ecohydrological time series, from the classical Kling–Gupta Efficiency (parametric and non‑parametric) to signature-based flow-percentile indicators. All metrics are fully integrated into calibration, sensitivity analysis, and uncertainty assessment workflows so that users can, for example, calibrate against traditional goodness‑of‑fit metrics while simultaneously tracking regime‑oriented diagnostics that are critical for low‑flow, flood, or water‑quality applications. Recent case studies demonstrate how these capabilities help quantify trade‑offs between parameter identifiability and process realism in hydrological models under changing climate and land‑use conditions, for which an overview will be presented.

Furthermore, the new features will be illustrated through real-world hydrological applications, highlighting practical guidance on algorithm choice, the diagnostic use of hydrological signatures, and robust uncertainty communication. However, as not every model produces the expected result on the first try, a discussion ground will be provided for problems that are frequently encountered in hydrological modeling.

How to cite: Houska, T.: There is sense in every model: Discover it with SPOTPY, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18621, https://doi.org/10.5194/egusphere-egu26-18621, 2026.

A.10
|
EGU26-18895
|
ECS
Priyanshu Jain and Manne Janga Reddy

The optimal sensor placement (OSP) within water distribution networks (WDNs) is a critical research area, driven by the need for effective leak detection, localization, and data-driven decision support. Due to the large scale, complex topology, and uncertain hydraulic behavior of WDNs, identifying sensor locations that are both informative and spatially representative remains a challenging task. This study proposes an information-entropy and clustering-based framework for optimal sensor placement, aimed at enhancing leak detection and localization through machine learning in smart water networks.

Hydraulic pressure data are generated using EPANET by introducing a single leak at a time, modeled as an additional demand at each demand node in the network. The pressure signal at each node is treated as a random variable, and its probability distribution is estimated using a histogram-based approach. Shannon information entropy is employed to quantify the uncertainty and sensitivity of nodal pressure responses, where nodes with higher entropy are considered more informative and responsive to system disturbances such as leaks. Mutual information is incorporated to compute shared information between candidate sensor locations. By penalizing nodes that exhibit high redundancy with previously selected sensors, the proposed framework ensures that each sensor contributes unique and complementary information. Furthermore, to guarantee spatially distributed sensors and network-wide coverage, spectral clustering is applied using nodal coordinates and elevation (X, Y, Z), partitioning the network into geographically coherent clusters. Sensors are placed within each cluster at the nodes with maximum penalized entropy score. A greedy search algorithm is employed to maximize this score. Consequently, this integrated framework effectively balances information maximization, redundancy reduction, and spatial representativeness.

The methodology is validated on the benchmark Modena network, a medium-sized gravity-fed WDN consisting of 268 demand nodes, 317 pipes, and four reservoirs. The performance of the information-entropy and clustering-based sensor placement framework was evaluated using spatial classification accuracy metrics at varied distance tolerances (0 m to 500 m). The multilayer perceptron (MLP) based leak localization model trained using data from the information-entropy and clustering-based OSP at nodes {7, 42, 162, 228, 257} achieved training accuracy of 97.62% and test accuracy of 85.73%. Spatial accuracy results further validate robustness, with localization accuracies of 93.94% within 100 m, improving to 97.95% and 99.72% within 200 m and 500 m tolerance, respectively. Robust performance was maintained even after introducing noise into the data; however, under noisy conditions, the use of spatial accuracy metrics is recommended to effectively predict the leak zone rather than exact node locations. The high spatial accuracies demonstrate the framework's effectiveness for machine learning–based predictive analytics. The framework supports informed decision-making and provides an efficient solution for smart water network monitoring and management.
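
The entropy-plus-redundancy selection logic can be sketched as below; correlation is used as a simple stand-in for mutual information, the penalty weight is an arbitrary assumption, and the random pressure matrix replaces actual EPANET leak simulations:

```python
import numpy as np

rng = np.random.default_rng(7)

# Leak-scenario pressure matrix: rows = leak scenarios, cols = candidate nodes
# (a stand-in for EPANET runs with one added leak demand per scenario)
n_scen, n_nodes = 200, 12
P = rng.normal(0.0, 1.0, (n_scen, 4)) @ rng.normal(0.0, 1.0, (4, n_nodes))

def entropy(x, bins=10):
    # Shannon entropy of a nodal pressure signal via a histogram estimate
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log(p))

H = np.array([entropy(P[:, j]) for j in range(n_nodes)])
C = np.abs(np.corrcoef(P.T))           # redundancy proxy between node pairs

# Greedy selection: maximise entropy penalised by similarity to chosen nodes
chosen = [int(H.argmax())]
while len(chosen) < 4:
    penalty = C[:, chosen].max(axis=1)
    score = H - 2.0 * penalty          # penalty weight is an assumption
    score[chosen] = -np.inf
    chosen.append(int(score.argmax()))

print("selected sensor nodes:", chosen)
```

In the full framework, mutual information replaces the correlation proxy and spectral clustering on the (X, Y, Z) coordinates additionally forces the selected nodes to be spatially spread across the network.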

How to cite: Jain, P. and Reddy, M. J.: Information entropy and clustering-based sensor placement in water distribution networks for leak detection, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18895, https://doi.org/10.5194/egusphere-egu26-18895, 2026.

A.11
|
EGU26-20073
|
ECS
Luca Lombardo and Alberto Viglione
Conceptual hydrological models are widely used in both theoretical investigations and operational applications. Their flexibility and relative ease of implementation have contributed to their success in the last decades. Despite their widespread use, conceptual hydrological models are still predominantly applied in a deterministic form (D-model), without explicitly accounting for the inherent uncertainties affecting their predictions. To address this limitation, a broad range of uncertainty estimation methods has been proposed in the literature, spanning from simple resampling techniques to more complex Bayesian frameworks, from approaches that explicitly separate different sources of uncertainty, to methods that aggregate all sources into a single error term. The common objective of these approaches is the transition from a deterministic D-model to a probabilistic or stochastic representation (S-model), typically expressed through an ensemble of model output predictions.
Recently, Koutsoyiannis and Montanari (2022) introduced an innovative methodology, applied to river discharge model outputs, to tackle this problem, departing from the traditional residual-based paradigm adopted by most existing approaches. Their method, known as BLUECAT, instead exploits the dependence structure between D-model predictions and observed discharge, providing a local, data-driven characterization of predictive uncertainty. While the non-parametric nature of BLUECAT offers important advantages, it also entails intrinsic limitations, particularly in the representation of uncertainty near the extremes of the discharge distribution, especially when limited discharge records are available.
This contribution builds upon the original BLUECAT framework by proposing a conceptually equivalent, yet operationally novel, parametric post-processing approach for conceptual rainfall–runoff uncertainty estimation. The method relies on the use of parametric copula models to describe the joint dependence between D-model predictions and discharge observations, enabling the analytical derivation of conditional predictive distributions. This formulation provides an elegant solution to several limitations of the non-parametric approach, including the definition of confidence bands in proximity to extreme flows. In addition, a second parametric variant specifically tailored to high-flow regimes is introduced, allowing for a focused characterization of uncertainty within a restricted range of D-model discharge predictions.
The proposed methods are evaluated through a comparative study over 24 mountainous catchments in the Piedmont region (north-western Italy), considering both calibration and validation periods. The analysis includes reliability metrics for confidence bounds as well as performance indicators for key ensemble properties, such as the ensemble median. The results indicate that the parametric approaches, when short observation records are available, generally yield more robust and reliable uncertainty estimates during validation compared to the original non-parametric BLUECAT. Furthermore, the high-flow-tailored approach outperforms both the non-parametric method and the parametric approach applied over the full discharge range when focusing on extreme flows, thereby improving uncertainty quantification in high-risk hydrological scenarios. 
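The core idea of a copula-based conditional predictive distribution can be illustrated with a minimal sketch. The abstract does not specify the copula family, so a bivariate Gaussian copula with empirical (rank-based) marginals is assumed here, and the calibration pairs are synthetic rather than the Piedmont data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic calibration pairs: D-model predictions s and observed discharge q
s = rng.gamma(2.0, 5.0, 2000)
q = s * rng.lognormal(0.0, 0.3, 2000)   # multiplicative observation error

def normal_scores(x):
    """Map a sample to standard-normal scores via its empirical CDF."""
    return stats.norm.ppf(stats.rankdata(x) / (len(x) + 1))

# Gaussian-copula dependence parameter from the normal scores
rho = np.corrcoef(normal_scores(s), normal_scores(q))[0, 1]

def conditional_quantiles(s_new, probs):
    """Quantiles of q given a new D-model prediction s_new, using the
    Gaussian-copula conditional N(rho*z, 1 - rho**2) in score space,
    mapped back through the empirical quantile function of q."""
    p = np.clip(stats.percentileofscore(s, s_new), 0.05, 99.95) / 100
    z = stats.norm.ppf(p)
    u = stats.norm.cdf(rho * z + np.sqrt(1 - rho**2) * stats.norm.ppf(np.asarray(probs)))
    return np.quantile(q, u)

lo, med, hi = conditional_quantiles(15.0, [0.05, 0.5, 0.95])
print(f"90% band for q | s=15: [{lo:.1f}, {hi:.1f}], median {med:.1f}")
```

Because the conditional distribution is analytic in score space, confidence bands remain well defined near the extremes of the prediction range, which is the limitation of the non-parametric approach that the parametric variant targets.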
 
Koutsoyiannis, D., & Montanari, A. (2022). Bluecat: A local uncertainty estimator for deterministic simulations and predictions. Water Resources Research, 58, e2021WR031215. https://doi.org/10.1029/2021WR031215

How to cite: Lombardo, L. and Viglione, A.: Beyond deterministic hydrological modelling: a copula-based uncertainty framework, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-20073, https://doi.org/10.5194/egusphere-egu26-20073, 2026.

A.12
|
EGU26-21104
Alexandre Pryet, Carlos Felipe Marín Rivera, Nicole Fernandez, Marc Saltel, Olivier Atteia, Michel Franceschi, Julio Goncalves, Bruno Hamelin, Pierre Deschamp, Adrien Claveau, and Christelle Marlin

The sustainable yield of aquifer systems in sedimentary basins can theoretically be derived from long-term model simulations considering the adverse effects of pumping, such as streamflow depletion, subsidence, or contamination induced by flow reversals. The implementation of a model-based approach for sustainable yield estimation is often challenged by the lack of knowledge on system properties and (paleo-)recharge rates. Specifically, leakage flows through aquitards generally drive the hydrodynamic response of confined aquifers to pumping, but their properties are poorly constrained by typical observational datasets. Dating methods such as radiocarbon have been widely used to infer residence times in confined aquifer systems. However, examples of their use to constrain flow hydrodynamics in multi-layer systems with a state-of-the-art inverse modeling approach remain scarce.

In this study, we investigated the flow dynamics of the Aquitaine Basin in southwest France, which offers an extensive repository of hydrologic and geochemical data spanning several decades. A 2D cross-sectional numerical flow model was developed and extended to simulate reactive transport of radiocarbon. An inverse modeling approach was then implemented to estimate model parameters using observed hydraulic heads and 14C activity. This paves the way to a more rational quantification of the sustainable yield of critical resources for the resilience of water supply in a changing world.

How to cite: Pryet, A., Marín Rivera, C. F., Fernandez, N., Saltel, M., Atteia, O., Franceschi, M., Goncalves, J., Hamelin, B., Deschamp, P., Claveau, A., and Marlin, C.: How can inverse reactive transport modelling of radiocarbon improve the estimation of sustainable yield in confined aquifer systems?, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21104, https://doi.org/10.5194/egusphere-egu26-21104, 2026.

A.13
|
EGU26-21485
Amy Doherty, Emma Woolliams, and Douglas Rao

Observations are key for the development of Earth and environmental systems models. They can be used for initialisation, verification and evaluation, forcing, constraining and benchmarking. Providing the observed variables required by modellers in a useful format with the necessary metadata, including uncertainty characterisation, has traditionally not been seen as the responsibility of the observation providers. This has caused a mismatch between what is provided and what is required, leading to proxy climate data such as reanalyses being widely used in place of true observations. To enhance the uptake and usability of observational datasets, they should be easy to access, provided in an easy-to-use format, clearly documented, and accompanied by detailed metadata.

The Working Group on Observations for Researching Climate (WGORC) was set up in 2025 by the World Climate Research Programme (WCRP) Earth System Modelling and Observations (ESMO) core project. WGORC kicked off in December 2025 and is focused on improving the use of observations throughout climate science, addressing the mismatch mentioned above and ensuring the correct use and application of observations in model development.

ESMO working groups function through the activities of panels and task teams which are set up in response to identified needs. WGORC has one existing panel and will be looking to set up at least two more based on the outcomes of the scoping activities currently underway.

The initial WGORC focus areas include:

* Characterisation and communication of observational uncertainties 

* Use of observations in machine learning applications for climate 

* Observations to better understand and model extreme weather and climate events 

* Data rescue and recovery of historical climate observations 

The existing panel, obs4MIPs, is an ongoing community-driven initiative to provide observational datasets in a format that supports the benchmarking and evaluation of Earth System Models as part of the Coupled Model Intercomparison Project (CMIP).

This presentation will describe WGORC in detail, outline the scoping activities in the four focus areas, and summarise the ongoing work of the existing panel, obs4MIPs, which is currently investigating how to expand its offering to include point observations alongside gridded products, together with associated uncertainty data for each dataset. It will also discuss how the community can stay engaged with WGORC and ESMO activities and be consulted during the gathering of user requirements for climate observations.

How to cite: Doherty, A., Woolliams, E., and Rao, D.: Community needs for Observations to constrain Earth System Models, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21485, https://doi.org/10.5194/egusphere-egu26-21485, 2026.

Posters virtual: Fri, 8 May, 14:00–18:00 | vPoster spot A

The posters scheduled for virtual presentation are given in a hybrid format for on-site presentation, followed by virtual discussions on Zoom. Attendees are asked to meet the authors during the scheduled presentation & discussion time for live video chats; onsite attendees are invited to visit the virtual poster sessions at the vPoster spots (equal to PICO spots). If authors uploaded their presentation files, these files are also linked from the abstracts below. The button to access the Zoom meeting appears just before the time block starts.
Discussion time: Fri, 8 May, 16:15–18:00
Display time: Fri, 8 May, 14:00–18:00

EGU26-8119 | ECS | Posters virtual | VPS11

Knowledge Distillation of PlanetScope Imagery for Metre-Scale Lake Water-Quality Mapping 

Ying Deng, Daiwei Pan, Simon Yang, and Bahram Gharabaghi
Fri, 08 May, 14:00–14:03 (CEST)   vPoster spot A

Effective management of eutrophication in inland lakes requires spatially continuous information on key water-quality variables at management-relevant scales. However, metre-scale mapping of total phosphorus (reported as “Phosphorus, Total”, PPUT; µg/L) remains difficult to achieve using conventional in-situ sampling, and nearshore gradients and tributary plumes are often poorly resolved by medium-resolution satellite sensors. In this study, we exploit multi-generation PlanetScope imagery (Dove Classic, Dove-R, and SuperDove; 3–5 m, near-daily revisit) to develop a hybrid, physics-informed AI framework for PPUT retrieval in Lake Simcoe, Ontario, Canada. PlanetScope surface reflectance is combined with short-term meteorological descriptors (3–7-day aggregates of air temperature, wind speed, precipitation, and sea-level pressure) and in-situ Secchi depth (SSD) to train five ensemble-learning models (HistGradientBoosting, CatBoost, RandomForest, ExtraTrees, and GradientBoosting) across eight feature-group regimes. Inclusion of SSD yields a substantial performance gain, with mean R² increasing from ~0.67 (SSD-free) to ~0.94 (SSD-aware), confirming that vertically integrated optical clarity is the dominant constraint on phosphorus retrieval and cannot be reconstructed from surface reflectance alone. To enable scalable SSD-free monitoring, we implement a teacher–student knowledge-distillation scheme in which an SSD-aware teacher transfers its representation to a student using only satellite and meteorological inputs. The optimal student, based on a compact subset of 40 predictors, achieves R² = 0.83, RMSE = 9.82 µg/L, and MAE = 5.41 µg/L on unseen monitoring stations, and is applied to 2020–2025 PlanetScope scenes to generate metre-scale PPUT maps. 
A 26 July 2024 case demonstrates that >97% of the lake surface remains below 10 µg/L, while rare (<1%) but spatially coherent hotspots >20 µg/L coincide with tributary mouths and narrow channels, highlighting priority areas for management intervention. Although demonstrated here for phosphorus, the PlanetScope–KD framework is model-agnostic with respect to the target variable and can be retrained for other water-quality parameters with optical or hydro-meteorological controls, such as chlorophyll-a, dissolved oxygen, and surface water temperature. This opens a pathway toward unified, high-resolution, multi-parameter lake water-quality prediction to support adaptive monitoring and lake-basin management.
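The teacher–student distillation scheme described above can be sketched in a few lines: an SSD-aware teacher is fitted first, and an SSD-free student is then regressed on the teacher's predictions (soft targets) rather than on the noisy labels. The data below are synthetic stand-ins, not the Lake Simcoe features, and a single gradient-boosting model replaces the authors' five-model ensemble:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 3000
X = rng.normal(size=(n, 6))                          # stand-in for reflectance + meteo predictors
ssd = 3.0 + 0.8 * X[:, 1] + rng.normal(0, 0.5, n)    # Secchi depth, partly predictable from X
y = 2.0 * ssd + X[:, 0] + rng.normal(0, 1.0, n)      # synthetic "total phosphorus"

X_tr, X_te, ssd_tr, ssd_te, y_tr, y_te = train_test_split(X, ssd, y, random_state=0)

# SSD-aware teacher: trained with Secchi depth as an extra feature
teacher = GradientBoostingRegressor(random_state=0)
teacher.fit(np.column_stack([X_tr, ssd_tr]), y_tr)

# Distillation: the SSD-free student learns the teacher's predictions
soft_targets = teacher.predict(np.column_stack([X_tr, ssd_tr]))
student = GradientBoostingRegressor(random_state=0).fit(X_tr, soft_targets)

print(f"teacher R2 (with SSD): {teacher.score(np.column_stack([X_te, ssd_te]), y_te):.2f}")
print(f"student R2 (SSD-free): {student.score(X_te, y_te):.2f}")
```

The student can only recover the part of the SSD signal that is expressible through the satellite and meteorological inputs, which is why the abstract reports a drop from R² ≈ 0.94 (SSD-aware) to R² = 0.83 for the distilled student.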

How to cite: Deng, Y., Pan, D., Yang, S., and Gharabaghi, B.: Knowledge Distillation of PlanetScope Imagery for Metre-Scale Lake Water-Quality Mapping, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-8119, https://doi.org/10.5194/egusphere-egu26-8119, 2026.
