HS3.4 | Advanced Stochastic and Geostatistic Methods for Simulation of Hydrological and Environmental Sciences
EDI
Co-organized by ESSI4/GI2/NP2
Convener: Claus Haslauer | Co-conveners: Fabio Oriani, Mathieu Gravey (ECS), Svenja Fischer, Carolina Guardiola-Albert, Panayiotis Dimitriadis (ECS), Emmanouil Varouchakis (ECS)
Orals | Fri, 08 May, 10:45–12:30 (CEST) | Room 2.31
Posters on site | Attendance Fri, 08 May, 14:00–15:45 (CEST) | Display Fri, 08 May, 14:00–18:00 | Hall A
Posters virtual | Thu, 07 May, 14:06–15:45 (CEST)
vPoster Discussion | vPoster spot A, Thu, 07 May, 16:15–18:00 (CEST)
In recent years, the field of geostatistics has seen significant advances. These methods are fundamental for understanding spatially and temporally variable hydrological and environmental processes, which are vital for risk assessment, for providing inputs to other models, and for managing extreme events such as floods and droughts.
This session aims to provide a comprehensive platform for researchers to present and discuss innovative applications and methodologies of geostatistics and spatio-temporal analysis in hydrology and related fields. The focus is on traditional approaches and the assessment of uncertainties; Machine Learning approaches are covered in their own dedicated sessions.

We invite contributions addressing the following topics (including, but not limited to):

1. Spatio-temporal Analysis of Hydrological and Environmental Anomalies:
- Methods for detecting and analyzing large-scale anomalies in hydrological and environmental data.
- Techniques to manage and predict extreme events based on spatio-temporal patterns.

2. Innovative Geostatistical Applications:
- Advances in spatial and spatio-temporal modeling.
- Applications in spatial reasoning and data mining.
- Reduced computational complexity methods suitable for large-scale problems.

3. Geostatistical Methods for Hydrological Extremes:
- Techniques for analyzing the dynamics of natural events, such as floods, droughts, and morphological changes.
- Utilization of copulas and other statistical tools to identify spatio-temporal relationships.

4. Optimization and Generalization of Spatial Models:
- Approaches to optimize monitoring networks and spatial models.
- Techniques for prediction in regions with limited or no observations, e.g., using physics-based model simulations or secondary variables.

5. Uncertainty Assessment in Geostatistics:
- Methods for characterizing and managing uncertainties in spatial data.
- Applications of Bayesian Geostatistical Analysis and Generalized Extreme Value Distributions.

6. Spatial and Spatio-temporal Covariance Analysis:
- Exploring links between hydrological variables and extremes through covariance analysis.
- Applications of Gaussian and non-Gaussian models in spatial analysis and prediction.

Orals: Fri, 8 May, 10:45–12:30 | Room 2.31

The oral presentations are given in a hybrid format supported by a Zoom meeting featuring on-site and virtual presentations. The button to access the Zoom meeting appears just before the time block starts.
Chairpersons: Claus Haslauer, Svenja Fischer, Carolina Guardiola-Albert
10:45–10:50
Stochastic Modelling
10:50–11:00 | EGU26-7961 | Highlight | On-site presentation
Lionel Benoit

Stochastic rainfall models are probabilistic tools able to simulate synthetic rainfall datasets with statistical properties that resemble those from observations, which makes them particularly suitable to assess the uncertainty of rainfall estimates and to conduct sensitivity analysis of hydro-meteorological modeling chains. When the focus of the modeling is on spatial and temporal patterns, models based on space-time Gaussian random fields (GRFs) are often used because they enable modeling rainfall at any point of the space-time domain from sparse and heterogeneous data (typically observations from a rain gauge network).

In this presentation I will explore how a new model of space-time, multivariate and non-stationary GRFs can be leveraged to improve stochastic rainfall modeling. A parametric transform function is combined with the GRF to account for rainfall intermittency and skewed marginal distributions, resulting in a so-called trans-Gaussian (or meta-Gaussian) model. Among the many applications enabled by this flexible trans-Gaussian model, I will examine how spatial non-stationarity can capture orographic effects, and how multivariate modeling can be used to embed rainfall into a stochastic weather generator comprising five variables (rainfall, temperature, wind, solar radiation and humidity).
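The transform step can be illustrated with a minimal 1-D sketch, assuming an AR(1) Gaussian series as a stand-in for the space-time GRF and a simple censored-exponential transform; all parameter values (`rho`, `p_dry`, `scale`) are illustrative choices, not taken from the abstract.

```python
import numpy as np
from statistics import NormalDist

def trans_gaussian_rain(n, rho=0.8, p_dry=0.6, scale=2.0, seed=0):
    """Toy trans-Gaussian rainfall series.

    An AR(1) Gaussian series stands in for the space-time GRF; values
    below a threshold are censored to zero (intermittency) and the rest
    are mapped through an exponential transform (skewed wet intensities).
    """
    rng = np.random.default_rng(seed)
    z = np.empty(n)
    z[0] = rng.standard_normal()
    for t in range(1, n):
        z[t] = rho * z[t - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()
    thr = NormalDist().inv_cdf(p_dry)        # marginal P(dry) = p_dry
    return np.where(z <= thr, 0.0, scale * (np.exp(z - thr) - 1.0))
```

By construction the simulated series reproduces the target dry fraction and a right-skewed wet-intensity distribution, the two properties the parametric transform is meant to capture.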

How to cite: Benoit, L.: Stochastic rainfall modeling using spatio-temporal, multivariate and nonstationary trans-Gaussian random fields, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7961, https://doi.org/10.5194/egusphere-egu26-7961, 2026.

11:00–11:10 | EGU26-13518 | On-site presentation
Gregor Laaha and Johannes Laimighofer

Trends in annual low-flow time series are central to water resources and drought management, yet estimates are strongly affected by serial persistence, and dependence can make persistence appear as trend. We compare nonparametric and parametric methods under short-term autocorrelation and long-term persistence (LTP) and evaluate their reliability with European streamflow data and simulation-based experiments.

For short-term autocorrelation, modified Mann–Kendall approaches with block-bootstrap-based significance correction (BBSMK) and simultaneous bias-corrected prewhitening yield robust results; alternative variants inflate significance and produce implausible findings. Parametric ARIMAX models indicate that, when analyses are based on the water year, only a small share of series require higher autoregressive orders, whereas calendar-year aggregation induces more complex correlation structures and, in turn, unreliable (too low) significance rates.

Under long-term dependence, the nonparametric Mann–Kendall–LTP approach markedly lowers the fraction of significant trends, while FARIMAX models (external trend + LTP) produce similar rates to BBSMK. Yet AIC-based selection typically replaces LTP with short-term autocorrelation, indicating that what appears as persistence is often explainable by short-range dependence.

We finally assess misclassification in parametric and nonparametric trend models under LTP using nature-based simulations across record lengths. Calibrated to stream-gauge records, the simulations test whether series with deterministic trends and short-term autocorrelation—but without true LTP—are misclassified as LTP, and how such misclassification biases trend estimates. Across four scenarios (high/low LTP × significant/non-significant trend), LTP misclassification and trend-detection errors are elevated: with a trend present, short-term autocorrelation is often mistaken for LTP, biasing estimates and reducing power. At hydrologically typical record lengths, errors remain substantial, declining only for extremely long series (1,000–10,000 years); misclassification of short-term correlation as LTP persists even then.

Overall, under common record lengths and dependence structures, deterministic trends are often misinterpreted as long-term persistence—and, conversely, genuine persistence can be mistaken for trend. Therefore, LTP-based trend analyses should be interpreted with caution; typical hydrological records are too short for reliable LTP inference.
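For readers unfamiliar with the baseline machinery, a minimal sketch of the Mann–Kendall test with textbook trend-free prewhitening (ties ignored; this is not the authors' BBSMK or bias-corrected variants) might look like:

```python
import numpy as np

def mann_kendall_z(x):
    """Mann–Kendall Z statistic (ties ignored, for illustration)."""
    x = np.asarray(x, float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / np.sqrt(var)
    if s < 0:
        return (s + 1) / np.sqrt(var)
    return 0.0

def tfpw_z(x):
    """Trend-free prewhitening: remove the Sen's-slope trend, strip the
    lag-1 autocorrelation of the detrended series, re-add the trend,
    and test the result with Mann–Kendall."""
    x = np.asarray(x, float)
    n = len(x)
    t = np.arange(n)
    b = np.median([(x[j] - x[i]) / (j - i)
                   for i in range(n) for j in range(i + 1, n)])
    d = x - b * t                             # detrended series
    r1 = np.corrcoef(d[:-1], d[1:])[0, 1]     # lag-1 autocorrelation
    y = d[1:] - r1 * d[:-1] + b * t[1:]       # prewhitened + trend re-added
    return mann_kendall_z(y)
```

The abstract's point is precisely that such simple prewhitening can be unreliable under long-term persistence, where the estimated r1 confounds trend and dependence.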

How to cite: Laaha, G. and Laimighofer, J.: Trend or persistence: what are we really detecting in annual low-flow time series?, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-13518, https://doi.org/10.5194/egusphere-egu26-13518, 2026.

11:10–11:20 | EGU26-1406 | ECS | On-site presentation
Jonathan Frank, Thomas Suesse, Shijie Jiang, and Alexander Brenning

Decisions concerning the management of natural resources are often based on binary criteria that determine whether a specific environmental target is met or exceeded. A common example is the designation of “polluted” areas, where mitigation measures must be implemented once concentrations surpass a regulatory threshold. In practice, maps of such exceedances are commonly derived from regionalized concentration estimates. However, most conventional spatial interpolation and prediction procedures introduce systematic bias in the estimated extent of polluted areas.

To overcome this issue, we apply a bias-corrected mapping procedure that is compatible with any geostatistical or machine learning method capable of providing valid probability estimates. For the case study, we mainly focus on a trans-Gaussian regression-kriging (TRGK) framework, selected for its interpretability and transparent decomposition of predictions. To assess the potential added value of nonparametric approaches, we additionally compare TRGK with quantile regression forest (QRF) in a sub-region.

The TRGK model follows a structured, non-stationary design: (i) raw concentrations are transformed to log10 scale; (ii) a nationwide global linear model captures broad-scale relationships; (iii) major hydrogeological districts serve as units for local linear refinements to account for non-stationarity; (iv) residuals are transformed using a Gaussian anamorphosis; and (v) the transformed residuals are interpolated via ordinary kriging, from which probability estimates are derived. This setup improves flexibility while maintaining interpretability and coherent uncertainty quantification.

Bias correction is performed by estimating the total exceedance area implied by the data and determining a calibrated probability threshold that ensures an unbiased delineation of the polluted area. In this study, we jointly evaluate a threshold exceedance criterion and a temporal trend criterion.

Groundwater nitrate mapping at national scale represents a challenging test case due to strong non-normality, spatial heterogeneity, and pronounced non-stationarity. The approach nonetheless performs robustly. Linear model components exhibit R2 values between 0.15 and 0.62, while semivariogram practical ranges vary from 0.3 to 22.3 km. In the sub-region comparison, QRF showed a small discrimination advantage over TRGK (AUC 0.86 vs. 0.82) but relied more heavily on calibration (underestimation without calibration 94.9% vs. 5.1%).

Overall, the results demonstrate that the bias-corrected probability-based framework provides a flexible, robust and, when coupled with geostatistics, transparent solution for large-scale pollution mapping.
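The bias-correction idea of matching the mapped exceedance area to the data-implied total can be sketched as follows; using the sum of exceedance probabilities as the target-area estimator is a simplifying assumption of this sketch, not necessarily the estimator used in the study.

```python
import numpy as np

def calibrated_threshold(p_exceed):
    """Choose a probability cutoff so that the number of cells mapped as
    exceeding matches the total exceedance area implied by the
    probabilities themselves (their sum).  Ties at the cutoff may
    inflate the mapped count slightly."""
    p = np.asarray(p_exceed, float)
    target = int(round(p.sum()))              # expected number of exceeding cells
    if target <= 0:
        return 1.0
    return np.sort(p)[::-1][min(target, len(p)) - 1]
```

Classifying cells with p >= cutoff then yields an approximately unbiased exceedance area, unlike the naive p >= 0.5 rule, which systematically under- or overestimates the polluted extent.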

How to cite: Frank, J., Suesse, T., Jiang, S., and Brenning, A.: Bias-corrected pollution mapping with non-stationary geostatistics and spatial machine learning for environmental decision making: The case of groundwater nitrate, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1406, https://doi.org/10.5194/egusphere-egu26-1406, 2026.

11:20–11:30 | EGU26-12514 | ECS | On-site presentation
Chun Zhou, Li Zhou, Luca Brocca, and Dui Huang

Precipitation serves as a critical link between climate and hydrology, with variability shaped by environmental factors that regulate satellite detection under complex conditions. Physical response mechanisms under varying temperature, soil moisture, and pressure remain insufficiently assessed. Using global gauge precipitation and ERA5-Land reanalysis data, we identified HIT, MISS, FALSE events and examined their differential responses to key environmental variables. We demonstrate that HIT events tend to occur under intermediate environmental conditions, with both products sharing similar responses but GSMaP exhibiting slightly smoother temperature signals and IMERG stronger soil-moisture-related variability. MISS events, linked to colder, wetter backgrounds, are associated with larger spread, while FALSE events arise mainly in warm, dry regimes with low soil moisture and more fluctuations in IMERG. Environmental factors modulate detection, with warmer and wetter conditions favoring HIT and suppressing FALSE, while pressure plays a weaker, secondary role. These findings support satellite-based global hydrology and climate-resilience assessment.
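The HIT/MISS/FALSE event classification is a standard contingency split of gauge versus satellite wet/dry states; a minimal sketch (the 0.1 mm wet/dry threshold is an assumed convention, not stated in the abstract):

```python
import numpy as np

def classify_detection(gauge_rain, sat_rain, thresh=0.1):
    """Label each time step by comparing gauge and satellite rainfall
    against a wet/dry threshold: HIT (both wet), MISS (gauge wet only),
    FALSE (satellite wet only), NEG (both dry)."""
    g = np.asarray(gauge_rain) >= thresh
    s = np.asarray(sat_rain) >= thresh
    return np.where(g & s, "HIT",
           np.where(g & ~s, "MISS",
           np.where(~g & s, "FALSE", "NEG")))
```

The abstract's analysis conditions these labels on environmental covariates (temperature, soil moisture, pressure) from ERA5-Land.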

How to cite: Zhou, C., Zhou, L., Brocca, L., and Huang, D.: How environmental conditions influence satellite detection of rainfall events, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-12514, https://doi.org/10.5194/egusphere-egu26-12514, 2026.

11:30–11:40 | EGU26-1813 | On-site presentation
Cristina Prieto, Dmitri Kavetski, Fabrizio Fenicia, James Kirchner, David McInerney, Mark Thyer, and César Álvarez

Statistical residual error modelling for hourly streamflow predictions

Cristina Prieto (1,2,3), Dmitri Kavetski (4,1), Fabrizio Fenicia (3), James Kirchner (2,5,6), David McInerney (4), Mark Thyer (4), and César Álvarez (1)

(1) IHCantabria—Instituto de Hidráulica Ambiental de la Universidad de Cantabria, Santander, Spain; (2) Department of Environmental Systems Science, ETH Zürich, Zürich, Switzerland; (3) Eawag, Swiss Federal Institute of Aquatic Science and Technology, Dübendorf, Switzerland; (4) School of Civil, Environmental and Mining Engineering, University of Adelaide, Adelaide, SA, Australia; (5) Swiss Federal Research Institute WSL, Birmensdorf, Switzerland; (6) Department of Earth and Planetary Science, University of California, Berkeley, California, USA

Water plays a critical role in societal stability through both its excess and scarcity. Extreme hydrological events can cause substantial human and economic losses, while water scarcity affects essential services such as drinking water supply, food production, and hydropower generation. Reliable streamflow predictions are therefore fundamental for environmental assessments, flood risk management, and Integrated Water Resources Management (IWRM).

Hydrological models are central tools for understanding catchment behaviour and generating predictions to support water-resources assessment, planning, and management. However, their predictive performance strongly depends on the temporal resolution at which they are applied.

At hourly time scales, hydrological processes and associated uncertainties become markedly more complex, particularly in small and mesoscale catchments. Flood peaks may last only a few hours, so daily streamflow predictions can substantially underestimate peak magnitudes; antecedent wetness conditions can evolve rapidly; and the dominant processes controlling short-term streamflow dynamics differ from those governing longer term behavior. For example, over longer time scales, predictions are primarily constrained by mass balance, whereas short-term predictions depend more strongly on dynamics and flow routing.

In addition to classical sources of uncertainty related to data, model structure, and parameters, hourly streamflow predictions often exhibit bias, heteroscedasticity, temporal autocorrelation, and non-stationarity.

Despite their importance, hourly streamflow prediction and uncertainty characterisation have received comparatively less attention than daily-scale studies.

In this work, we use a conceptual hydrological model to generate deterministic hourly streamflow predictions and quantify predictive uncertainty using a residual error modelling framework. Case-study catchments include hydrologically diverse basins in Europe and the United States. Bias, heteroscedasticity, and temporal dependence in model residuals are addressed using Box–Cox transformations and autoregressive and moving average (ARMA) models.

Results indicate that a logarithmic transformation combined with an autoregressive model of order three (AR(3)) provides the most consistent performance across catchments. This work advances streamflow prediction by developing statistically rigorous methods for post-processing the residuals of conceptual hydrological models at the hourly time scale, supporting more reliable hourly streamflow predictions for integrated water resources management and decision-making.
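As a simplified illustration of the residual post-processing step, the following sketch fits an AR(p) model to transformed residuals by least squares; it omits the Box–Cox machinery and moving-average terms of the full framework described above.

```python
import numpy as np

def fit_ar(resid, order=3):
    """Least-squares estimate of AR(p) coefficients for a residual
    series (no intercept; a stand-in for the full ARMA machinery)."""
    r = np.asarray(resid, float)
    n = len(r)
    # Column k holds the lag-(k+1) values aligned with targets r[order:]
    X = np.column_stack([r[order - 1 - k: n - 1 - k] for k in range(order)])
    phi, *_ = np.linalg.lstsq(X, r[order:], rcond=None)
    return phi
```

In the abstract's setting, `resid` would be the log-transformed streamflow residuals and the preferred configuration corresponds to `order=3`.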

How to cite: Prieto, C., Kavetski, D., Fenicia, F., Kirchner, J., McInerney, D., Thyer, M., and Álvarez, C.: Residual error modelling for hourly streamflow predictions, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1813, https://doi.org/10.5194/egusphere-egu26-1813, 2026.

Geostatistics
11:40–11:50 | EGU26-5236 | On-site presentation
Michael Schutte, Leonardo Olivetti, and Gabriele Messori

Scientific publications in the geosciences routinely assess statistical significance in spatially distributed environmental and geophysical data. When statistical significance is indicated, it is most often assessed independently at each grid point, while formal adjustment for multiple testing is rarely applied. However, applying multiple testing corrections, such as the global false discovery rate (FDR) approach, is not always straightforward, as environmental and geophysical data are often spatially correlated.

In our work, we highlight how neglecting multiple testing correction can substantially inflate the number of false positives. We further show that commonly used FDR implementations can yield counterintuitive and potentially misleading results when applied to strongly spatially correlated data.

To illustrate the latter point, we provide an example based on near-surface air temperature composites following sudden stratospheric warmings. We first show that when anomalies are spatially coherent, restricting the spatial domain can increase the FDR-adjusted significance threshold. As a result, the same underlying field may display a larger share of statistically significant grid points solely due to domain selection. We analyze the origin of this behavior from a rank-based perspective and discuss its implications for spatial inference and uncertainty quantification in environmental sciences.

Based on these insights, we propose practical recommendations for robust and transparent significance assessment, such as spatially aggregated or spatially aware alternatives. Our results highlight both the need to account for multiple-testing and potential issues with a naïve application and interpretation of FDR correction. While illustrated using atmospheric data, the findings are directly relevant to hydrology and other environmental sciences where statistical significance is assessed across spatial fields.
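For reference, the standard Benjamini–Hochberg FDR step-up procedure that the abstract revisits can be sketched as:

```python
import numpy as np

def bh_fdr(pvals, alpha=0.1):
    """Benjamini–Hochberg step-up procedure: returns a boolean mask of
    rejected (significant) tests, controlling the FDR at level alpha
    (under independence or positive dependence)."""
    p = np.asarray(pvals, float)
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m  # step-up thresholds
    below = p[order] <= thresh
    mask = np.zeros(m, bool)
    if below.any():
        kmax = np.max(np.nonzero(below)[0])   # largest k passing its threshold
        mask[order[: kmax + 1]] = True
    return mask
```

Note how the effective threshold `alpha * k / m` depends on the total number of tests m: shrinking the spatial domain changes m and hence which grid points pass, which is the domain-selection effect discussed above.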

How to cite: Schutte, M., Olivetti, L., and Messori, G.: Which grid points are statistically significant? Revisiting false discovery rate correction in geospatial data, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-5236, https://doi.org/10.5194/egusphere-egu26-5236, 2026.

11:50–12:00 | EGU26-1351 | ECS | On-site presentation
Felix Henkel, Jonathan Frank, Thomas Suesse, and Alexander Brenning

The expansion and optimisation of environmental monitoring networks require the efficient use of limited resources to improve spatial predictions and thereby protect human health and ecosystems.

Network densification is a spatial sampling problem that is often addressed by pointwise-prediction uncertainty approaches, which ignore (1) the impact of a new site on its neighbourhood and (2) the binary decision task motivating the monitoring. Active learning (AL) is a machine learning technique that iteratively selects new locations based on the current maximum uncertainty in the available training data. We therefore recast network densification as an AL task and propose model-agnostic acquisition criteria, including a decision-aligned focal logit criterion that prioritises neighbourhoods whose exceedance probabilities lie near regulatory thresholds. A look-ahead criterion based on the expected reduction in prediction standard error (SE) is also examined. In a groundwater nitrate concentration case study, the focal logit criterion consistently selected more informative sites than traditional dispersion- or prediction-SE-based criteria, yielding up to 58% greater gains in exceedance-mapping accuracy (Cohen’s κ). Focal logit and SE criteria outperformed pointwise counterparts by ~45% on average, while the look-ahead criterion performed well but at much higher computational cost.

The proposed framework is simple, generalisable to other environmental pollutants (such as air pollutants), and supports a transparent, decision-oriented monitoring design.
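The abstract does not fully specify the focal logit criterion; one plausible reading, prioritising candidate sites whose exceedance probability is closest to the decision boundary in logit space, can be sketched as:

```python
import numpy as np

def focal_logit_select(p_exceed, k=1):
    """Pick the k candidate sites whose exceedance probability is
    closest to the decision boundary (p = 0.5, i.e. smallest |logit p|).
    One plausible reading of the criterion, not the authors' exact
    formulation; probabilities are clipped for numerical safety."""
    p = np.clip(np.asarray(p_exceed, float), 1e-6, 1 - 1e-6)
    score = np.abs(np.log(p / (1 - p)))       # |logit(p)|
    return np.argsort(score)[:k]
```

Sites with near-certain classifications (p close to 0 or 1) are deprioritised, concentrating new measurements where the exceedance decision is still uncertain.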

How to cite: Henkel, F., Frank, J., Suesse, T., and Brenning, A.: Geostatistical active learning for expanding monitoring networks for environmental decision making, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1351, https://doi.org/10.5194/egusphere-egu26-1351, 2026.

12:00–12:10 | EGU26-3193 | ECS | On-site presentation
Olivia L. Walbert, Frederik J. Simons, Arthur P. Guillaumin, and Sofia C. Olhede

Spatial data in the Earth and environmental sciences acquired by instrument collection or simulation are constrained to finite, discrete, (ir)regular grids whose geometry is delineated by a boundary within which missingness, either random or structured, may exist. We model (ir)regularly sampled Cartesian spatial data as realizations of discrete two- and three-dimensional random fields whose covariance structure we estimate parametrically with a spectral-domain maximum-likelihood estimation strategy using the debiased Whittle likelihood, which efficiently counters the effects of aliasing and spectral leakage that arise from finite sampling and boundary effects. We work with the general, flexible Matérn class of covariance functions, which characterizes the shape of a field through three parameters that quantify its amplitude, smoothness, and correlation length. We quantify parameter covariance analytically and asymptotically based on the parametric model and sampling grid alone, agnostic of observed data. Our uncertainty quantification allows us to study how sampling geometry imparts uncertainty on a covariance model and provides a path for optimizing the design of a sampling grid to reduce error for an anticipated model. We formulate several approaches for interrogating our model residuals to interpret where real Earth data depart from the null hypotheses of Gaussianity, stationarity, and isotropy. We explore select case studies that demonstrate the broad applicability of our models across Earth science disciplines and develop software in MATLAB and Python for implementation by domain scientists, in hydrology, and elsewhere.
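The Matérn class mentioned above has simple closed forms for half-integer smoothness; a sketch with a (variance, correlation length, smoothness) parameterization (the √3/√5 scaling is one common convention, assumed here; general ν would require a modified Bessel function, omitted):

```python
import numpy as np

def matern(h, sigma2=1.0, rho=1.0, nu=1.5):
    """Matérn covariance at lag h for the half-integer smoothness values
    with closed forms (nu = 0.5, 1.5, 2.5).  sigma2 = variance (amplitude),
    rho = correlation length, nu = smoothness."""
    h = np.abs(np.asarray(h, float)) / rho
    if nu == 0.5:                             # exponential covariance
        c = np.exp(-h)
    elif nu == 1.5:
        c = (1 + np.sqrt(3) * h) * np.exp(-np.sqrt(3) * h)
    elif nu == 2.5:
        c = (1 + np.sqrt(5) * h + 5 * h**2 / 3) * np.exp(-np.sqrt(5) * h)
    else:
        raise ValueError("closed form only for nu in {0.5, 1.5, 2.5}")
    return sigma2 * c
```

These three parameters (amplitude, smoothness, correlation length) are exactly the quantities whose estimation uncertainty the abstract quantifies as a function of the sampling grid.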

How to cite: Walbert, O. L., Simons, F. J., Guillaumin, A. P., and Olhede, S. C.: Designing Sampling Strategies for the Efficient Estimation of Parameterized Spatial Covariance Models, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3193, https://doi.org/10.5194/egusphere-egu26-3193, 2026.

12:10–12:20 | EGU26-22108 | On-site presentation
Rodrigo Lilla Manzione and Cesar de Oliveira Ferreira Silva

Spatial statistics provides a principled framework for analyzing environmental variables that exhibit spatial dependence, enabling inference and prediction in systems governed by heterogeneous processes. In many hydrogeological applications, the most informative perspective emerges from fusing complementary datasets, for example, sparse groundwater observations and spatially exhaustive remote sensing products. This data fusion is rarely straightforward because data sources often differ in sampling design, uncertainty, and, crucially, spatial support (the area or footprint represented by a measurement). When observations collected at one support are used to predict at another, the change-of-support problem can induce biased variances and degraded predictions if scale effects are ignored. Here, we integrate groundwater levels from a monitoring network with multi-resolution remote sensing covariates to improve groundwater depth mapping while explicitly accounting for support differences. The study targets groundwater level prediction in Southeast Brazil, where relief compartments and land-use patterns generate strong spatial heterogeneity in recharge and water consumption. We combine in situ groundwater table depths observed at 56 monitoring locations with (i) geomorphological information derived from the 30 m TanDEM‑X dataset and (ii) land-surface water consumption represented by 10 m evapotranspiration estimates from SAFER (Simple Algorithm for Evapotranspiration Retrieving). These covariates encode terrain-driven controls and land-use effects that are not fully captured by point measurements alone. Spatial dependence within and across variables is modeled using the Linear Model of Coregionalization (LMC), enabling coherent estimation of direct and cross-variograms. To ensure consistency across supports, we address support homogenization by regularizing point-support variances and cross-structures to a common block support defined on the prediction grid. 
This regularized LMC is then used within a collocated block cokriging (CBCK) framework, which applies collocated block covariates to enhance block-scale groundwater predictions. Model performance demonstrates substantial gains from explicitly treating change of support and incorporating multi-resolution covariates. CBCK yields reliable groundwater depth predictions with root mean squared error (RMSE) of 0.41 m, markedly outperforming ordinary block kriging (OBK) estimations (RMSE = 2.89 m) and improving upon prior CBCK implementations that relied on coarser (500 m) evapotranspiration inputs (RMSE = 0.49 m). Beyond accuracy improvements, the resulting maps better reflect the coupling between land-use water demand, terrain-driven controls, and groundwater levels, supporting groundwater management decisions relevant to agronomic planning and ecosystem sustainability. The proposed methodology is transferable to other aquifer systems and can be adapted to alternative remote sensing products and field measurements to explore climate, land use, and hydrogeology interactions across spatial scales.
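The change-of-support (regularization) step can be illustrated in 1-D: the block-support variogram is the mean point-support variogram between two discretized blocks minus the within-block mean. The exponential variogram and the block discretization below are illustrative stand-ins for the study's LMC structures.

```python
import numpy as np

def exp_variogram(h, sill=1.0, rng_=10.0):
    """Point-support exponential variogram (practical range = rng_)."""
    return sill * (1 - np.exp(-3 * np.abs(h) / rng_))

def regularize_to_block(gamma, block_pts, h):
    """Average-variogram regularization of a point-support variogram to
    block support (1-D sketch): gamma_v(h) = mean gamma between the two
    blocks minus the within-block mean."""
    a = np.asarray(block_pts, float)          # discretized block at the origin
    b = a + h                                 # same block shifted by lag h
    between = np.mean([gamma(abs(x - y)) for x in a for y in b])
    within = np.mean([gamma(abs(x - y)) for x in a for y in a])
    return between - within
```

The regularized sill is lower than the point-support sill, which is the variance reduction that block (co)kriging must account for when point data and gridded covariates live on different supports.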

How to cite: Lilla Manzione, R. and de Oliveira Ferreira Silva, C.: Multi-source data fusion to enhance groundwater levels prediction: merging monitoring networks and orbital remote sensing datasets, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-22108, https://doi.org/10.5194/egusphere-egu26-22108, 2026.

12:20–12:30 | EGU26-7835 | ECS | On-site presentation
Meng Lu and Jiong Wang

High-resolution geospatial prediction and satellite image downscaling are increasingly enabled by advances in machine learning and the availability of fine-scale covariates. However, predicted maps are often delivered on arbitrary grids that are not justified by the sampling density of observations. While uncertainty can be quantified at unobserved locations, the spatial scales over which predictions are supported by the data and the modelling process are typically not characterized. Beyond computational and storage costs, this has critical consequences, including over-interpretation and the modelling of noise; most importantly, the apparent predictive resolution of spatial products can be misleading for downstream applications, potentially affecting scientific conclusions. An example is the use of predicted air pollution maps in health cohort studies to assess exposure–response relationships. This raises a fundamental but under-addressed question: what is the finest spatial resolution at which predictions are meaningfully supported by the data (and model)?

We investigate how to meaningfully determine the predictive resolution of regression models by linking sampling density and model parameters in the frequency domain through spectral analysis. Two challenges are (1) identifying the sampling density in the multi-dimensional feature space, where sampling typically becomes irregular, and (2) relating frequency in the feature space to spatial resolution. Using simulated and real-world geospatial datasets, we show that some arbitrarily selected output resolutions in the existing literature exceed the data-supported predictive resolution and can induce unnoticed biases or change-of-support issues in downstream analyses.
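A crude spatial-domain analogue of the question posed above is a Nyquist-style heuristic: the finest supported grid spacing is on the order of twice the typical nearest-neighbour distance between observations. This is far simpler than the authors' frequency-domain analysis, but conveys the flavour of the argument.

```python
import numpy as np

def supported_resolution(xy):
    """Heuristic for the finest data-supported grid spacing: twice the
    median nearest-neighbour distance (a Nyquist-style argument; the
    authors' spectral criterion is more refined)."""
    xy = np.asarray(xy, float)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude self-distances
    return 2.0 * np.median(d.min(axis=1))
```

Delivering predictions on a grid much finer than this spacing means the map's apparent resolution is supplied by the model and covariates rather than by the observations.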

How to cite: Lu, M. and Wang, J.: Reliable Predictive Resolution in Geospatial Modelling, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7835, https://doi.org/10.5194/egusphere-egu26-7835, 2026.

Posters on site: Fri, 8 May, 14:00–15:45 | Hall A

The posters scheduled for on-site presentation are only visible in the poster hall in Vienna. If authors uploaded their presentation files, these files are linked from the abstracts below.
Display time: Fri, 8 May, 14:00–18:00
Chairpersons: Claus Haslauer, Emmanouil Varouchakis, Mathieu Gravey
general
A.29 | EGU26-1449
Javier Valdes-Abellan, Liangyu Ta, and Chen Yu

Hydrological extreme records in many regions of the world may include observations of different genesis and levels of extremeness, forming a characteristic “separation phenomenon” that limits the effectiveness of traditional distributions such as the Gumbel and log-Pearson Type III models; in such mixed extreme populations, the Two-Component Extreme Value (TCEV) distribution is better suited. However, conventional fitting approaches tend to emphasize the abundant ordinary data because of the scarcity of right-tail observations, which results in inaccurate predictions of high quantiles. Nevertheless, accurate representation of the upper tail (i.e., the high-value range of the cumulative distribution function, CDF) is essential for flood risk evaluation and the design of hydraulic structures. To address this issue, this study introduces a new TCEV fitting approach (SR-MWS) aimed at improving right-tail performance. In the new proposal, the dataset is first approximated by a two-segment piecewise linear regression, and the slope ratio between the two parts (R = S1/S2) is used to assess whether TCEV modeling is appropriate (if R > 1.5, the dataset is regarded as suitable for TCEV fitting). Next, three weighting strategies—linear, quadratic, and exponential—are applied sequentially to obtain the final TCEV parameters. A partitioned scoring framework is then used to select the most suitable weighting scheme, emphasizing the mid-to-upper CDF range F(x) ∈ [0.6, 1.0], which corresponds to return periods from about 2.5 years to more than 200 years, while also considering overall fit quality. Our results show that the proposed method yields more accurate estimates for extreme values than conventional techniques and exhibits consistent performance for both peak-flow and precipitation datasets.
Beyond hydrological applications, it provides an automated and robust tool for modeling extreme events and supporting risk assessment in fields characterized by mixed-population data with a pronounced dog-leg structure.
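The slope-ratio screening step can be sketched as a two-segment fit on the Gumbel probability plot; Weibull plotting positions, an exhaustive breakpoint search, and reading S1 as the upper-tail slope are simplifying assumptions of this sketch.

```python
import numpy as np

def slope_ratio(sample):
    """Two-segment least-squares fit on the Gumbel probability plot,
    returning R = S1/S2 with S1 read as the upper-tail slope."""
    y = np.sort(np.asarray(sample, float))
    n = len(y)
    F = np.arange(1, n + 1) / (n + 1)         # Weibull plotting positions
    x = -np.log(-np.log(F))                   # Gumbel reduced variate
    best = None
    for k in range(3, n - 2):                 # each segment keeps >= 3 points
        p_lo = np.polyfit(x[:k], y[:k], 1)
        p_hi = np.polyfit(x[k:], y[k:], 1)
        sse = (np.sum((y[:k] - np.polyval(p_lo, x[:k])) ** 2)
               + np.sum((y[k:] - np.polyval(p_hi, x[k:])) ** 2))
        if best is None or sse < best[0]:
            best = (sse, p_hi[0], p_lo[0])    # (error, upper slope, lower slope)
    return best[1] / best[2]
```

A value of R well above 1 signals the dog-leg (separation) shape; the abstract uses R > 1.5 as its screening rule.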

How to cite: Valdes-Abellan, J., Ta, L., and Yu, C.: New Proposal for maximum hydrological events fitting showing the ‘separation phenomenon’ with flexible TCEV Distribution, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1449, https://doi.org/10.5194/egusphere-egu26-1449, 2026.

A.30 | EGU26-14278
Peter Gorsevski and Ivica Milevski

This study investigates multilevel flood susceptibility mapping at the national scale in North Macedonia, utilizing 328 historical flood events, 14 conditioning factors derived from a digital elevation model, simplified lithology, and computed direct runoff. The methodology integrates fuzzy set theory (Fuzzy), analytic hierarchy process (AHP), weighted linear combination (WLC), and random forest (RF) approaches. The two-stage process employs distinct sets of conditioning factors in sequential flood susceptibility mapping: first, generating Fuzzy/AHP/WLC predictions and pseudo-absence data, and second, producing five RF predictions by varying pseudo-absences and binary cutoffs. Validation results indicate that the very high susceptibility class (0.8–1.0) of the Fuzzy/AHP/WLC model predicted 46.6% of flood pixels within 31.6% of the total area. In comparison, the very high susceptibility class of the RF models predicted 88.5%, 78.3%, 60.6%, 48.5%, and 28.3% of flood pixels within 54.7%, 42.2%, 30.5%, 27.0%, and 25.1% of the total area, respectively. The RF models achieved area under the curve (AUC) values exceeding 0.850, with a maximum of 0.966. Furthermore, a standard deviation map derived from the RF models identified regions of high and low uncertainty, highlighting areas for potential methodological improvement and targeted sampling. The results also show the promise of the multilevel approach for mapping flood susceptibility and call for more research into its potential for future studies and real-world applications.
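The AUC values reported above can be computed without tracing the ROC curve, via the Mann–Whitney relation; a minimal sketch (`scores_pos` would hold model outputs at flood pixels and `scores_neg` at pseudo-absence pixels; both names are hypothetical):

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann–Whitney relation: the
    probability that a randomly chosen positive (flood) pixel scores
    higher than a randomly chosen negative (pseudo-absence) pixel,
    counting ties as one half."""
    pos = np.asarray(scores_pos, float)
    neg = np.asarray(scores_neg, float)
    gt = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * ties) / (len(pos) * len(neg))
```

Because the negatives are pseudo-absences rather than confirmed non-flood locations, the resulting AUC should be read as a relative ranking score across the candidate models rather than an absolute detection probability.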

How to cite: Gorsevski, P. and Milevski, I.: Multilevel flood susceptibility mapping by fuzzy sets, analytical hierarchy process, weighted linear combination and random forest, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14278, https://doi.org/10.5194/egusphere-egu26-14278, 2026.

A.31
|
EGU26-4505
Ankur Roy and Tapan Mukerji

Spatial and temporal datasets that comprise distributions of events along a transect or timeline, together with their magnitudes, can display scale-dependent changes in persistence or anti-persistence that may contain signatures of underlying physical processes. Lacunarity is a technique originally developed for multiscale analysis of data; it characterizes the distribution of spaces or gaps in a pattern as a function of scale. In this study, we demonstrate how lacunarity may be modified to reveal scale-dependent changes in 1-dimensional data related to fractures, sedimentary layering, and rainfall. To address whether fractures found along a 1-dimensional transect (scanline) occur in clusters, we compare the lacunarity of a given fracture-spacing dataset to a theoretical random lacunarity curve. Further, we introduce the first derivative of log-transformed lacunarity and demonstrate that this function can identify the inter-cluster spacing and possible fractal behaviour over certain scales. We also demonstrate how the same technique may be applied to a time series, e.g., rainfall data, to test whether such events occur in clusters over certain time scales. Next, “event magnitudes” (e.g., fracture aperture) were added to each event data point (e.g., fracture), yielding a 1-dimensional non-binary dataset, and we tested whether the dataset shows scale-dependent changes in terms of anti-persistence and persistence. We introduce the concept of the lacunarity ratio, LR, which is the lacunarity of a given dataset normalized by the lacunarity of its random counterpart. This randomization, however, differs from the one used in the previous technique: in the case of our fracture dataset, for example, the random sequence is generated by leaving the locations of fractures unaltered and randomly reallocating the magnitudes along the dataset.
We demonstrate that LR can successfully delineate scale-dependent changes in terms of anti-persistence and persistence. In addition to the fracture data already mentioned (spacing and apertures from NE Mexico), which was used to develop this technique, it was applied to two other types of data: a set of varved sediments from the Marca Shale and a hundred-year rainfall record from Knoxville, TN, USA. While the fracture data showed anti-persistence at small scales (within clusters) and random behavior at large scales, the rainfall data and varved sediments both appear persistent at small scales, becoming random at larger scales. Such striking similarity between the spatial “sedimentary” data and the time-dependent rainfall data is not surprising because, in rock records, the former is often considered a proxy for the latter. In general, such differences in behavior with respect to scale-dependent changes from anti-persistence to random, persistence to random, or otherwise, may be related to differences in the physicochemical properties and processes contributing to multiscale datasets.
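The gliding-box lacunarity and the lacunarity ratio described above can be sketched in a few lines; the clustered binary sequence below is an illustrative stand-in for a fracture scanline, not data from the study.

```python
import numpy as np

def lacunarity(seq, box_sizes):
    """Gliding-box lacunarity of a 1-D sequence (binary or magnitudes):
    for each box size r, slide a window of length r along the sequence,
    record the box mass M, and return Lambda(r) = <M^2> / <M>^2."""
    seq = np.asarray(seq, dtype=float)
    out = []
    for r in box_sizes:
        masses = np.convolve(seq, np.ones(r), mode="valid")  # moving sums
        out.append(np.mean(masses**2) / np.mean(masses)**2)
    return np.array(out)

def lacunarity_ratio(seq, box_sizes, rng=None):
    """LR: lacunarity of the data normalised by the lacunarity of its
    randomised counterpart (event locations kept, magnitudes reshuffled
    among them, as in the fracture-aperture example above)."""
    rng = rng or np.random.default_rng(0)
    shuffled = np.asarray(seq, dtype=float).copy()
    nz = shuffled != 0
    vals = shuffled[nz]
    rng.shuffle(vals)
    shuffled[nz] = vals
    return lacunarity(seq, box_sizes) / lacunarity(shuffled, box_sizes)

# Clustered binary "fracture" sequence: three clusters along a scanline
clustered = np.zeros(512)
clustered[40:60] = 1; clustered[300:325] = 1; clustered[480:500] = 1
print(lacunarity(clustered, [1, 4, 16, 64]))
```

For clustered data the lacunarity stays well above that of a random sequence until the box size reaches the inter-cluster spacing, which is what the first derivative of the log-transformed curve picks out.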

How to cite: Roy, A. and Mukerji, T.: Identifying Scale-dependent Spatial and Temporal Patterns in Earth Science Data: Lacunarity-based Techniques, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-4505, https://doi.org/10.5194/egusphere-egu26-4505, 2026.

A.32
|
EGU26-9994
|
ECS
Kassandra Jensch, Márk Somogyvári, and Tobias Krüger

Nitrate groundwater pollution threatens the quality of drinking water and is directly linked to intensive fertiliser inputs on agricultural fields. To reduce pollution from agricultural sources, areas with, or at risk of, elevated nitrate concentrations must be designated as Nitrate Vulnerable Zones (NVZs) under the European Nitrates Directive. In Germany, as elsewhere in Europe, the designation of NVZs follows a binary classification scheme that does not account for uncertainties in the underlying data and interpolation method. We present an alternative geostatistical framework that explicitly introduces uncertainties into the established designation framework, enabling a more accurate assessment of nitrate groundwater pollution. Using a Bayesian Gaussian process model, nitrate concentrations in groundwater were predicted across the federal state of Brandenburg, Germany, where nitrate pollution is an acute problem. Our model specifically incorporates measurement errors as well as systematic biases from different observation types. The model allows for the calculation of exceedance probabilities, which provide a continuous representation of nitrate pollution risk across space, relative to the legal nitrate limit of 50 mg/L. We show that the majority of agricultural land in the study area has at least a 50% probability of exceeding this limit. Additionally, measurement errors were identified as the main source of uncertainty in estimated nitrate concentrations, leading to relatively wide posterior predictive distributions. The results indicate that areas with high exceedance probability extend beyond currently designated NVZs. Unlike the established designation workflow, the proposed approach accounts for the complex reality and uncertainty of nitrate pollution in groundwater and can be readily extended to other countries in the EU and beyond.
This enables a more robust and transparent designation of NVZs, and demonstrates the value of explicitly incorporating uncertainty into environmental modelling in high-profile policy settings.
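Once posterior predictive samples are available, the exceedance probability at each location is simply the fraction of draws above the legal limit. A minimal sketch, with lognormal draws standing in for the study's Gaussian-process posterior:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in posterior predictive samples of nitrate concentration (mg/L)
# at 4 locations: rows = posterior draws, columns = locations.  In the
# study these come from a Bayesian Gaussian process; here we mimic them
# with lognormal draws around increasing medians.
medians = np.array([20.0, 45.0, 55.0, 90.0])
samples = medians * np.exp(0.3 * rng.standard_normal((5000, 4)))

LIMIT = 50.0  # legal nitrate limit (mg/L)
exceedance_prob = (samples > LIMIT).mean(axis=0)
print(np.round(exceedance_prob, 3))
```

Note that a location whose median sits just below 50 mg/L can still carry a substantial exceedance probability when the posterior is wide, which is exactly the information a binary designation discards.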

How to cite: Jensch, K., Somogyvári, M., and Krüger, T.: Probabilistic mapping of groundwater nitrate pollution using a Bayesian Gaussian process model, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-9994, https://doi.org/10.5194/egusphere-egu26-9994, 2026.

Streamflow
A.33
|
EGU26-7544
|
ECS
Farshid Alizadeh, Raphael Bunel, Nicolas Lecoq, and Yoann Copard

Integrated landscape-evolution models require a groundwater component that is computationally efficient, remains stable over multidecadal simulations, and couples strongly with surface hydraulics and sediment transport. In CLiDE, which is built on CAESAR–Lisflood, the explicit groundwater update is simple, but as grid resolution or hydraulic diffusivity increases it becomes highly restrictive due to the diffusion-type Courant–Friedrichs–Lewy (CFL) stability constraint. We present a redesign of CLiDE’s groundwater module that provides two complementary pathways: a behavior-preserving optimized explicit solver and a fully implicit formulation based on backward-Euler time integration. The implicit approach uses Picard iteration to handle the nonlinearity of the unconfined transmissivity and solves the resulting sparse symmetric positive-definite systems with a preconditioned conjugate-gradient solver. We benchmark both solvers over 25 years of fully coupled hydro-geomorphic experiments in the 104 km² Orgeval catchment in north-central France, using hourly and daily groundwater coupling intervals. The implicit solver achieves a catchment-scale water mass balance within 0.1% while remaining unconditionally stable at daily time steps and producing solutions comparable to the hourly implicit solution; groundwater head diagnostics typically agree to within 0.01 m. The consistency of outlet hydrographs, inundation patterns, and long-term sediment-export behavior indicates that, in this case, the daily implicit coupling interval can be selected based on process time scales rather than numerical stability. Moreover, the optimized explicit solver accelerates the legacy scheme by a factor of 1.3 to 1.6 through refinements to specific algorithms, with no change in numerical outputs.
Collectively, these advances enhance CLiDE's capability for fully coupled, long-duration simulations and offer a choice between efficiency-oriented explicit updates and robustness-oriented implicit integration.
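The backward-Euler/Picard scheme described above can be sketched in one dimension; this is an illustration of the numerical idea, not CLiDE's code (CLiDE solves the sparse system with a preconditioned conjugate gradient, while this small sketch uses a dense direct solve; all parameter values are invented).

```python
import numpy as np

def implicit_gw_step(h, dt, dx, K, S, recharge=0.0, n_picard=20, tol=1e-10):
    """One backward-Euler step of the 1-D unconfined (Boussinesq) equation
        S dh/dt = d/dx( K h dh/dx ) + R,
    linearised by Picard iteration: the transmissivity T = K*h is frozen
    at the previous iterate, giving a symmetric positive-definite
    tridiagonal system.  Dirichlet heads are held at both ends."""
    n = h.size
    h_new = h.copy()
    for _ in range(n_picard):
        T = K * h_new                       # frozen transmissivity
        Tm = 0.5 * (T[:-1] + T[1:])         # interface (midpoint) values
        A = np.zeros((n, n))
        b = np.zeros(n)
        A[0, 0] = A[-1, -1] = 1.0           # Dirichlet boundary rows
        b[0], b[-1] = h[0], h[-1]
        for i in range(1, n - 1):
            w, e = Tm[i - 1] / dx**2, Tm[i] / dx**2
            A[i, i - 1], A[i, i + 1] = -w, -e
            A[i, i] = S / dt + w + e
            b[i] = S / dt * h[i] + recharge
        h_next = np.linalg.solve(A, b)
        done = np.max(np.abs(h_next - h_new)) < tol
        h_new = h_next
        if done:
            break
    return h_new

# Illustrative heads (m): flat water table draining to fixed boundaries.
h0 = np.full(21, 10.0); h0[0] = h0[-1] = 8.0
h1 = implicit_gw_step(h0, dt=86400.0, dx=100.0, K=1e-4, S=0.1)
```

Because the scheme is unconditionally stable, the daily step (dt = 86400 s) used here needs no CFL restriction; an explicit update with the same diffusivity and a finer grid would.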

How to cite: Alizadeh, F., Bunel, R., Lecoq, N., and Copard, Y.: Toward Stable Groundwater–Surface Water Coupling in Landscape Evolution Models, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7544, https://doi.org/10.5194/egusphere-egu26-7544, 2026.

A.34
|
EGU26-247
Addition of Process-Based Stream Temperature Modeling Capabilities to MODFLOW 6
(withdrawn)
Eric Morway, Katie Fogg, Alden Provost, Christian Langevin, Joseph Hughes, and Martijn Russcher
A.35
|
EGU26-1167
|
ECS
Injila Hamid and Vinayakam Jothiprakash

Hydrological models are vital for understanding water resources and their responses to environmental and climatic changes, but their accuracy depends strongly on input data quality. This study evaluates how noise reduction in meteorological inputs influences the performance of the SWAT hydrological model for the lower Columbia River basin. Wavelet Transform (WT) was applied for partial denoising, while Singular Spectrum Analysis (SSA) was used for both partial and full noise removal. SSA allows extraction of trend, periodic, and noise components individually from time series data. Results indicate that partial denoising using WT significantly improves model performance, increasing the correlation coefficient (r) and Nash–Sutcliffe Efficiency (NSE) by 2 to 5%, Kling-Gupta Efficiency (KGE) by 16%, and reducing RSR by 4%, along with a notable reduction in PBIAS (from −4.7 to +1.3). The partially denoised WT model achieved r = 0.91, NSE = 0.81, PBIAS = 1.30, KGE = 0.88, and RSR = 0.45, outperforming both the base and fully denoised models. The comparative analysis shows that completely removing noise offers limited benefits and may suppress natural variability, while partial denoising provides an optimal balance between data reliability and model precision. These findings highlight the importance of appropriate input-data preprocessing in improving hydrological model performance and reducing uncertainty in water resource assessments.
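The SSA-based partial denoising referred to above can be sketched as follows: embed the series in a trajectory matrix, keep the leading singular components (trend and periodicities), and reconstruct. The sinusoid-plus-noise series and the choice of two components are illustrative, not the study's Columbia River inputs.

```python
import numpy as np

def ssa_denoise(x, window, n_components):
    """Partial denoising by Singular Spectrum Analysis: build the
    trajectory (Hankel) matrix of lagged copies, truncate its SVD to the
    leading components, and reconstruct by anti-diagonal averaging.
    Keeping several components is 'partial' denoising; keeping only the
    trend would approximate full noise removal."""
    n = len(x)
    k = n - window + 1
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_r = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    # diagonal (Hankel) averaging back to a single series
    out = np.zeros(n); counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += X_r[:, j]
        counts[j:j + window] += 1
    return out / counts

t = np.arange(400)
signal = np.sin(2 * np.pi * t / 50)          # periodic "meteorological" signal
rng = np.random.default_rng(1)
noisy = signal + 0.4 * rng.standard_normal(t.size)
denoised = ssa_denoise(noisy, window=100, n_components=2)
```

A pure sinusoid is captured by two singular components, so the truncation here removes most of the noise while preserving the oscillation; on real data, keeping too few components suppresses natural variability, which is the trade-off the abstract highlights.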

How to cite: Hamid, I. and Jothiprakash, V.: Enhancing Streamflow Simulations Through Input Data Denoising, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1167, https://doi.org/10.5194/egusphere-egu26-1167, 2026.

Posters virtual: Thu, 7 May, 14:00–18:00 | vPoster spot A

The posters scheduled for virtual presentation are given in a hybrid format for on-site presentation, followed by virtual discussions on Zoom. Attendees are asked to meet the authors during the scheduled presentation & discussion time for live video chats; onsite attendees are invited to visit the virtual poster sessions at the vPoster spots (equal to PICO spots). If authors uploaded their presentation files, these files are also linked from the abstracts below. The button to access the Zoom meeting appears just before the time block starts.
Discussion time: Thu, 7 May, 16:15–18:00
Display time: Thu, 7 May, 14:00–18:00

EGU26-8394 | ECS | Posters virtual | VPS10

Uncertainty Evaluation of Hydraulic Jumps in Open-Surface Flows 

Simon Cuny, Panayiotis Dimitriadis, Demetris Koutsoyiannis, G.-Fivos Sargentis, and Theano Iliopoulou
Thu, 07 May, 14:06–14:09 (CEST)   vPoster spot A

The hydraulic jump produces some of the largest energy losses encountered in hydraulics. These losses arise during the transition from super-critical to sub-critical flow conditions in open-surface flows. In this study, we focus on a laboratory-scale hydraulic jump, combining experimental measurements with model simulations based on theoretical arguments. The main objective is to identify, quantify, and interpret the uncertainty in both cases through key parameters, such as the (sub/super) critical depths and channel geometry, for various flow conditions, with emphasis on energy dissipation, turbulence, mixing, regime transitions, and flow-stability characteristics.
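The textbook relations behind the quantities named above, for a rectangular channel, can be sketched directly; the depth and Froude number below are illustrative laboratory-scale values, not measurements from the study.

```python
import math

def sequent_depth(y1, fr1):
    """Sub-critical depth conjugate to super-critical depth y1 at upstream
    Froude number Fr1 (Belanger equation, rectangular channel):
    y2 = y1/2 * (sqrt(1 + 8 Fr1^2) - 1)."""
    return 0.5 * y1 * (math.sqrt(1.0 + 8.0 * fr1**2) - 1.0)

def jump_energy_loss(y1, y2):
    """Head loss across the jump, rectangular channel:
    dE = (y2 - y1)^3 / (4 y1 y2)."""
    return (y2 - y1) ** 3 / (4.0 * y1 * y2)

y1, fr1 = 0.05, 4.0          # illustrative upstream depth (m) and Froude number
y2 = sequent_depth(y1, fr1)
dE = jump_energy_loss(y1, y2)
print(f"y2 = {y2:.3f} m, head loss = {dE:.3f} m")
```

Propagating measurement uncertainty in y1 and Fr1 through these two formulas (e.g., by perturbation or Monte Carlo sampling) is one simple way to quantify the uncertainty in the dissipated energy.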

How to cite: Cuny, S., Dimitriadis, P., Koutsoyiannis, D., Sargentis, G.-F., and Iliopoulou, T.: Uncertainty Evaluation of Hydraulic Jumps in Open-Surface Flows, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-8394, https://doi.org/10.5194/egusphere-egu26-8394, 2026.

EGU26-10560 | ECS | Posters virtual | VPS10

Uncertainty Evaluation of Hydraulic Losses in Closed Pipes 

Marine Bourbon, G.-Fivos Sargentis, Theano Iliopoulou, Demetris Koutsoyiannis, and Panayiotis Dimitriadis
Thu, 07 May, 14:09–14:12 (CEST)   vPoster spot A

In a context where energy efficiency is a major concern, studying linear and local head losses in hydraulic networks is essential. These losses, mainly caused by internal fluid friction and network singularities (such as bends, section changes, and valves), have a direct impact on water transport efficiency and management. In this study, we focus on a laboratory-scale hydraulic network, combining experimental measurements with model simulations based on theoretical arguments and the EPANET software. The main objective is to identify, quantify, and interpret the uncertainty in both linear and typical local head losses through key parameters, such as the friction factor and the local-loss coefficient, for various flow conditions.
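The two loss types named above follow the standard Darcy-Weisbach and minor-loss formulas; the pipe dimensions, friction factor, and loss coefficients below are illustrative, not values from the study's network.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def friction_head_loss(f, length, diameter, velocity):
    """Linear (friction) head loss, Darcy-Weisbach:
    h_f = f * (L/D) * V^2 / (2g)."""
    return f * (length / diameter) * velocity**2 / (2.0 * G)

def local_head_loss(k_sum, velocity):
    """Local (minor) head loss from summed loss coefficients:
    h_m = K * V^2 / (2g)."""
    return k_sum * velocity**2 / (2.0 * G)

# Illustrative pipe (NOT the laboratory network of the study):
f, L, D, V = 0.02, 50.0, 0.05, 1.5   # friction factor, length (m), diameter (m), velocity (m/s)
k_bends_valves = 2.5                 # summed local-loss coefficients for singularities
total = friction_head_loss(f, L, D, V) + local_head_loss(k_bends_valves, V)
print(f"total head loss = {total:.3f} m")
```

Because both terms scale with V²/2g, uncertainty in the measured velocity enters the total head loss quadratically, while uncertainty in f and K enters linearly; this separation is what makes the friction factor and local-loss coefficient the natural key parameters for the analysis.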

How to cite: Bourbon, M., Sargentis, G.-F., Iliopoulou, T., Koutsoyiannis, D., and Dimitriadis, P.: Uncertainty Evaluation of Hydraulic Losses in Closed Pipes, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-10560, https://doi.org/10.5194/egusphere-egu26-10560, 2026.
