HS7.2 | Precipitation modelling: uncertainty, variability, and downscaling
Co-organized by AS1/NP3
Convener: Alin Andrei Carsteanu | Co-conveners: Nikolina Ban, Roberto Deidda, Giuseppe Mascaro, Dongkyun Kim
Orals | Tue, 05 May, 14:00–18:00 (CEST) | Room 3.16/17
Posters on site | Attendance Tue, 05 May, 08:30–10:15 (CEST) | Display Tue, 05 May, 08:30–12:30 | Hall A
The statistical characterization and modelling of precipitation are crucial in a variety of applications, such as flood forecasting, water resource assessments, evaluation of climate change impacts, infrastructure design, and hydrological modelling. This session aims to gather contributions on research, advanced applications, and future needs in the understanding and modelling of precipitation, including its variability at different scales and its sources of uncertainty.

Contributions focusing on one or more of the following issues are particularly welcome:
- Process conceptualization and approaches to modelling precipitation at different spatial and temporal scales, including model parameter identification, calibration and regionalisation, and sensitivity analyses to parameterization and scales of process representation.
- Novel studies aimed at the assessment and representation of different sources of uncertainty of precipitation, including natural climate variability and changes caused by global warming.
- Uncertainty and variability in spatially and temporally heterogeneous multi-source ground-based, remotely sensed, and model-derived precipitation products.
- Estimation of precipitation variability and uncertainty at ungauged sites.
- Modelling, forecasting and nowcasting approaches based on ensemble simulations for synthetic representation of precipitation variability and uncertainty.
- Machine-learning approaches for precipitation modelling, forecasting, and downscaling: Machine-learning and hybrid (physics-informed) methods for precipitation simulation, uncertainty quantification, bias correction, and spatio-temporal downscaling, including baseline comparisons, cross-climate transfer tests, and evaluations of explainability and robustness.
- Scaling and scale invariance properties of precipitation fields in space and/or in time.
- Dynamical and statistical downscaling approaches to generate precipitation at fine spatial and temporal scales from coarse-scale information from meteorological and climate models.

Orals: Tue, 5 May, 14:00–18:00 | Room 3.16/17

The oral presentations are given in a hybrid format supported by a Zoom meeting featuring on-site and virtual presentations. The button to access the Zoom meeting appears 15 minutes before the time block starts.
Chairpersons: Nikolina Ban, Alin Andrei Carsteanu
14:00–14:05
14:05–14:15
|
EGU26-16535
|
On-site presentation
Cesar Arturo Sanchez Peña, Francesco Marra, and Marco Marani

Reliable estimates of extreme precipitation are essential for understanding, predicting, and mitigating natural disasters. However, global-scale assessments are limited by the sparse and uneven distribution of ground-based observations. Satellite-based rainfall products provide valuable support for extreme value analysis, but their applicability is constrained by high uncertainty and coarse spatial resolution. The coarse resolution of global datasets (100–600 km² grids) prevents direct comparison with point-scale extreme value estimates, as point and area-averaged statistics differ inherently.

This study addresses this limitation by applying a downscaling approach for extreme-value statistics based on random field theory and the Metastatistical Extreme Value Distribution (MEVD). The method exploits the autocorrelation structure of precipitation fields and is applied to each product at grid cells corresponding to rain gauge locations. Six remote sensing and reanalysis (RSR) products, along with their ensemble, are evaluated using a rain gauge network in Italy.

Downscaled estimates of daily 50-year return period precipitation are compared with corresponding estimates derived from rain gauge time series, considering both individual products and their ensemble median. To further improve the accuracy of satellite maps, two bias correction techniques are applied: quantile mapping and linear regression. The final results show that the ensemble obtained from the median of the RSR products provides the best overall performance.
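The quantile-mapping correction named above can be illustrated with a minimal empirical sketch (the synthetic data and quantile grid are ours for illustration, not the study's actual setup):

```python
import numpy as np

def quantile_map(product, gauge, target):
    """Empirical quantile mapping: map each target value through the
    product's CDF onto the gauge's CDF (illustrative sketch)."""
    qs = np.linspace(0.01, 0.99, 99)
    prod_q = np.quantile(product, qs)    # quantiles of the satellite product
    gauge_q = np.quantile(gauge, qs)     # quantiles of the gauge record
    # interpolate: product value -> non-exceedance probability -> gauge value
    probs = np.interp(target, prod_q, qs)
    return np.interp(probs, qs, gauge_q)

rng = np.random.default_rng(0)
gauge = rng.gamma(0.6, 12.0, 5000)           # synthetic "observed" daily rain
product = 0.7 * rng.gamma(0.6, 12.0, 5000)   # systematically biased "satellite" estimate
corrected = quantile_map(product, gauge, product)
```

After the mapping, the corrected series inherits the gauge distribution, which is why quantile mapping removes systematic multiplicative biases of the kind sketched here.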

This research was supported by the "raINfall exTremEs and their impacts: from the local to the National ScalE" (INTENSE) project, funded by the European Union - Next Generation EU in the framework of PRIN (Progetti di ricerca di Rilevante Interesse Nazionale) programme (grant 2022ZC2522). Marco Marani was also supported by the RETURN Extended Partnership and received funding from the European Union Next-GenerationEU (National Recovery and Resilience Plan – NRRP, Mission 4, Component 2, Investment 1.3 – D.D. 1243 2/8/2022, PE0000005).

How to cite: Sanchez Peña, C. A., Marra, F., and Marani, M.: Estimates of Point Rainfall Extremes from Satellite Precipitation Products: Application and bias correction in Italy, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-16535, https://doi.org/10.5194/egusphere-egu26-16535, 2026.

14:15–14:25
|
EGU26-9128
|
On-site presentation
Marc Schleiss and Auguste Gires

Downscaling of rainfall time series is the process of transforming rainfall data from a coarse temporal resolution (e.g., daily or hourly totals) into finer time scales (e.g., minutes) while preserving key statistical and physical characteristics of the original data. Downscaling techniques are widely used in hydrology, urban drainage design, flood modeling, and climate impact studies where fine-resolution rainfall data are essential for simulating hydrological response and studying the impact of extreme rainfall events.

Numerous stochastic downscaling approaches have been proposed in the literature, including point process models, random cascades, Markov chains, and weather generators, each designed to reproduce specific rainfall characteristics such as intermittency, intensity distributions, and temporal dependence. However, these methods are typically developed and evaluated independently, often using different datasets and climates, which makes it hard to assess their relative strengths and limitations.

This study presents the first joint and systematic comparison of two independently developed, state-of-the-art stochastic rainfall downscaling methods based on random cascades. Specifically, the Standard and Blunt extension cascades derived from the Universal Multifractal (UM) theory are compared with the Equal-Depth Area (EDA) approach. The methods are applied to 300 high-resolution (1-minute) rainfall events in the Netherlands and France, using increasingly challenging downscaling ratios of 4, 16, and 64. The raw data were collected with optical disdrometers (OTT Parsivel2) located at three different sites.

We analyze (i) the estimation and selection of cascade generator models and their impact on performance when moving from event-based to climatological-average key parameters, (ii) the statistical properties of the downscaled rainfall time series across scales, events, and cascade types, using standard scores, quantile comparisons, and Universal Multifractal analysis, and (iii) the relative strengths and limitations of each method in terms of ensemble spread, temporal dependence structure, and extreme rainfall reproduction. By jointly evaluating multiple methods on identical datasets, we aim to advance the science behind stochastic rainfall disaggregation and lay the foundation for further model refinements and application-driven method selection.
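As background for the comparison, the core idea of a multiplicative random cascade can be sketched with a simplified canonical cascade (an illustrative stand-in, not the UM or EDA implementations evaluated in the study):

```python
import numpy as np

def cascade_disaggregate(total, n_levels, rng, p_dry=0.3):
    """Disaggregate a rainfall total with a simple canonical
    multiplicative cascade (illustrative; not the UM or EDA models).
    Each level splits every interval in two; the two child weights
    sum to 1, so rainfall mass is conserved exactly."""
    series = np.array([total])
    for _ in range(n_levels):
        w = rng.beta(2.0, 2.0, series.size)        # left-child weight in (0, 1)
        # occasionally push all mass to one side to create intermittency
        dry = rng.random(series.size) < p_dry
        w[dry] = rng.integers(0, 2, dry.sum())
        series = np.column_stack((series * w, series * (1 - w))).ravel()
    return series

rng = np.random.default_rng(42)
# six dyadic levels give a downscaling ratio of 64, as in the study's hardest case
fine = cascade_disaggregate(24.0, 6, rng)
```

Cascade variants differ chiefly in how the weight generator is specified and estimated, which is exactly the model-selection question addressed in point (i) above.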

How to cite: Schleiss, M. and Gires, A.: One Dataset, Multiple Cascades: Insights from a Joint Evaluation of Stochastic Rainfall Downscaling Methods in France and the Netherlands, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-9128, https://doi.org/10.5194/egusphere-egu26-9128, 2026.

14:25–14:35
|
EGU26-1043
|
ECS
|
On-site presentation
Ali Ulvi Galip Senocak

Precipitation drives the hydrologic cycle and directly impacts sectors from agriculture to electricity generation. However, modeling its statistical distribution is challenging. Precipitation data typically consist of frequent dry days with zero values mixed with rare, extreme events. Both ends of this spectrum can cause disasters, such as flash floods or severe droughts. In the Eastern Mediterranean, this challenge is compounded by complex topography and changing climate patterns. While machine learning (ML) models are widely used for classification or regression of precipitation, they often treat large areas as uniform regions. This generalization misses important local features, such as orographic lifting along mountains or rain shadows in interior basins. Furthermore, most operational models focus only on minimizing error metrics through exact point predictions. Like the spatial generalization, this ignores forecast uncertainty, which is essential for risk-based decision-making.

This study addresses these issues by developing a spatially explicit deep learning framework based on the Probability Integral Transform (PIT). Training models on raw precipitation amounts often leads to underestimating extremes and assigning trace amounts to dry days, because machine learning models tend to regress to the mean or the overrepresented classes. To solve this, the target variable (i.e., precipitation based on EOBS data) is transformed into a probability space. Each 0.1-degree pixel is normalized using its own cumulative distribution function (CDF) calculated from the 1985–2015 climatology. Instead of a fixed baseline assumption, the Pettitt test is applied to each pixel to detect structural breaks in the historical time series, under the constraint that at least the last 10 years (2005–2015) are retained for the CDF analysis to ensure sufficient data. This ensures that the reference climatology reflects current hydro-climatic conditions.

The deep learning model utilized in this study uses downscaled Global Forecasting System (GFS) forecasts with a 24-hour horizon. To capture the vertical structure of the atmosphere, inputs include wind components (u, v), geopotential height, and specific humidity at 500, 700, and 850 hPa pressure levels. This multi-level approach allows the model to learn the interactions between large-scale circulation, mid-tropospheric moisture transport, and low-level topographical effects. This offers a significant physical advantage over surface-only models. The study covers the period from 2015 to 2025, divided into training (2015–2020), hyperparameter tuning and validation (2020–2022), and testing (2022–2025) sets.

Finally, the deep learning model is extended with conformal prediction to bridge the aforementioned gap between exact point predictions and forecast uncertainty. Unlike traditional approaches that assume a specific error distribution (e.g., Gaussian), conformal prediction yields distribution-free prediction intervals with a coverage guarantee. This results in adaptive confidence bounds: a widened confidence interval during unstable weather patterns and a narrowed one during stable atmospheric conditions. Consequently, the proposed approach ensures that the output is not just a forecast, but a reliable measure of its certainty across the diverse climates and topography of the Eastern Mediterranean.
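A split-conformal interval of the kind described can be sketched as follows (generic point-forecast residuals and parameter choices are assumed for illustration; the study's actual conformal variant may differ):

```python
import numpy as np

def split_conformal_interval(cal_resid, y_pred, alpha=0.1):
    """Split conformal prediction: use absolute residuals on a held-out
    calibration set to build distribution-free intervals with
    approximately (1 - alpha) coverage (illustrative sketch)."""
    n = len(cal_resid)
    # finite-sample-corrected quantile of the calibration residuals
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(np.abs(cal_resid), level)
    return y_pred - q, y_pred + q

rng = np.random.default_rng(1)
cal_resid = rng.normal(0, 2.0, 500)   # residuals of any point forecaster on held-out data
lo, hi = split_conformal_interval(cal_resid, y_pred=np.array([3.0, 7.5]))
```

No distributional assumption enters: the interval width is read directly off the empirical residual quantiles, which is what yields the distribution-free coverage guarantee noted above.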

How to cite: Senocak, A. U. G.: Probabilistic Precipitation Forecasting over the Eastern Mediterranean via PIT-Normalized Conformal Quantile-MOS, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1043, https://doi.org/10.5194/egusphere-egu26-1043, 2026.

14:35–14:45
|
EGU26-2939
|
On-site presentation
Hannes Müller-Thomy, Gioia Groth, Sinuhé Alejandro Sánchez Martínez, Maritza Liliana Arganis Juárez, and Kai Schröter

Temporal high-resolution design rainfall is frequently required for the dimensioning of critical infrastructure. While daily precipitation time series are generally of sufficient length to derive design rainfall for high return periods (e.g. T=100 years), the limited length of high-resolution time series often only allows for the reliable derivation of lower return periods.

Using the proposed duration adjustment factors (DAFs), design rainfall can be scaled from coarser duration levels to finer duration levels such as D={5 min, 1 h}. The DAFs were derived and evaluated nationwide for Germany based on the national rainfall extreme value catalogue KOSTRA-DWD-2020 for various durations D and return periods T (D={5 min, …, 24 h}, T={1 year, …, 100 years}). In addition, the influence of physiographic characteristics (climate zone, land use, elevation, slope, and distance to the sea) was investigated using Spearman’s rank correlation coefficient ρ for continuous variables and the effect size η² for categorical characteristics.

The DAFs depend strongly on the basis duration level (D=24 h or D=1 h) from which the scaling is applied, but show only a weak dependence on the considered return period. Elevation exhibits a weak to moderate influence, which is greater than the influence of slope and distance to the sea. Climate zone has a moderate effect on the DAFs, whereas land use exerts only a weak influence.

For 1,414 selected KOSTRA-DWD-2020 grid cells, design rainfall values with D={5 min, 60 min} were generated from daily design rainfall values (D=1 day) and validated against the original high-resolution design rainfall values from KOSTRA-DWD-2020. The impact of taking elevation into account when deriving the DAFs was examined as well. Three elevation clusters were defined, and the DAFs were derived (i) separately within each cluster and (ii) without considering clustering. Without clustering, the generation of design rainfall from an initial duration of D=1 day with T=100 years results in a relative RMSE (rRMSE) of 10 % for D=1 h, which is below the data-based uncertainty of 25 % reported by KOSTRA-DWD-2020. For D=5 min, an rRMSE of 15 % is obtained, which is slightly lower than the KOSTRA-DWD-2020 uncertainty of 18 %. Clustering leads to only a minor improvement in the median performance (considering all 1,414 grid cells), but results in a substantial reduction in the spread, i.e. the resulting uncertainties. Notably, the quality of the generated design rainfall does not deteriorate when DAFs for T=2 years are used instead of those for T=100 years, although the former can already be estimated on the basis of relatively short time series.

Consequently, the DAF approach provides a solution for deriving design rainfall for short durations and high return periods in regions where long observed daily precipitation time series are available, but only short high-resolution precipitation records exist, which is the case in most regions worldwide.
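Operationally, the DAF idea reduces to a simple multiplicative scaling; the factor values below are purely illustrative placeholders, not KOSTRA-DWD-2020 values:

```python
# Hypothetical DAF lookup: scale a daily design rainfall depth down to
# finer durations. The factor values are illustrative only.
daf = {"1 h": 0.45, "5 min": 0.18}   # assumed ratios h(D) / h(24 h)

def design_rainfall(h_daily_mm, duration):
    """Scale a daily design rainfall depth (mm) to a shorter duration
    via a duration adjustment factor (illustrative sketch)."""
    return h_daily_mm * daf[duration]

h24 = 80.0                            # e.g. a T = 100 yr daily design depth (mm)
h1h = design_rainfall(h24, "1 h")     # scaled 1-hour design depth
```

In the study, such factors are estimated per region (optionally per elevation cluster) and per return period, rather than fixed constants as in this sketch.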

How to cite: Müller-Thomy, H., Groth, G., Sánchez Martínez, S. A., Arganis Juárez, M. L., and Schröter, K.: Generation of high-resolution design rainfall using duration adjustment factors, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-2939, https://doi.org/10.5194/egusphere-egu26-2939, 2026.

14:45–14:55
|
EGU26-15956
|
On-site presentation
Li-Pen Wang, Chien-Yu Tseng, and Christian Onof

Stochastic convective storm generators are widely used for hydrological and climate-impact applications; however, most existing methods suffer from two fundamental limitations. First, once a convective cell is sampled, its properties are typically assumed to remain constant throughout its lifetime, neglecting the intrinsic evolution of cell intensity, size, and structure during growth and decay. Second, storm events are commonly generated by repeatedly sampling cell properties from fixed distributions, which limits inter-event variability and prevents systematic modulation of storm characteristics by large-scale weather or climate conditions, despite growing evidence that convective cell properties depend on variables such as near-surface temperature.

To address these limitations, this study develops a spatial–temporal convective storm generator that explicitly represents the lifecycle evolution of individual convective cells and its dependence on temperature. Storm arrivals are described using a point-process formulation, while individual storms are modelled as clusters of rainfall cells whose intensity and geometric properties evolve dynamically through time. The temporal evolution of cell properties is governed by a copula-based lifecycle model, within which key statistical parameters are conditioned on near-surface temperature using a regression-based model. Although the temperature dependence is introduced at the level of individual cell evolution, it propagates through the generator to influence storm-scale structure and inter-event variability.

The model is calibrated using 167 convective storm events observed over the Birmingham region (UK) between 2005 and 2017, identified and tracked with a state-of-the-art storm-tracking algorithm that provides detailed information on cell tracks and physical properties, including rainfall intensity, spatial extent, lifetime, storm duration, and motion. Results show that the proposed generator more realistically reproduces observed intra-event evolution, storm-to-storm variability, and extreme rainfall behaviour than conventional generators based on stationary cell assumptions. The resulting temperature-dependent storm generator offers a computationally efficient and physically consistent alternative to convection-permitting models for applications requiring large ensembles of convective rainfall realisations.

How to cite: Wang, L.-P., Tseng, C.-Y., and Onof, C.: Spatial-temporal modelling of convective storms with temperature-conditioned convective cell lifecycles, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-15956, https://doi.org/10.5194/egusphere-egu26-15956, 2026.

14:55–15:05
|
EGU26-18143
|
On-site presentation
Thomas Frisius, Torsten Weber, Sophie Biskop, Muhammad Fraz Ismail, and Francois Engelbrecht

This study addresses the challenges of simulating precipitation in South Africa using the convection-permitting climate model REMO-NH. In the WaRisCo project, which focuses on hydroclimatic extremes under a changing climate, a realistic representation of precipitation is essential for providing suitable forcing data for hydrological modelling. Traditional regional climate models (RCMs) with resolutions of about 11 km have the limitation of not accurately reproducing extreme precipitation events such as thunderstorms. Convection-permitting RCMs (CP-RCMs) represent an alternative that offers a higher resolution and explicit simulation of convection.

For the study, the non-hydrostatic climate model REMO-NH is adopted with a resolution of about 3 km and driven by ERA5 using the double nesting technique. It enables explicit simulation of deep cumulus clouds with high vertical velocities. As entrainment of ambient air strongly influences precipitation, its representation depends critically on horizontal turbulent transfer in the model. In the standard model setup, second-order horizontal diffusion (DIFF2) takes care of this transfer. However, excessively high precipitation occurs in the autumn and winter seasons in comparison to the CHIRPS precipitation data.

A simulation with fourth order horizontal diffusion (DIFF4) reveals an even stronger precipitation bias. As an alternative to artificial diffusion, a 3D turbulence scheme has been implemented. A simulation with this scheme (TURB3D) removes this bias. Further evaluation of the results shows that the bias appears mainly for intermediate values in the frequency distribution and that the boundary layer moisture and, therefore, CAPE (convective available potential energy), are higher in the simulations with artificial horizontal diffusion. These results demonstrate that accurate treatment of 3D turbulent exchange is essential for improving convection-permitting simulations, and it will, therefore, be used for the km-scale climate projections within the WaRisCo project, which is part of the “Water Security in Africa – WASA” program.

How to cite: Frisius, T., Weber, T., Biskop, S., Ismail, M. F., and Engelbrecht, F.: The sensitivity of convective precipitation in South Africa to horizontal turbulent exchange in the km-scale regional climate model REMO-NH, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18143, https://doi.org/10.5194/egusphere-egu26-18143, 2026.

15:05–15:15
|
EGU26-2553
|
ECS
|
On-site presentation
Mohammed Azharuddin, David Pritchard, and Hayley Fowler

We present a multi-site weather generator with a stochastic rainfall field generator (RFG) at its core. The weather generator is developed to produce downscaled projections for the future by utilizing the UKCP18 projections and a suite of climate models from the CMIP5/6 archive. The rainfall fields are sampled from the spatio-temporal Neyman-Scott Rectangular Pulse (NSRP) process. When considering a single site, the NSRP model parameterizes storm arrivals as a Poisson process and storm separation times as exponentially distributed. Each storm is assigned a number of raincells (a Poisson random number), with each raincell having an exponentially distributed duration and intensity. For the multi-site model, additional considerations include the raincell radius, parameterized by an exponential distribution, and the raincell density, modelled as a uniform Poisson process (replacing the raincell generation process of the single-site model). The RFG has shown its efficacy in capturing the statistics of the observed rainfall across point and catchment scales, including mean monthly rainfall totals, daily variance, skewness, lag-1 autocorrelation, dry-day proportion, and daily annual maxima, in addition to capturing intergauge correlations. Following the calibration and testing of the NSRP-based RFG, other weather variables such as temperature and wind speed are derived through regression relationships conditioned on wet and dry transition states of rainfall. With the RFG established, climate model downscaling is performed by computing multiplicative and additive change factors for rainfall and temperature, respectively. The RFG parameters are perturbed by the computed change factor(s) to derive downscaled projections of precipitation, thereby offering multiple plausible future scenarios together with a band of uncertainty associated with the projections.
These projections can be further translated to hydrological responses by leveraging hydrological models thereby aiding in climate change impact assessment and adaptation.
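The single-site NSRP sampling described above can be sketched as follows (parameter values are illustrative, not calibrated, and the multi-site raincell-density extension is omitted):

```python
import numpy as np

def nsrp_single_site(t_max, lam, beta, mu_c, eta, xi, rng):
    """Single-site Neyman-Scott Rectangular Pulses sketch.
    Storm origins: Poisson process with rate lam (per hour).
    Each storm spawns a Poisson number of raincells (mean mu_c),
    displaced from the origin by exponential waiting times (rate beta);
    each cell has exponential duration (rate eta) and intensity (mean xi)."""
    n_storms = rng.poisson(lam * t_max)
    origins = rng.uniform(0, t_max, n_storms)   # Poisson origins are uniform given the count
    cells = []
    for t0 in origins:
        for _ in range(max(1, rng.poisson(mu_c))):
            start = t0 + rng.exponential(1 / beta)
            cells.append((start, start + rng.exponential(1 / eta),
                          rng.exponential(xi)))
    # accumulate the rectangular pulses onto an hourly grid
    hours = np.arange(t_max)
    rain = np.zeros(t_max)
    for s, e, i in cells:
        overlap = np.clip(np.minimum(e, hours + 1) - np.maximum(s, hours), 0, 1)
        rain += i * overlap
    return rain

rng = np.random.default_rng(7)
series = nsrp_single_site(24 * 30, lam=0.01, beta=0.5, mu_c=4, eta=0.8,
                          xi=2.0, rng=rng)     # one month of hourly rainfall
```

In the change-factor step, it is these distribution parameters (rather than the simulated series) that are perturbed to represent future climates.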

How to cite: Azharuddin, M., Pritchard, D., and Fowler, H.: Change Factor Based Downscaling of Precipitation Through Neyman-Scott Rectangular Pulse based Rainfall Field Generators, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-2553, https://doi.org/10.5194/egusphere-egu26-2553, 2026.

15:15–15:25
|
EGU26-7985
|
ECS
|
On-site presentation
Eulàlia Busquets, Stefano Serafin, Mireia Udina, and Joan Bech

In numerical weather prediction models, microphysics schemes represent water vapor, cloud, and precipitation processes. These schemes rely on fixed parameters that are inherently uncertain or known to vary in space and time, such as the densities of snow and graupel. Inaccurate specification of these parameters leads to errors in the partitioning of surface precipitation into liquid and ice phases. To assess the sensitivity of model results to these parameters, in this study the Weather Research and Forecast (WRF) model version 4.5 was used to perform a set of idealized two-dimensional simulations of wintertime stable orographic precipitation. The design of the experiment was inspired by observations made on 25 and 26 October 2024 on the southern slope of the Pyrenees. The model configuration includes a mountain centered in the domain with a height of 1500 m and a half-width of 10 km, a horizontal grid spacing of 1 km, and 200 vertical levels. Microphysical processes are parameterized with the Thompson scheme, which is characterized by a special snow treatment that includes snow-size distribution dependence on ice water content and temperature, and a nonspherical shape of snow particles.

Model sensitivity was assessed by running ensemble simulations, created by varying 6 empirical parameters of the microphysical scheme: the exponent a in the snow mass–size relation (aₘₛ), graupel density (ρg), the shape parameter of the gamma particle size distribution for rain (μr), snow (μs), and graupel (μg), and the coefficient controlling the conversion of rimed snow to graupel (rsg). Two sets of experiments were conducted. First, 6 single-parameter perturbation experiments were run, each with 64 members. Second, a multi-parameter perturbation experiment was run with 1024 members, in which all parameters were perturbed simultaneously. Preliminary results indicate that cloud and snow species exhibit the strongest response to single-parameter perturbations, with particularly high sensitivity to aₘₛ and μs. Specifically, increasing aₘₛ leads to snow at higher altitudes (5000–6000 m), while increasing μs lowers the melting layer to approximately 3000 m.
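The multi-parameter perturbation design can be sketched as a simple Monte Carlo draw over the six parameters (the ranges below are assumed for illustration; the abstract does not state the actual sampling ranges or design):

```python
import numpy as np

# Illustrative ranges for the six perturbed parameters; the actual
# ranges used in the study are not given in the abstract.
params = {"a_ms": (1.8, 2.2), "rho_g": (300.0, 900.0), "mu_r": (0.0, 3.0),
          "mu_s": (0.0, 3.0), "mu_g": (0.0, 3.0), "rsg": (0.25, 0.75)}

def sample_ensemble(n_members, rng):
    """Draw a multi-parameter perturbation ensemble by sampling each
    parameter independently and uniformly within its assumed range
    (simple Monte Carlo; the study's exact design may differ)."""
    return [{k: rng.uniform(lo, hi) for k, (lo, hi) in params.items()}
            for _ in range(n_members)]

rng = np.random.default_rng(0)
ensemble = sample_ensemble(1024, rng)   # one parameter set per WRF member
```

Each dictionary would then configure one WRF member, and the spread of the resulting precipitation fields measures the sensitivity to the perturbed parameters.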

This research has been funded by projects ARTEMIS (PID2021-124253OB-I00), LIFE22-IPC-ES-LIFE PYRENEES4CLIMA and the Institute for Water Research (IdRA) of the University of Barcelona.

How to cite: Busquets, E., Serafin, S., Udina, M., and Bech, J.: Sensitivity of Microphysical Parameters in the Thompson Scheme Using Idealized WRF Simulations, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7985, https://doi.org/10.5194/egusphere-egu26-7985, 2026.

15:25–15:35
|
EGU26-20546
|
ECS
|
On-site presentation
Jorge Sebastián Moraga, Nans Addor, Natalie Lord, and Chris Lucas

High-resolution climate projections are essential for hydrological and meteorological impact assessments, yet dynamical numerical simulations remain computationally prohibitive for large ensembles and domains. Generative AI methods, specifically probabilistic diffusion models (DMs), offer a promising, computationally efficient alternative. Recently, these models have demonstrated skill in reproducing historical data and serving as efficient emulators of dynamical models. The question is, therefore, whether models trained on historical observations can infer the non-stationary statistics of future climate projections.

In this work, we downscale CESM2-LENS simulations over large domains using a DM trained on reanalysis data. We investigate the model's capability to bridge the scale gap between GCM outputs (~100 km resolution) and data requirements for local hydrological impact modelling (~10 km resolution) under both historical and end-of-century scenarios. Furthermore, we compare the diffusion-based approach with the outputs of the state-of-the-art WRF dynamical model, with a focus on the changes to key hydrometeorological indices. By benchmarking DM-downscaled data against both dynamically-downscaled data and GCM baselines, we aim to assess the trade-offs between computational efficiency and physical consistency, offering insights into the generalization limits of generative AI for climate change impact studies.

How to cite: Moraga, J. S., Addor, N., Lord, N., and Lucas, C.: Downscaling Precipitation Projections using Generative AI: Benchmarking against the WRF Dynamical Climate Model, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-20546, https://doi.org/10.5194/egusphere-egu26-20546, 2026.

15:35–15:45
|
EGU26-8527
|
On-site presentation
Xin Li

Sub-daily precipitation data are critical for continuous hydrological simulations in urban watersheds characterized by short concentration times. Unlike the widespread availability of daily precipitation data, sub-daily precipitation data are relatively limited due to the expensive monitoring instrumentation for long-term observation and high computational requirements for high-resolution simulations based on convection-permitting climate models. This study introduces a climate-informed hourly precipitation generator built on the method of fragments (MOF) framework, which accounts for both the thermodynamic influence of air temperature and the dynamic effects of atmospheric circulation patterns on sub-daily precipitation characteristics. The proposed hourly precipitation generator is extended to a multi-site configuration to preserve spatial dependencies in the simulated precipitation field. Application to stations across Germany demonstrates the effectiveness of the proposed hourly precipitation generator in reconstructing both the at-site statistical attributes and the inter-site spatial correlations. This approach provides an effective methodology for generating sub-daily precipitation inputs required for continuous hydrological modeling and flood risk assessments.
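The core method-of-fragments step can be sketched as follows (a minimal unconditioned version; the generator described above additionally conditions the fragment pool on air temperature and circulation patterns, and extends it to multiple sites):

```python
import numpy as np

def mof_disaggregate(daily_total, fragment_pool, rng):
    """Method of fragments: disaggregate a daily total into 24 hourly
    values by borrowing the relative hourly pattern (fragments summing
    to 1) of a randomly chosen observed day (illustrative sketch)."""
    fragments = fragment_pool[rng.integers(len(fragment_pool))]
    return daily_total * fragments

rng = np.random.default_rng(3)
# synthetic pool of observed hourly patterns, each row normalised to sum to 1
raw = rng.gamma(0.5, 1.0, (50, 24))
pool = raw / raw.sum(axis=1, keepdims=True)
hourly = mof_disaggregate(12.0, pool, rng)   # 12 mm day split into 24 hourly values
```

Because the fragments sum to one, the daily total is preserved exactly while the sub-daily structure is inherited from observed days.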

How to cite: Li, X.: A Climate-Informed Hourly Precipitation Generator Accounting for Thermodynamic and Dynamic Effects, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-8527, https://doi.org/10.5194/egusphere-egu26-8527, 2026.

Coffee break
Chairpersons: Dongkyun Kim, Roberto Deidda
16:15–16:20
16:20–16:30
|
EGU26-15328
|
On-site presentation
Ali Nazemi, Ramin Ahmadi, and Amin Hammad

Diagnosing precipitation type (ptype) is a major source of uncertainty in hydroclimatological applications. We propose a systematic framework for benchmarking the algorithms used for identifying ptype in numerical weather prediction and climate models. Six widely used ptype algorithms, proposed by Derouin (1973), Cantin & Bachand (1993), Baldwin & Contorno (1993), Ramer (1993), Bourgouin (2000), and the European Centre for Medium-Range Weather Forecasts (ECMWF, 2024), are considered over a box region in northeastern North America with Montreal at its center. The benchmarking uses hourly data collected at 25 Automated Surface Observing Systems during the period 2007 to 2024. All ptype algorithms are fed with ERA5 single- and pressure-level climate reanalysis fields at 0.25° resolution. We consider four skills for benchmarking: (1) efficiency at the local scale, (2) temperature conditioning at the regional scale, as well as (3) spatial and (4) spatiotemporal coherence. For assessing efficiency at the local scale, we use precision, recall, and F1-score, which reveal how modeled ptypes compare with observed ones at each station. For regional temperature conditioning, we extract probabilities of ptypes conditioned on near-surface temperature and compare the observed and modeled conditional density functions using the Kolmogorov–Smirnov test and the Wasserstein-1 (W1) distance. For both spatial and spatiotemporal coherence, we consider probabilities of co-occurrence and the Jaccard similarity index at the 0-hour time lag (spatial) and 1–48-hour lags (spatiotemporal), and quantify agreement between modeled and observed ptypes using the F1-score. Our results show a pronounced weakness of current ptype algorithms in distinguishing rare and high-impact ptypes, such as freezing rain and ice pellets. Temperature conditioning shows that rain, freezing rain, and ice pellets are frequently shifted toward colder regimes, with W1 reaching up to 8.3 °C. While rain classification shows moderate spatial realism, the skill for snow and freezing rain is substantially weaker. When temporal structure is added, coherence declines even further, with Bourgouin (2000) standing out among the algorithms with an F1-score reaching 0.5 for freezing rain and 0.61 for other/mixed types. Our findings are a call for improving ptype algorithms in weather and climate models, particularly for predicting rare but high-impact ptypes.
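The local-scale and coherence metrics used here reduce to standard contingency counts; a minimal sketch (the toy label arrays are ours for illustration):

```python
import numpy as np

def f1_score(obs, mod, cls):
    """F1-score for one ptype class from observed/modelled label arrays."""
    tp = np.sum((obs == cls) & (mod == cls))   # hits
    fp = np.sum((obs != cls) & (mod == cls))   # false alarms
    fn = np.sum((obs == cls) & (mod != cls))   # misses
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def jaccard(a, b, cls):
    """Jaccard similarity of the occurrence sets of one ptype class."""
    a_set, b_set = a == cls, b == cls
    union = np.sum(a_set | b_set)
    return np.sum(a_set & b_set) / union if union else 0.0

obs = np.array(["rain", "snow", "rain", "frz", "snow", "rain"])
mod = np.array(["rain", "rain", "rain", "frz", "snow", "snow"])
```

For the spatiotemporal variant, the same Jaccard computation is applied after shifting one label series by the chosen time lag.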

How to cite: Nazemi, A., Ahmadi, R., and Hammad, A.: A framework for benchmarking precipitation type classifiers used in weather and climate models, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-15328, https://doi.org/10.5194/egusphere-egu26-15328, 2026.

16:30–16:40
|
EGU26-561
|
ECS
|
On-site presentation
Debdut Sengupta and Sreeparvathy Vijay

Increasing anthropogenic activities in the post-industrial era, coupled with variability in natural forcings (e.g., solar radiation, volcanic eruptions) and changes in geomorphological characteristics, make the climate highly non-stationary. This hinders effective climate projections, adaptation and mitigation strategies for extreme weather events, hydraulic structure planning, and irrigation activity. Regionalization, the process of demarcating regions of similar hydroclimatic characteristics, is therefore essential for water resources planning and management. However, no existing approach accounts for the non-stationarity inherent in hydroclimatic variables (e.g., precipitation, temperature, humidity, water level) during regionalization. The most widely used feature-based clustering techniques identify key static attributes of the hydroclimatic time series to detect dominant patterns. However, these methods often fail to capture the temporal dynamics and evolving non-stationary characteristics of climate variables, a major concern in the era of climate change. To address this research gap, this study pursues two major objectives: (a) developing a novel model-based regionalization procedure that accounts for non-stationarity in hydroclimatic time series, and (b) evaluating the performance of the proposed methodology against existing regionalization approaches using a real-world case study for the Indian subcontinent.

By coupling Latent Gaussian State Space Models (LGSSM) with advanced fuzzy ensemble clustering techniques, the proposed methodology aims to capture the inherent non-stationarity of hydroclimatic data, yielding better domain-informed homogeneous regions. Widely used in data science for prediction and grouping, the LGSSM is a parametric model flexible enough to effectively describe non-stationary climate variables in Euclidean space. Further, fuzzy ensemble clustering techniques aggregate results from multiple clustering realizations, mitigating the biases inherent in any single clustering approach, and incorporate fuzzy set theory by assigning membership degrees to each study-area grid cell. Cluster validity indices such as the Dunn Index and the Davies-Bouldin Index are used to find the optimal number of clusters based on intra-cluster compactness and inter-cluster separation.

Hydroclimatic datasets (e.g., IMD data, ERA5 reanalysis data) are obtained at 0.25°×0.25° spatial resolution and daily temporal frequency for the Indian subcontinent. The methodology identified optimal cluster numbers of K=10 for precipitation and K=6 for temperature. Final homogeneous regions are delineated by integrating topographical features such as distance from the sea and elevation. The identified major climate regions are: (a) Northern Cold Himalayan Zone, (b) Thar Desert Area, (c) Indo-Gangetic Plain, (d) Southern Peninsular Region, (e) Western Ghats Area, and (f) Dry Semi-Arid Zone. These regions are validated using regional homogeneity tests such as the Hosking–Wallis test. This study is the first to integrate advanced state space modeling with fuzzy ensemble clustering for climatic regionalization, marking a shift in hydrology research from relying solely on basin-scale boundaries to an integrated approach that considers both atmospheric and physiographic boundaries. The proposed methodology provides a ready-to-use, powerful tool for homogeneous regionalization and future projections of complex non-stationary hydroclimatic variables.
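As an illustrative sketch (not the authors' implementation), the Davies-Bouldin index used above to select the number of clusters can be computed directly from cluster assignments; lower values indicate compact, well-separated clusters:

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index: mean over clusters of the worst-case ratio of
    summed intra-cluster scatter to inter-centroid distance."""
    X = np.asarray(X, float)
    labels = np.asarray(labels)
    ks = np.unique(labels)
    cents = np.array([X[labels == k].mean(axis=0) for k in ks])
    # mean distance of each cluster's points to its own centroid
    s = np.array([np.mean(np.linalg.norm(X[labels == k] - c, axis=1))
                  for k, c in zip(ks, cents)])
    db = 0.0
    for i in range(len(ks)):
        ratios = [(s[i] + s[j]) / np.linalg.norm(cents[i] - cents[j])
                  for j in range(len(ks)) if j != i]
        db += max(ratios)
    return db / len(ks)
```

Scanning this index over candidate K values and picking the minimum is one common way to arrive at choices like K=10 and K=6, as done (together with the Dunn Index) in the study.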

How to cite: Sengupta, D. and Vijay, S.: A Novel Framework for Homogeneous Climate Regionalisation using Advanced State Space Modeling and Ensemble Fuzzy Clustering, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-561, https://doi.org/10.5194/egusphere-egu26-561, 2026.

16:40–16:50
|
EGU26-21742
|
ECS
|
On-site presentation
Debi Prasad Bhuyan, Pankaj Upadhyaya, and Saroj Kanta Mishra

South Asia—home to more than a quarter of the global population—faces escalating climate risks that require scientifically credible and actionable climate information. Yet current global climate models exhibit persistent temperature and precipitation biases, reaching up to 25% and 100% of their mean values, respectively, which limits their utility for regional assessments and policy planning. To address these limitations, we develop the South Asia Regional Climate Information (SARCI) framework: a regionally optimized, process-informed system designed to improve simulations of the South Asian Summer Monsoon (SASM) and generate high-fidelity climate information.

SARCI features a customized atmospheric model based on NCAR CESM/CAM5, incorporating targeted enhancements to key physical parameterizations—stochastic entrainment for deep convection (STOCH), a dynamic convective adjustment timescale (DTAU), supplementary gravity-wave sources (GW), and region-specific similarity functions for land–air turbulent fluxes (LTF)—alongside structured parameter tuning and a statistical bias-correction and downscaling module. A systematic component-wise attribution quantifies the incremental influence of each enhancement. DTAU reduces precipitation biases and improves the annual cycle through better moisture convergence, cloud cover, and equatorial waves. STOCH and GW improve precipitation, circulation, and moisture distribution, with STOCH providing additional skill in equatorial waves. LTF primarily improves near-surface temperature with marginal precipitation benefits. Parameter tuning consolidates these gains and resolves residual inconsistencies, while the downscaling module corrects remaining magnitude errors and delivers quarter-degree, policy-relevant fields.

Together, these sequential improvements reduce longstanding SASM-related biases, yield more realistic regional circulation, and preserve acceptable global model performance. By clarifying the physical origins of model improvements and integrating co-production and regional optimization, the SARCI framework provides credible, actionable climate information for South Asia and offers a scalable pathway for other climate-vulnerable regions of the Global South.

How to cite: Bhuyan, D. P., Upadhyaya, P., and Mishra, S. K.: Process-Informed Regional Climate Modeling for South Asia: The SARCI Framework, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21742, https://doi.org/10.5194/egusphere-egu26-21742, 2026.

16:50–17:00
|
EGU26-7614
|
ECS
|
On-site presentation
Janni Mosekær Nielsen, Michael Robdrup Rasmussen, Søren Thorndahl, Ida Kemppinen Vester, Malte Kristian Skovby Ahm, and Jesper Ellerbæk Nielsen

Weather radar nowcasting is a crucial technique in real-time urban hydrological applications, as weather radars provide spatially distributed rainfall measurements. Uncertainties in weather radar nowcasting, stemming from errors in rainfall observations, motion field estimates, and rainfall evolution predictions, are, however, inevitable. In this study, we implement a deep learning model well established in computer science and image processing to estimate weather radar motion fields for nowcasting.

The deep learning model, Recurrent All-Pairs Field Transform (RAFT), developed by Teed and Deng (2020), is demonstrated to outperform several existing deep learning models for optical flow estimation. The RAFT model consists of a feature encoder that extracts features from consecutive images, a correlation layer that computes visual similarities, and a recurrent unit that iteratively updates the estimated flow field. The method is computationally efficient and highly accurate, making it relevant in real-time applications. Due to the similarities between image processing and weather radar rainfall nowcasting, the method has the potential to produce accurate motion fields for extrapolating weather radar rainfall.

In this study, three years of observation data from a Danish C-band weather radar are used to nowcast 51 rainfall events. The rainfall events consist of both linear and non-linear rainfall pattern motions. We systematically compare weather radar rainfall forecasted with Lagrangian persistence using six different motion field approaches: Global vector, COTREC (Li et al., 1995), VET (Variational Echo Tracking; Germann and Zawadzki, 2002), Lucas-Kanade (Lucas and Kanade, 1981), DARTS (Dynamic and Adaptive Radar Tracking of Storms; Ruzanski et al., 2011), and RAFT.

RAFT-based optical flow is shown to perform statistically as well as the well-established VET and Lucas-Kanade methods and to outperform the global vector, COTREC, and DARTS. It is demonstrated that RAFT produces accurate and robust motion fields for both linear and non-linear rainfall motion. Thus, the RAFT model for optical flow estimation is highly relevant for weather radar nowcasting in urban hydrological applications.
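The Lagrangian-persistence forecasts compared in this abstract can be illustrated with a minimal sketch (not the authors' code): once a motion field has been estimated, by RAFT or any other optical-flow method, the latest radar field is advected along it while rainfall intensities are held constant. For simplicity, the sketch below assumes a single spatially uniform, integer-pixel motion vector:

```python
import numpy as np

def lagrangian_persistence(field, u, v, steps):
    """Advect the latest radar field along a uniform (u, v) pixels-per-step
    motion vector; intensities are frozen (persistence), only position moves."""
    forecasts = []
    f = field
    for _ in range(steps):
        f = np.roll(f, shift=(v, u), axis=(0, 1))  # integer-pixel advection
        forecasts.append(f)
    return forecasts
```

Real nowcasting systems use a spatially varying motion field and sub-pixel (semi-Lagrangian) interpolation, but the forecast principle, extrapolation of the last observation along the flow, is the same.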

References:

Germann, U., Zawadzki, I., 2002. Scale-Dependence of the Predictability of Precipitation from Continental Radar Images. Part I: Description of the Methodology. Mon Weather Rev 130, 2859–2873. https://doi.org/10.1175/1520-0493(2002)130<2859:SDOTPO>2.0.CO;2

Li, L., Schmid, W., Joss, J., 1995. Nowcasting of Motion and Growth of Precipitation with Radar over a Complex Orography. J Appl Meteorol Climatol 34, 1286–1300. https://doi.org/10.1175/1520-0450(1995)034<1286:NOMAGO>2.0.CO;2

Lucas, B.D., Kanade, T., 1981. An iterative image registration technique with an application to stereo vision, in: IJCAI’81: 7th International Joint Conference on Artificial Intelligence. pp. 674–679

Ruzanski, E., Chandrasekar, V., Wang, Y., 2011. The CASA nowcasting system. J Atmos Ocean Technol 28, 640–655. https://doi.org/10.1175/2011JTECHA1496.1

Teed, Z., Deng, J., 2020. Raft: Recurrent all-pairs field transforms for optical flow, in: European Conference on Computer Vision. pp. 402–419

How to cite: Nielsen, J. M., Rasmussen, M. R., Thorndahl, S., Vester, I. K., Ahm, M. K. S., and Nielsen, J. E.: Optical Flow with Recurrent All-Pairs Field Transform (RAFT) for weather radar nowcasting, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7614, https://doi.org/10.5194/egusphere-egu26-7614, 2026.

17:00–17:10
|
EGU26-504
|
ECS
|
On-site presentation
Vaibhav Tyagi and Saurabh Das

Accurate precipitation estimates depend critically on the calibration fidelity of ground-based Doppler Weather Radar (DWR) systems. While these radars provide high-resolution observations essential for hydrological modelling and forecasting, their measurements often suffer from bias due to radar constant drift. Conventional calibration approaches, such as using metallic spheres, are operationally demanding and poorly maintained. As a result, biases in reflectivity can propagate, thereby degrading quantitative precipitation estimation (QPE) and introducing uncertainty into downstream applications.

This study develops a correction strategy that utilizes the well-calibrated reflectivity measurements from satellite radar (SR) to account for the systematic underestimation in ground radar (GR) measurements. A machine-learning approach based on the XGBoost algorithm is used to model the bias between GR and SR reflectivity along with key radar-geometric parameters, including range, elevation angle, and azimuth, to capture the spatial heterogeneity. The proposed framework is evaluated using eight years (2017-2024) of collocated observations from the C-band DWR at the Thumba Equatorial Rocket Launching Station (TERLS), Thiruvananthapuram, India. The proposed correction framework significantly enhances consistency between GR and SR observations. The correlation coefficient increases from 0.23 to 0.88 with a marked reduction in mean bias, mean absolute error and root mean squared error. The results demonstrate the potential of space-ground radar synergy to mitigate calibration-driven uncertainties and strengthen the reliability of near-real-time precipitation products. This framework offers a scalable pathway for enhancing operational QPE and for supporting climate-scale radar reflectivity reanalysis where long-term consistency is essential.
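A minimal sketch of the GR-SR bias-learning idea described above, using scikit-learn's gradient boosting as a stand-in for XGBoost and an entirely synthetic range-dependent bias (hypothetical data, not the authors' setup):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
range_km = rng.uniform(10, 150, n)        # distance from radar (km); one of
                                          # several geometric features in the study
sr = rng.uniform(15, 45, n)               # "satellite radar" reflectivity (dBZ)
gr = sr - 2.0 - 0.01 * range_km           # ground radar underestimates, worse with range

# Learn the GR-SR bias from radar-geometric features, then correct GR.
model = GradientBoostingRegressor(n_estimators=200, max_depth=2, random_state=0)
model.fit(range_km.reshape(-1, 1), sr - gr)
gr_corrected = gr + model.predict(range_km.reshape(-1, 1))
```

The study's actual model additionally uses elevation angle and azimuth as features and is trained on eight years of collocated observations; this sketch only shows the shape of the regression problem (features in, reflectivity bias out).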

How to cite: Tyagi, V. and Das, S.: Correction of Systematic Calibration Drift in Weather Radar Observations to Improve Precipitation Uncertainty Modelling, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-504, https://doi.org/10.5194/egusphere-egu26-504, 2026.

17:10–17:20
|
EGU26-20641
|
ECS
|
Virtual presentation
Chamal Perera, Nadee Peiris, Lalith Rajapakse, Nimal Wijayaratna, and Ajith Wijemannage

Long-term, accurate fine-scale precipitation estimates are essential for hydrological and climate-related analyses, particularly in regions characterized by strong spatial rainfall variability. This study introduces SLRainGrid-D05, the first high-resolution gridded daily precipitation dataset for Sri Lanka, developed at a spatial resolution of 0.05°×0.05° and covering the entire country, including the wet, intermediate, and dry climatic zones. Sri Lanka’s tropical climate exhibits pronounced spatial variability in annual rainfall, ranging from approximately 900 mm to 5,500 mm, which cannot be adequately captured by the sparsely distributed rain-gauge network alone. In addition, satellite-based precipitation products (SPPs) are known to exhibit considerable biases over the region.

To address these limitations, a spatially consistent gridded precipitation dataset was developed by merging ground-based observations with SPPs. An initial evaluation of two widely used SPPs, IMERG and CHIRPS, demonstrated that IMERG performs better at the daily time scale, while CHIRPS shows superior performance at monthly scale. Based on these findings, daily IMERG precipitation was downscaled from its native 0.1°×0.1° resolution to 0.05°×0.05° using CHIRPS rainfall as spatial reference information. The downscaled IMERG product was subsequently merged with rain-gauge observations using machine-learning-based approaches.

The study introduces a novel hybrid merging framework that integrates graph neural networks (GNN) with inverse distance weighting (IDW) to explicitly account for the spatial autocorrelation of rainfall. The proposed method was benchmarked against conventional machine-learning models, including random forest, extreme gradient boosting, support vector machines, and artificial neural networks. Results indicate that the hybrid GNN-IDW framework consistently outperforms these benchmark methods in both rainfall detection and magnitude estimation. Specifically, it achieved the highest probability of detection (0.97) and reduced root mean square error (RMSE) and mean absolute error (MAE) by 13–41% and 9–36%, respectively, relative to the original SPPs. The SLRainGrid-D05 dataset offers a reliable, high-resolution precipitation product and represents a valuable resource for hydrological modeling, climate analysis, and improved preparedness for hydrological extremes, supporting water resources assessment and management across Sri Lanka. The proposed methodology is also transferable to other tropical regions.
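The IDW component of the hybrid GNN-IDW framework can be sketched as a stand-alone function (an illustrative implementation, not the authors' code):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_target, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of gauge values to target points.
    Weights decay as distance**-power; a target coinciding with a gauge
    returns the gauge value exactly."""
    xy_obs = np.asarray(xy_obs, float)
    z_obs = np.asarray(z_obs, float)
    out = []
    for p in np.asarray(xy_target, float):
        d = np.linalg.norm(xy_obs - p, axis=1)
        if d.min() < eps:                      # exact hit on a gauge location
            out.append(z_obs[d.argmin()])
            continue
        w = 1.0 / d ** power
        out.append(np.sum(w * z_obs) / np.sum(w))
    return np.array(out)
```

In the hybrid framework this spatial-autocorrelation signal is combined with a graph neural network rather than used alone; the sketch shows only the IDW half.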

How to cite: Perera, C., Peiris, N., Rajapakse, L., Wijayaratna, N., and Wijemannage, A.: SLRainGrid-D05: High-Resolution Daily Precipitation Dataset for Sri Lanka Derived from Machine Learning and Satellite-Gauge Fusion, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-20641, https://doi.org/10.5194/egusphere-egu26-20641, 2026.

17:20–17:30
|
EGU26-21546
|
ECS
|
Virtual presentation
Abir Naceur, Hamouda Dakhlaoui, Giovanni Battista Chirico, and Anna Pelosi

Using a two-stage framework, this study evaluates five near-real-time (NRT) satellite precipitation products (GPM-IMERG V07, GSMaP V06, GSMaP V07, GSMaP V08, and PERSIANN-PDIR-NOW) over northern Tunisia. The evaluation is conducted at hourly temporal resolution using complementary point-to-pixel statistical analyses and hydrological modelling experiments.

The first stage consists of a comprehensive statistical assessment based on continuous, categorical, and event-based verification metrics. While continuous and categorical approaches have been widely used in previous studies, event-based evaluation methods have been applied far less frequently; their joint use in this study therefore provides a more comprehensive and complementary assessment of NRT precipitation products. 

The second stage involves a rainfall–runoff model to investigate how errors in satellite-derived precipitation propagate through the hydrological system and affect simulated streamflow.

Continuous metrics highlight considerable differences in performance among the five products. GSMaP-V8 and GPM-IMERG show the most consistent agreement with gauge observations, followed by GSMaP-V6, with Pearson correlation coefficients (PCC) ranging from 0.32 to 0.35 and RMSE values below 0.20 mm. By contrast, GSMaP-V7 shows lower performance. PERSIANN-PDIR-NOW systematically exhibits the weakest accuracy, characterized by low correlation and large error magnitudes.

Categorical verification confirms that GPM-IMERG has the highest rainfall detection capability, achieving probability of detection (POD) values exceeding 0.45 and critical success index (CSI) values above 0.23 for light and moderate rainfall thresholds. Conversely, PERSIANN-PDIR-NOW suffers from frequent false alarms, contributing to reduced categorical skill.
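The categorical scores quoted here derive from a simple rain/no-rain contingency table at each threshold; as an illustrative sketch (hypothetical data, not the authors' code):

```python
import numpy as np

def categorical_scores(obs, mod, threshold):
    """POD, FAR, and CSI for rain/no-rain classification at a given
    intensity threshold (e.g. mm/h)."""
    o = np.asarray(obs) >= threshold
    m = np.asarray(mod) >= threshold
    hits = np.sum(o & m)
    misses = np.sum(o & ~m)
    false_alarms = np.sum(~o & m)
    pod = hits / (hits + misses) if hits + misses else np.nan
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else np.nan
    csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else np.nan
    return pod, far, csi
```

A high FAR with moderate POD, the pattern reported for PERSIANN-PDIR-NOW, drags CSI down even when many rain hours are detected, since CSI penalizes both misses and false alarms.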

Event-based analyses reveal a general tendency of satellite products to overestimate rainfall event frequency and peak characteristics. GSMaP-V8 exhibits the most balanced and consistent overall performance. GPM-IMERG and GSMaP-V6 better reproduce mean event intensity. GSMaP-V7, however, systematically overestimates event depth, intensity, and peak timing. Moreover, PERSIANN-PDIR-NOW underestimates the mean event precipitation rate, accompanied by a peak rainfall timing shifted earlier relative to observations.

The hydrological evaluation shows that rainfall–runoff modeling propagates precipitation uncertainties non-linearly into simulated streamflow. GPM-IMERG, GSMaP-V7, and GSMaP-V6 yield the most realistic flow simulations (KGE up to 0.68), whereas other products with comparable rainfall-level statistics nonetheless generate biased streamflow responses.

Overall, the findings provide relevant information for improving NRT satellite precipitation algorithms and offer practical guidance for community stakeholders and practitioners in selecting suitable precipitation datasets for hydrological applications across specific basins, regions, or climatic zones.

 

Keywords: Hourly rainfall, Near-real-time satellite precipitation products, GPM-IMERG V07, GSMaP V06, GSMaP V07, GSMaP V08, PERSIANN-PDIR-NOW, Northern Tunisia

How to cite: Naceur, A., Dakhlaoui, H., Chirico, G. B., and Pelosi, A.: Benchmarking High-Resolution Quasi–Real-Time Satellite Precipitation Products over Northern Tunisia, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21546, https://doi.org/10.5194/egusphere-egu26-21546, 2026.

17:30–17:40
|
EGU26-5978
|
ECS
|
On-site presentation
Shivam Singh, Simon M. Papalexiou, Hebatallah M. Abdelmoaty, Tom Hartvigsen, and Antonios Mamalakis

High-resolution precipitation information is essential for hydrological impact assessment, flood risk analysis, and the characterization of extreme events, yet climate and weather model outputs are typically available at spatial resolutions too coarse to resolve fine-scale variability. Deep-learning-based statistical downscaling has emerged as an effective approach for bridging this resolution gap; however, models trained with pixel-wise objectives often suppress spatial variability and underestimate extremes. Adversarial learning has been shown to improve the realism of downscaled precipitation fields, particularly for extreme events, but the mechanisms through which adversarial objectives influence model behavior remain insufficiently understood. In this study, we investigate how adversarial training modifies the internal representation of precipitation extremes within a super-resolution downscaling framework, using explainable artificial intelligence (XAI) as a diagnostic tool. We employ a unified U-Net architecture trained under two optimization strategies: (i) a deterministic formulation using a pixel-wise mean-squared-error loss, and (ii) an adversarial formulation in which the same U-Net generator is trained jointly with a critic through an adversarial loss. This controlled design isolates the effects of adversarial learning while holding architecture and input information constant. XAI techniques are applied to analyze differences in spatial sensitivity and attribution patterns between the two training regimes, with particular emphasis on extreme precipitation events. Rather than serving as a performance metric, XAI is used to interrogate how adversarial training reshapes the model’s reliance on spatial structure and localized variability. This work highlights the potential of XAI to provide mechanistic insight into generative downscaling models and to support more transparent evaluation of adversarial approaches for extreme precipitation.

How to cite: Singh, S., Papalexiou, S. M., Abdelmoaty, H. M., Hartvigsen, T., and Mamalakis, A.: Understanding the Role of Adversarial Learning in Precipitation Super-Resolution Through Explainable AI, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-5978, https://doi.org/10.5194/egusphere-egu26-5978, 2026.

17:40–17:50
|
EGU26-755
|
ECS
|
On-site presentation
Suresh Devaraj and Pooja Shree Shanmugam

Mountainous areas and hill stations, traditionally considered cooler and climatically stable, are increasingly showing warming and shifts in rainfall patterns. Within the broader context of global climate change, this study investigates the presence of statistically quantifiable climatic shifts in the hill stations of South India by integrating observed IMD datasets with CMIP6 model simulations. An extensive bias-correction framework was employed to analyse and address the substantial systematic errors commonly associated with applying global climate models to complex terrain. The study combines established bias-correction techniques, including Quantile Mapping (QM) and Quantile Delta Mapping (QDM), with advanced machine learning algorithms such as CART, XGBoost, and a stacked ensemble model, enabling a more robust and comprehensive correction of model biases. XGBoost and the stacked model were the only approaches that demonstrated substantial improvements, showing reduced RMSE (0.55–0.76 for temperature and approximately 83–85 mm for precipitation), near-zero bias, and strong predictive skill (R² = 0.96 for temperature and NSE = 0.71 for precipitation). These models also achieved the lowest prediction uncertainty (RMSE) and the highest overall predictive performance (R²). The bias-corrected projections reveal pronounced warming across all the hill stations examined, aligning with recent evidence that traditionally cool regions are experiencing increased heat exposure. Rainfall forecasts indicate greater variability, suggesting a potential rise in both heavy rainfall events and prolonged dry spells. These findings strongly support the emerging understanding that the hill stations of South India are transitioning toward warmer and more climate-sensitive conditions.
The study provides high-resolution, bias-adjusted datasets essential for climate impact assessments, tourism planning, ecosystem management, and the development of targeted adaptation policies to safeguard these vulnerable high-elevation environments.
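As an illustrative sketch of the empirical Quantile Mapping (QM) baseline named in this abstract (not the authors' implementation, which also covers QDM and machine-learning correctors), each new model value is mapped through the model's historical CDF onto the corresponding observed quantile:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_new, n_q=100):
    """Empirical quantile mapping bias correction: find where each new model
    value sits in the model-historical distribution, then return the
    observed-historical value at that same quantile."""
    q = np.linspace(0, 1, n_q)
    model_q = np.quantile(model_hist, q)
    obs_q = np.quantile(obs_hist, q)
    ranks = np.interp(model_new, model_q, q)   # position in model climatology
    return np.interp(ranks, q, obs_q)          # read off observed quantile
```

For a model that is simply offset from observations, QM recovers the offset exactly; its known limitation for future projections (trend distortion) is what QDM addresses.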

How to cite: Devaraj, S. and Shanmugam, P. S.: Machine Learning–Enhanced Bias Correction of CMIP6 Data for Detecting Warming and Rainfall Shifts in Indian Hill Stations, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-755, https://doi.org/10.5194/egusphere-egu26-755, 2026.

17:50–18:00
|
EGU26-1833
|
ECS
|
On-site presentation
Huijie Li and Jie Chen

Accurate precipitation estimation is of vital importance for hydrological simulation and water resources management. However, large uncertainties exist in precipitation datasets over high-alpine regions due to scarce gauge observations and complex terrain. Data fusion technologies are widely applied to integrate the advantages of multi-source precipitation datasets, but the spatial information of precipitation is usually neglected. To overcome this limitation, this study developed a two-step machine learning framework for merging multi-source precipitation datasets based on a 2D convolutional neural network (CNN) incorporating neighboring spatial information, hereafter referred to as nCNN. The framework employs a hybrid classification-regression model to merge three gridded precipitation products (i.e., ERA5-Land, TPReanalysis, and GPM) and gauge observations over a high-alpine watershed in China during 2001–2019. Two merged precipitation datasets were generated, by CNN and by the proposed nCNN framework, respectively. The results show that the proposed framework effectively integrates the advantages of multiple datasets. The CNN- and nCNN-merged precipitation datasets have spatial distributions similar to the original products but differ in precipitation amounts. Precipitation amounts of the merged data are much closer to gauge observations than those of the original precipitation products. Both merged datasets outperform the original products in terms of statistical and categorical indices evaluated at 25 independent meteorological stations with complete records over 2001–2019. However, the nCNN-merged dataset outperforms the CNN-merged dataset in capturing precipitation amounts and detecting precipitation events, especially for moderate (5–10 mm/d) and heavy precipitation (>10 mm/d).
Compared with the CNN-merged result, the nCNN framework reduces the station-averaged root mean square error (RMSE) from 4.25 mm/d to 3.74 mm/d for moderate precipitation and from 9.43 mm/d to 8.57 mm/d for heavy precipitation, while increasing the station-averaged critical success index (CSI) by 0.03 and 0.04, respectively. Overall, this study highlights the importance of incorporating spatial information in precipitation merging, especially for high-alpine regions.

How to cite: Li, H. and Chen, J.: A two-step machine learning framework for incorporating spatial information into multi-source precipitation merging over high-alpine regions, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1833, https://doi.org/10.5194/egusphere-egu26-1833, 2026.

Posters on site: Tue, 5 May, 08:30–10:15 | Hall A

The posters scheduled for on-site presentation are only visible in the poster hall in Vienna. If authors uploaded their presentation files, these files are linked from the abstracts below.
Display time: Tue, 5 May, 08:30–12:30
Chairpersons: Giuseppe Mascaro, Alin Andrei Carsteanu
A.95
|
EGU26-246
|
ECS
Athanasios Serafeim, Andreas Langousis, Francesco Viola, Dario Pumo, Nunzio Romano, Paolo Nasta, and Roberto Deidda

Accurate and robust estimation of soil loss is essential in Mediterranean basins, where sediment transfer rates exhibit pronounced seasonality driven by high-intensity storm events. While the Revised Universal Soil Loss Equation (RUSLE) is the most widely used tool for assessing soil loss, its accuracy depends strongly on the rainfall erosivity factor (R-factor). This study evaluates the effect of different R-factor quantification approaches on soil loss estimates within the Tirso River basin, Sardinia's largest basin (> 3000 km²), which provides water resources for agriculture, hydropower, and domestic supply.

We applied the RUSLE method within a geographic information system (GIS) framework. The key factors for soil erodibility (K), topography (LS), land cover-management (C), and conservation practices (P) were derived from established sources, including the European Soil Data Center, a high-resolution Copernicus DEM, the Copernicus Global Land Service, and local authorities. To estimate the R-factor, we used high-resolution (10-minute) precipitation data from more than 40 rain gauges, applying two distinct storm identification approaches: that of Renard et al. (1997) and the recently developed approach of Serafeim et al. (2025). The soil loss estimates obtained from these high-resolution methods were then compared against results derived from a suite of widely applied empirical erosivity models calibrated in Mediterranean regions. This comparative analysis reveals how relying on generalized erosivity equations can distort soil erosion assessments at the basin level.
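For readers unfamiliar with RUSLE, the mean annual soil loss per cell is the product A = R·K·LS·C·P, so any error in R scales A directly; a minimal sketch with entirely hypothetical factor grids (not the study's data):

```python
import numpy as np

# Hypothetical per-cell factor grids (1 row x 2 cells), SI RUSLE convention.
R = np.array([[1200.0, 1500.0]])   # rainfall erosivity (MJ mm ha^-1 h^-1 yr^-1)
K = np.array([[0.030, 0.025]])     # soil erodibility (t ha h ha^-1 MJ^-1 mm^-1)
LS = np.array([[1.8, 2.4]])        # slope length-steepness factor (-)
C = np.array([[0.10, 0.05]])       # cover-management factor (-)
P = np.array([[1.0, 0.8]])         # support-practice factor (-)

A = R * K * LS * C * P             # mean annual soil loss (t ha^-1 yr^-1)
```

Because A is linear in R, a generalized erosivity equation that misestimates R by, say, 30% propagates the same 30% error into the basin-level soil loss, which is the sensitivity this comparison quantifies.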

Keywords
Soil erosion; RUSLE; rainfall erosivity uncertainty; high-resolution precipitation; sediment yield; watershed management

References

Renard, K.G., Foster, G.R., Weesies, G.A., McCool, D.K., Yoder, D.C., 1997. Predicting Soil Erosion by Water: A Guide to Conservation Planning With the Revised Universal Soil Loss Equation (RUSLE). U.S. Department of Agriculture, Washington, DC, USA.

Serafeim, A.V., R. Deidda, A. Langousis, et al., (2025) A Critical Review of Rainfall Erosivity Estimation Approaches: Comparative Analysis and Temporal Resolution Effects (To be submitted).

How to cite: Serafeim, A., Langousis, A., Viola, F., Pumo, D., Romano, N., Nasta, P., and Deidda, R.: Rainfall Erosivity Estimation Accuracy and Its Impact on Soil Loss Assessments: A Case Study in Southern Italy , EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-246, https://doi.org/10.5194/egusphere-egu26-246, 2026.

A.96
|
EGU26-12352
Alin Andrei Carsteanu, Stergios Emmanouil, Roberto Deidda, Anastasios Perdios, César Aguilar-Flores, and Andreas Langousis

Being the most widely used generators of multifractal measures, multiplicative cascade models have been extensively applied in the field of geophysics, and particularly in hydrometeorology. As in any modeling effort, solving the "inverse problem" is essential, and in this case, it can be described as finding the appropriate cascade model that generates a given multifractal measure. Direct measurement of a generated field (e.g., a rainfall field, or a time series thereof) results in an immediate decomposition into breakdown coefficients, producing a microcanonical (strictly normalized) multiplicative cascade over a limited range of scales. Yet, the canonical (expectation-normalized) phenomenology at underlying scales may generate statistical properties that are non-trivial to reproduce. The present work analyzes such properties for the simplified case of a one-dimensional, beta-lognormal discrete multiplicative cascade.
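A one-dimensional canonical beta-lognormal discrete cascade of the kind analyzed here can be simulated in a few lines; the following is an illustrative sketch (parameter values hypothetical, not the authors' code). The beta component zeroes weights with probability 1-p, and the lognormal component is scaled so that E[W] = 1, i.e. normalization holds in expectation rather than strictly:

```python
import numpy as np

def beta_lognormal_cascade(levels, p=0.8, sigma=0.5, seed=0):
    """One realization of a 1-D canonical beta-lognormal cascade, branching 2.
    Each weight is 0 with probability 1-p (beta part); otherwise it is
    lognormal with unit mean, divided by p so the overall E[W] stays 1."""
    rng = np.random.default_rng(seed)
    field = np.array([1.0])
    for _ in range(levels):
        field = np.repeat(field, 2)                       # split each interval
        alive = rng.random(field.size) < p                # beta (on/off) part
        logn = np.exp(sigma * rng.standard_normal(field.size) - sigma**2 / 2)
        field = field * alive * logn / p                  # canonical weights
    return field
```

In contrast to the microcanonical case obtained from breakdown coefficients, the mass of each parent interval is conserved here only in expectation, which is precisely the distinction the abstract examines.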

How to cite: Carsteanu, A. A., Emmanouil, S., Deidda, R., Perdios, A., Aguilar-Flores, C., and Langousis, A.: On the limitations of interchangeability between canonical and microcanonical multiplicative cascade models, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-12352, https://doi.org/10.5194/egusphere-egu26-12352, 2026.

A.97
|
EGU26-4929
Nikolaos Malamos, Theano Iliopoulou, Panagiotis D. Oikonomou, and Demetris Koutsoyiannis

Rainfall regionalization refers to a broader spatial modeling process that transforms point measurements into reliable continuous fields, incorporating additional information. Yet the fidelity of the resulting continuous surface is strongly influenced by the quality of the underlying data, as well as by the density and spatial configuration of the observational network. This contribution addresses the question of how reliable rainfall data are when evaluated against a regionalized rainfall surface, by extending the Bilinear Surface Smoothing with Explanatory variable (BSSE) framework to explicitly incorporate Bayesian credible intervals.

The proposed formulation exploits the linear smoother representation of BSSE to derive the posterior covariance of the fitted bilinear surface as a function of residual variance and effective degrees of freedom. Credible intervals are obtained analytically, allowing uncertainty in variance estimation to be accounted for without resampling. Beyond quantifying uncertainty in the spatial estimates, the credible intervals provide a diagnostic measure of data reliability relative to the regionalized signal.
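For a generic linear smoother, the quantities described above can be sketched as follows (the ridge-type planar fit below is a hypothetical stand-in for the BSSE bilinear-surface smoother matrix, not the BSSE implementation):

```python
import numpy as np
from statistics import NormalDist

def smoother_credible_intervals(S, y, level=0.95):
    """Pointwise credible intervals for any linear smoother y_hat = S @ y.
    The residual variance is estimated with the effective degrees of
    freedom df = trace(S); the half-width at point i is
    z * sigma_hat * sqrt((S @ S.T)[i, i]), so no resampling is needed."""
    y = np.asarray(y, float)
    y_hat = S @ y
    df = np.trace(S)
    sigma2 = np.sum((y - y_hat) ** 2) / (len(y) - df)
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    half = z * np.sqrt(sigma2 * np.diag(S @ S.T))
    return y_hat, y_hat - half, y_hat + half

def planar_smoother_matrix(X, lam=1e-3):
    """Hypothetical stand-in smoother: a ridge-regularized linear fit on
    the columns of X (e.g. intercept, coordinates, elevation), playing
    the role of the BSSE smoother matrix."""
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    return X @ np.linalg.solve(XtX, X.T)
```

Observations falling outside the returned 95% interval are the low-support stations the abstract flags.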

The extended framework is demonstrated through the regionalization of average and extreme rainfall characteristics across Greece, using ground-based observations together with elevation as explanatory variable. Stations falling outside the 95% credible interval are identified and examined, revealing that such cases frequently occur in areas with sparse gauge coverage or complex rainfall regimes. These locations highlight regions where the observational network provides limited support to the regionalized surface, leading to increased uncertainty and reduced confidence in the available data.

The analysis further reveals a strong dependence of uncertainty on temporal aggregation scale, with markedly wider credible intervals at sub-daily extremes, where station density is lowest. The BSSE methodology is implemented in a fully reproducible workflow, facilitating straightforward application of the proposed uncertainty-aware regionalization framework to other hydro-climatic datasets.

How to cite: Malamos, N., Iliopoulou, T., Oikonomou, P. D., and Koutsoyiannis, D.: How Reliable are Rainfall Observations? Assessing Credible Intervals with Bilinear Surface Smoothing, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-4929, https://doi.org/10.5194/egusphere-egu26-4929, 2026.

A.98
|
EGU26-9981
|
ECS
Jiseong Lim, Yong Oh Lee, and Dongkyun Kim

In the field of precipitation nowcasting, the application and advancement of deep learning techniques have enabled resource-efficient predictions. In particular, U-Net variants and attention-based architectures achieve computational reduction by extracting features with wide receptive fields through downsampling and upsampling processes. However, upsampling methods can induce checkerboard artifacts when spatially adjacent pixels in high-resolution feature maps are computed from different low-resolution pixels, resulting in overlooked dependencies compared to those derived from identical pixels. This leads to discrepancies with the ground truth patterns, ultimately degrading the performance of prediction models. This paper introduces upsampling techniques known to prevent checkerboard artifacts in the super-resolution domain into precipitation prediction models, aiming to improve performance while minimizing increases in model complexity. At the upsampling stage, we incorporate sub-pixel convolution or decouple the upsampling and channel reduction processes, comparing performance against models using transposed convolution, the standard upsampling approach in U-Net. Additionally, the Checkerboard Artifacts Score (CAS) is proposed to quantify the degree of checkerboard artifacts in images, which is applied to each model for analysis. CAS is defined as the ratio of errors between pixels forming artifact boundaries to errors between all adjacent pixels. In experiments, sub-pixel convolution and the combination of nearest neighbor or bilinear interpolation with subsequent convolution record lower CAS values than transposed convolution, while also demonstrating improved performance across metrics including NSE, CSI, and RMSE. Notably, sub-pixel convolution exhibits particularly strong performance, with balanced POD and FAR, while the bilinear approach generates spatially natural patterns with competitive performance. Analysis of the experimental results suggests that the reduction of checkerboard artifacts contributes to the performance improvement. Furthermore, this work highlights the importance of upsampling method selection in video prediction tasks and provides practical guidance for model design.
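Under one plausible reading of the CAS definition above, the score could be computed as follows (the assumption here is that "artifact boundary" pairs are adjacent-pixel pairs straddling multiples of the upsampling stride; the paper's exact definition may differ):

```python
import numpy as np

def checkerboard_artifacts_score(img, stride):
    """Assumed CAS: mean absolute difference across pixel pairs that
    straddle the upsampling-block boundaries (every `stride` pixels),
    divided by the mean absolute difference across all adjacent
    pixel pairs. A checkerboard-free image should give CAS near 1."""
    img = np.asarray(img, float)
    dh = np.abs(np.diff(img, axis=0))   # vertical neighbour differences
    dw = np.abs(np.diff(img, axis=1))   # horizontal neighbour differences
    rows = np.arange(img.shape[0] - 1)
    cols = np.arange(img.shape[1] - 1)
    boundary = np.concatenate([
        dh[(rows + 1) % stride == 0].ravel(),    # pairs crossing a row boundary
        dw[:, (cols + 1) % stride == 0].ravel()  # pairs crossing a column boundary
    ])
    overall = np.concatenate([dh.ravel(), dw.ravel()])
    return boundary.mean() / overall.mean()
```

For a smooth field the score stays at 1, while values well above 1 flag checkerboard patterns aligned with the upsampling factor.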

How to cite: Lim, J., Lee, Y. O., and Kim, D.: Mitigating Checkerboard Artifacts for Enhanced Precipitation Nowcasting: A Comparison of Upsampling Techniques, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-9981, https://doi.org/10.5194/egusphere-egu26-9981, 2026.

A.99
|
EGU26-10004
|
ECS
Petr Vohnicky, Eleonora Dallan, Francesco Marra, and Marco Borga

Convection-permitting models (CPMs) better represent sub-daily precipitation than coarser models, but they still exhibit substantial biases in rare, low-probability extremes, with elevation-dependent patterns. In addition, the relatively short simulation periods, typically around 10 years, limit the robust estimation of rare events. This constrains the direct use of raw CPM output for applications that depend on extreme-value statistics. To address these limitations, this study introduces a hybrid bias-correction framework for CPM precipitation at hourly resolution.

The proposed method combines non-parametric and parametric components within an elevation-based pooling strategy. Stations and co-located CPM grid cells are grouped into elevation bands, and a common, monthly varying correction is estimated for each band to represent both spatial and seasonal variability. Low-to-moderate precipitation intensities are corrected using robust empirical quantile mapping. The upper tail is adjusted using an optimized Weibull tail model with left censoring, inspired by the Simplified Metastatistical Extreme Value approach. The optimal threshold is searched within the 0.8 to 0.97 quantile range using an adjusted Weibull tail test.
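A heavily simplified stand-in for the two components, empirical quantile mapping below a split quantile and a censored Weibull tail above it, might look like this (the monthly and elevation-band pooling, the adjusted Weibull tail test, and the SMEV-specific estimators are omitted; the fitting choices below are assumptions):

```python
import numpy as np

def weibull_fit_censored(x, q_censor=0.9):
    """Least-squares fit of a Weibull tail (scale C, shape w) to the
    values above the q_censor empirical quantile, via the linearized
    survival function: ln(-ln(1 - F)) = w*ln(x) - w*ln(C)."""
    x = np.sort(x[x > 0])
    F = (np.arange(1, x.size + 1) - 0.5) / x.size   # plotting positions
    keep = F >= q_censor
    yy = np.log(-np.log1p(-F[keep]))
    A = np.column_stack([np.log(x[keep]), np.ones(keep.sum())])
    w, b = np.linalg.lstsq(A, yy, rcond=None)[0]
    return np.exp(-b / w), w                         # scale, shape

def hybrid_correction(model, obs, q_split=0.9):
    """Map model values to observed values: empirical quantile mapping
    below the q_split quantile, Weibull-tail quantile mapping above."""
    qs = np.linspace(0.01, q_split, 90)
    mq, oq = np.quantile(model, qs), np.quantile(obs, qs)
    Cm, wm = weibull_fit_censored(model, q_split)
    Co, wo = weibull_fit_censored(obs, q_split)
    out = np.interp(model, mq, oq)
    tail = model > np.quantile(model, q_split)
    # transfer the model's tail exceedance probability to the observed tail
    p = np.exp(-(model[tail] / Cm) ** wm)
    out[tail] = Co * (-np.log(p)) ** (1.0 / wo)
    return out
```

With a purely multiplicative model bias, the tail mapping reduces to a ratio of the fitted Weibull scales, which is the behaviour one would want from the parametric component.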

Model performance is evaluated using both extreme-value and distributional metrics derived from observations, raw CPM output, and bias-corrected series. Extreme behavior is assessed through 20-year return levels of 1-hour and 24-hour precipitation. Distributional performance is quantified using mean absolute bias computed over empirical quantiles, allowing improvements to be tracked across the full range of precipitation intensities.
Robustness is examined through a structured validation framework. Spatial robustness is tested by evaluating the elevation-based pooling approach using k-fold schemes in which subsets of stations are withheld from calibration. Temporal robustness is assessed through repeated cross-validation on the 10-year CPM slices, with six years randomly assigned to calibration and four years to validation.

Preliminary results show a reduction in mean absolute bias after correction, largely driven by an improved representation of the wet-hour ratio. When a minimum rainfall threshold is applied to the raw CPM data, the bias becomes comparable to that of the bias-corrected output, indicating that drizzle remains a key issue. For extremes, biases in 1-hour 20-year return levels generally decrease but are not fully eliminated, reflecting the large uncertainty in the distribution upper tail. For 24-hour 20-year return levels, results are mixed: biases are reduced for some CPMs but introduced or amplified for others, highlighting model-specific differences in the spatial characteristics of storm structure and organization. The validation indicates that the elevation-based pooling yields spatially robust corrections for sufficiently small, climatically homogeneous domains, while the assessment of temporal robustness remains inconclusive due to the limited length of the available 10-year CPM simulations.

How to cite: Vohnicky, P., Dallan, E., Marra, F., and Borga, M.: A Hybrid Bias-Correction Framework for Extreme Precipitation in Convection-Permitting Models, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-10004, https://doi.org/10.5194/egusphere-egu26-10004, 2026.

A.100
|
EGU26-1995
Xenofon Soulis, Karampetsa Evaggelia, Konstantinos Soulis, Stergia Palli Gravani, Evaggelos Nikitakis, and Dionissios Kalivas

Accurate meteorological forcing is a prerequisite for reliable hydrological modelling, particularly in regions with complex topography like Greece. Global reanalysis datasets offer continuous coverage but often fail to capture local orographic effects when downscaled using standard, constant lapse rates. This study investigates the spatial and temporal variability of precipitation and temperature gradients across Greece and evaluates their application in calibrating reanalysis data.

We utilized a hybrid dataset comprising long-term records from 140 meteorological stations and a dense network of 777 stations for the year 2023. To process these data, we developed a specialized Python-based algorithm to estimate lapse rates and the coefficient of determination (R²) dynamically across the domain. The methodology utilizes a "moving-window" approach, where the window dimensions and moving step were first optimized by maximizing R² to ensure statistical robustness. Using these optimized parameters, we estimated the lapse rate and R² at each grid point of the study area. Subsequently, spatial interpolations were generated to create continuous maps of the vertical gradients and their statistical reliability.
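The regression step inside a single window can be sketched as follows (a square window in degrees and the minimum-station guard are assumptions; the study's optimized window geometry and step may differ):

```python
import numpy as np

def window_lapse_rate(lon, lat, elev, value, center, half_width):
    """Local vertical gradient inside one moving window: stations within
    +/- half_width degrees of `center` are linearly regressed against
    elevation. Returns (lapse rate per metre, R^2, n_stations). The
    window size itself would be tuned upstream by maximizing R^2."""
    cx, cy = center
    inside = (np.abs(lon - cx) <= half_width) & (np.abs(lat - cy) <= half_width)
    z, v = elev[inside], value[inside]
    if z.size < 3 or np.ptp(z) == 0:
        return np.nan, np.nan, int(z.size)
    slope, intercept = np.polyfit(z, v, 1)
    resid = v - (slope * z + intercept)
    r2 = 1 - resid.var() / v.var()
    return slope, r2, int(z.size)
```

Sweeping `center` over a grid and mapping the returned slope and R² gives exactly the paired gradient/reliability surfaces described above.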

The resulting spatial patterns were analyzed in relation to the country’s distinct geomorphology, including the complex coastline, the orientation of major mountain ranges (Pindos), and the insular environments. The analysis revealed that while temperature lapse rates exhibit high spatial coherence and predictability, precipitation gradients are highly sensitive to local topographic features and continentality.

These empirically derived, spatially explicit lapse rates were applied to downscale and bias-correct AgERA5 temperature and precipitation fields for the DT-Agro Digital Twin. The proposed methodology significantly reduced biases in mountainous and coastal zones compared to standard interpolation methods, demonstrating that geomorphologically informed, dynamic gradient estimation is critical for effective model calibration in data-scarce, complex terrains.

How to cite: Soulis, X., Evaggelia, K., Soulis, K., Palli Gravani, S., Nikitakis, E., and Kalivas, D.: Investigation of the Spatial and Temporal Variability of the Precipitation and Temperature Lapse Rates in Greece and its Application in Evaluation and Calibration of Metanalysis Meteorological Data, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1995, https://doi.org/10.5194/egusphere-egu26-1995, 2026.

A.101
|
EGU26-3612
|
ECS
Beatrice Carlini, Simon Michael Papalexiou, Gianluca Botter, and Francesco Marra

Predicting the impacts of climate change on hydroclimatic processes in small mountainous catchments requires long and realistic high-temporal-resolution simulations of key environmental variables, particularly precipitation, under future scenarios. Stochastic models provide an effective way to generate multi-decadal projections, but existing approaches struggle to reproduce the alternation of weather systems and sub-hourly extremes. We propose a stochastic framework that accurately describes both ordinary and extreme precipitation events, explicitly links intermittency with event inter-arrival characteristics, and represents different storm types (e.g., convective and stratiform). Our approach combines CoSMoS, which generates stochastic time series preserving probability distributions and correlation structures, with concepts from TENAX, which relates the occurrence frequency and the probability distribution of extreme precipitation to near-surface temperature. Climate change impacts are incorporated through projected changes in temperature distributions and large-scale weather patterns from regional climate models. The method is tested on the Rio Valfredda, a small Alpine catchment in the eastern Italian Alps. The sub-hourly resolution of the framework allows explicit representation of convective precipitation, a key driver of extreme events in Alpine environments.
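The temperature link can be illustrated with a toy TENAX-style sampler (the Weibull family, the parameter values, and the 7% per °C Clausius-Clapeyron-like rate are placeholders, not the calibrated model):

```python
import numpy as np

def sample_event_intensity(temps, shape=0.8, scale0=5.0, cc_rate=0.07, rng=None):
    """Illustrative temperature-conditioned intensity sampler: event peak
    intensities follow a Weibull distribution whose scale grows
    quasi-exponentially with near-surface temperature (here 7% per
    degree C, a Clausius-Clapeyron-like placeholder rate)."""
    rng = np.random.default_rng(rng)
    scale = scale0 * (1.0 + cc_rate) ** np.asarray(temps, float)
    return scale * rng.weibull(shape, size=np.size(temps))
```

Shifting the input temperature distribution, as projected by regional climate models, then propagates directly into heavier simulated intensities, which is the mechanism the framework exploits for climate change scenarios.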

How to cite: Carlini, B., Papalexiou, S. M., Botter, G., and Marra, F.: A stochastic approach for the continuous simulation of ordinary and extreme precipitation in Alpine environments, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3612, https://doi.org/10.5194/egusphere-egu26-3612, 2026.

A.102
|
EGU26-6160
Soobin Cho, Sangbeom Jang, Jiyeon Park, and Ju-young Shin

Recent climate change has been linked to more frequent and more intense short-timescale rainfall extremes, increasing exposure to urban pluvial flooding. Because many urban catchments respond within minutes, rainfall information at sub-hourly resolution is often needed for hydrologic analyses. An AI-driven temporal downscaling approach is introduced here to derive 10-minute rainfall series from hourly observations using a conditional diffusion generative model. Rain-gauge observations at Seoul Gwanaksan (#1917), operated by the Korea Forest Service, were used. The record covers the years 2015 through 2024. Paired hourly totals and observed 10-minute series were prepared to examine whether sub-hourly rainfall sequences can be reconstructed from hourly totals while preserving realistic within-hour variability. The feasibility of loss function variation was investigated. The experiments indicate that incorporating distributional and temporal statistics into the objective function can enhance the realism of sub-hourly rainfall structure under hourly constraints. The proposed framework is expected to provide more reliable 10-minute rainfall inputs for urban hydrologic analyses and pluvial-flood–relevant applications in rapid-response catchments.

How to cite: Cho, S., Jang, S., Park, J., and Shin, J.: Temporal Downscaling Using Deep Learning for Sub-hourly Time Series, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6160, https://doi.org/10.5194/egusphere-egu26-6160, 2026.

A.103
|
EGU26-6400
Zied Ben Bouallègue, Ana Prieto-Nemesio, Angela Iza Wong, Florian Pinault, Marlies van der Schee, and Umberto Modigliani

SEEPS4ALL [1] combines a precipitation dataset in Zarr format with a set of verification Jupyter Notebooks for the evaluation of daily precipitation forecasts over Europe. The dataset is primarily based on daily in-situ observations from the European Climate Assessment & Dataset project (www.ecad.eu). Climate statistics are derived from long time series at each station location to enable the computation of meaningful verification metrics. For example, the Stable and Equitable Error in Probability Space (SEEPS [2]) is a score specifically designed to assess the performance of precipitation forecasts, and it requires climate statistics.

The verification notebooks showcase the computation not only of SEEPS but also of the diagonal score (the equivalent of SEEPS for probabilistic forecasts) and of the Brier score as a function of climate percentiles. Finally, when comparing a gridded forecast with a point observation, one can account for observation representativeness uncertainty by dressing the forecast with pre-defined scale-dependent parametric distributions [3]. In a nutshell, SEEPS4ALL helps promote the benchmarking of daily precipitation forecasts against in-situ observations over Europe.
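As a concrete example of the percentile-conditioned verification the notebooks showcase, a Brier score against station climatological thresholds might be computed as follows (a single-station simplification; the function and its signature are illustrative, not the SEEPS4ALL API):

```python
import numpy as np

def brier_vs_climate_percentile(ens, obs, climo,
                                percentiles=(0.5, 0.75, 0.9, 0.95)):
    """Brier score of ensemble precipitation forecasts for exceedance of
    climatological percentiles derived from a long station record.
    ens: (n_members, n_cases); obs: (n_cases,); climo: historical
    daily values used to set the thresholds."""
    out = {}
    for p in percentiles:
        thr = np.quantile(climo, p)              # station climate threshold
        prob = (ens > thr).mean(axis=0)          # forecast exceedance probability
        event = (obs > thr).astype(float)        # observed binary outcome
        out[p] = np.mean((prob - event) ** 2)
    return out
```

A perfectly sharp and accurate ensemble scores zero at every percentile, and the degradation with increasing percentile mirrors the climate-conditioned verification curves the notebooks produce.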

 

[1] Ben Bouallègue Z, A. Prieto-Nemesio, A.I. Wong, F. Pinault, M. van der Schee, and U. Modigliani (2025), SEEPS4ALL: an open dataset for the verification of daily precipitation forecasts using station climate statistics. Earth System Science Data, https://doi.org/10.5194/essd-2025-553

[2] Rodwell, M.J., D.S. Richardson, T.D. Hewson and T. Haiden (2010), A new equitable score suitable for verifying precipitation in numerical weather prediction. Q.J.R. Meteorol. Soc., https://doi.org/10.1002/qj.656

[3] Ben Bouallègue, Z., T. Haiden, N. J. Weber, T. M. Hamill, and D. S. Richardson (2020), Accounting for Representativeness in the Verification of Ensemble Precipitation Forecasts. Mon. Wea. Rev., https://doi.org/10.1175/MWR-D-19-0323.1

How to cite: Ben Bouallègue, Z., Prieto-Nemesio, A., Wong, A. I., Pinault, F., van der Schee, M., and Modigliani, U.: SEEPS4ALL: all you need to compute SEEPS (and more) when evaluating daily precipitation forecasts over Europe, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6400, https://doi.org/10.5194/egusphere-egu26-6400, 2026.

A.104
|
EGU26-7647
|
ECS
Midhuna Thayyil Mandodi, Caroline Arnold, Keil Paul, David Greenberg, Beate Geyer, and Stefan Hagemann

The availability of high-temporal-resolution precipitation data is essential for understanding sub-hourly hydrometeorological processes, extreme rainfall, and their impacts on hydrology and urban flooding. This need is amplified by climate change, under which precipitation extremes are expected to intensify, calling for a solid data base in the form of an ensemble of downscaled climate scenarios. Storing meteorological fields at high temporal and spatial resolution, however, is very resource-demanding. The standard EURO-CORDEX dataset includes hourly precipitation data, yet impact modellers need higher temporal resolution for extreme events. In this study, we present a deep-learning-based framework to temporally downscale hourly ICON precipitation to 10-minute resolution using a convolutional U-Net architecture.

The source data consist of two input images corresponding to 1-hour accumulated precipitation fields. The target data are 10-minute precipitation fields derived from ICON simulations. The model is trained and evaluated over the following periods: 1980–1994 for training, 1995–1997 for validation, and 1998–1999 for testing. The model learns a mapping from the source data to the corresponding sequences of 10-minute precipitation. The U‑Net is trained to reconstruct the temporal distribution of rainfall within each hour while conserving the total hourly precipitation amount. We test the enforcement of conservation of total hourly precipitation with different techniques: a penalty term in the loss function, a constraint layer embedded into the architecture and conservation through a post-processing routine.
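The post-processing variant of the conservation constraint can be sketched as a per-pixel rescaling (a NumPy stand-in for the network layer; the six-field split and the uniform fallback for all-zero predictions are assumptions):

```python
import numpy as np

def conserve_hourly_total(pred_10min, hourly_total, eps=1e-8):
    """Rescale six predicted 10-minute fields per pixel so that their
    sum reproduces the hourly accumulation exactly.
    pred_10min: (6, H, W) nonnegative fields; hourly_total: (H, W)."""
    pred = np.maximum(pred_10min, 0.0)
    s = pred.sum(axis=0)
    # where the network predicts no rain at all, fall back to a uniform split
    uniform = np.broadcast_to(hourly_total / 6.0, pred.shape)
    scaled = pred * (hourly_total / (s + eps))
    return np.where(s > eps, scaled, uniform)
```

The constraint-layer variant would apply the same normalization inside the network so that the conservation error also vanishes during training, while the loss-penalty variant only discourages, rather than enforces, violations.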

Model performance is evaluated using multiple statistical metrics to assess both the distribution and magnitude of precipitation. The histograms of predicted and target 10‑minute precipitation indicate that the model reproduces the marginal distribution well, while the scatter plot of total predicted versus total target precipitation summed over all grid cells and time steps shows that the model closely preserves the overall accumulated rainfall. Results also demonstrate that the U‑Net with the conservation enforcing constraint layer successfully reproduces sub‑hourly precipitation variability and captures the timing and intensity of short‑duration rainfall events more accurately than simple temporal disaggregation approaches.

This work highlights the potential of machine learning for efficient temporal downscaling of regional climate model outputs. The ultimate goal is to provide a tool with which impact modellers can produce high-resolution precipitation data on demand, and the framework has the potential to support applications in future warming scenarios. Since interested researchers can run the temporal downscaling model for their period of interest, there is no need for large storage resources to hold precipitation datasets at very high temporal resolution.

 

How to cite: Thayyil Mandodi, M., Arnold, C., Paul, K., Greenberg, D., Geyer, B., and Hagemann, S.: Temporal Downscaling of ICON Precipitation from Hourly to 10‑Minute Resolution Using a Physically Constrained U-NET, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7647, https://doi.org/10.5194/egusphere-egu26-7647, 2026.

A.105
|
EGU26-9227
|
ECS
Hsiang Hsu and Hsing-Jui Wang

The tails of flood distributions provide key insights into the occurrence probability of extreme floods, which is commonly quantified by the shape parameter of an empirical Generalized Extreme Value (GEV) distribution fitted to annual maximum flood series. Despite the usefulness of fitting empirical GEV distributions to observations, considerable uncertainty remains in the estimated shape parameter across different parameter estimation approaches. In addition, most existing studies focus on regional scales, and a global-scale analysis is required to investigate the roles of varying climatic conditions and data quality in shaping extreme flood occurrence.

In this study, we first apply the L-moment method, an approach known for its robustness in extreme value statistics, to conduct a global analysis of extreme flood occurrence based on optimized GEV distributions. The Anderson–Darling test is used to evaluate the goodness-of-fit. We then integrate additional hydrological information, represented by up to 20 descriptors, into a supervised neural network (NN) model to construct a physically informed, data-driven framework for improving the estimation of GEV distribution parameters. A global-scale dataset comprising more than 6,600 river gauges, with record lengths ranging from 20 to 200 years, is used in this analysis.
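The L-moment step is standard; a minimal sketch using sample probability-weighted moments and Hosking's (1990) rational approximation for the GEV parameters reads:

```python
import math
import numpy as np

def sample_lmoments(x):
    """First two sample L-moments and the L-skewness, via unbiased
    probability-weighted moments of the sorted sample."""
    x = np.sort(np.asarray(x, float))
    n = x.size
    i = np.arange(n)
    b0 = x.mean()
    b1 = np.sum(i * x) / (n * (n - 1))
    b2 = np.sum(i * (i - 1) * x) / (n * (n - 1) * (n - 2))
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2            # lambda_1, lambda_2, tau_3

def gev_from_lmoments(l1, l2, t3):
    """Hosking's rational approximation for GEV parameters from
    L-moments; returns (location mu, scale sigma, shape k), with
    k > 0 meaning an upper-bounded tail in Hosking's convention."""
    c = 2.0 / (3.0 + t3) - math.log(2) / math.log(3)
    k = 7.8590 * c + 2.9554 * c * c
    g = math.gamma(1 + k)
    sigma = l2 * k / ((1 - 2.0 ** (-k)) * g)
    mu = l1 - sigma * (1 - g) / k
    return mu, sigma, k
```

The study's NN component would refine, rather than replace, these purely statistical shape estimates using the additional hydrological descriptors.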

Preliminary results indicate that the proposed framework can achieve flood distribution tail estimates comparable to those obtained from purely statistical methods (i.e., L-moment estimates), while providing additional physical insights into the estimation process. Overall, this study highlights the potential of integrating multi-dimensional common hydrological descriptors within a data-driven framework to support large-scale and consistent characterization of global flood extremeness.

How to cite: Hsu, H. and Wang, H.-J.: Characterizing Global Flood Extremeness Through Physically Informed Neural Networks, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-9227, https://doi.org/10.5194/egusphere-egu26-9227, 2026.

A.106
|
EGU26-13895
Mojolaoluwa Daramola, Conor Murphy, and Peter Thorne

Reliable high-resolution precipitation datasets are essential for climate analysis, hydrological modelling, and the assessment of climate extremes. Many existing gridded rainfall products are limited by national boundaries, making it difficult to carry out consistent regional-scale climate and hydrological assessments across the island of Ireland. Here, we present a new daily gridded rainfall product developed using a homogeneous methodology across the entire island of Ireland. The dataset covers the period 1980-2020 and is based on rain gauge observations from Met Éireann and the UK Met Office. The gridded product is generated using a high-resolution climatological interpolation framework based on inverse distance weighting (IDW) regression, with elevation included as a covariate. This approach allows the dataset to capture fine-scale spatial variability associated with orography, while preserving daily variability and extreme rainfall events. The daily grids are first produced at 1 km x 1 km resolution and then resampled to a common 0.1° x 0.1° resolution for comparison with other gridded datasets. To assess the quality of the product, we first validate the gridded rainfall estimates against observations from crowd-sourced citizen rain gauges from the Weather Observations Website, providing an independent evaluation of the dataset. We then evaluate the dataset through grid-to-grid comparisons with the Met Éireann daily grids and other widely used regional products such as E-OBS and Multi-Source Weighted-Ensemble Precipitation (MSWEP), focusing on annual and seasonal rainfall patterns, spatial biases, and selected storm events. The new dataset provides a spatially consistent representation of daily rainfall across the island of Ireland and offers a valuable resource for climate variability studies, extreme event analysis, and hydrological applications.
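The interpolation idea, regression on elevation with IDW-spread residuals, can be sketched as follows (a minimal stand-in, not the authors' climatological framework; coordinates are treated as planar for simplicity):

```python
import numpy as np

def idw_regression(xy_sta, elev_sta, rain_sta, xy_grid, elev_grid, power=2.0):
    """Rainfall-elevation regression fitted on the gauges, with the
    regression residuals spread by inverse distance weighting and
    added back at each grid point.
    xy_sta: (n, 2) station coords; xy_grid: (m, 2) target coords."""
    A = np.column_stack([np.ones(len(elev_sta)), elev_sta])
    coef, *_ = np.linalg.lstsq(A, rain_sta, rcond=None)
    resid = rain_sta - A @ coef
    d2 = ((xy_grid[:, None, :] - xy_sta[None, :, :]) ** 2).sum(-1)
    w = 1.0 / np.maximum(d2, 1e-12) ** (power / 2)
    w /= w.sum(axis=1, keepdims=True)
    return coef[0] + coef[1] * elev_grid + w @ resid
```

Because the residual weights sum to one and collapse onto coincident stations, the scheme reproduces gauge values exactly while letting elevation drive the orographic signal between gauges.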

How to cite: Daramola, M., Murphy, C., and Thorne, P.: A new daily gridded precipitation dataset for the island of Ireland, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-13895, https://doi.org/10.5194/egusphere-egu26-13895, 2026.

A.107
|
EGU26-14665
|
ECS
Can generative AI models downscale very rare precipitation events? An illustration of the 2020 south of France flash flood.
(withdrawn)
Pierre Chapel, Kishanthan Kingston, Olivier Boucher, Freddy Bouchet, Kazem Ardaneh, and Redouane Lguensat
A.108
|
EGU26-15630
|
ECS
Hao Wu

Remote sensing technology is essential for real-time monitoring of spatiotemporal precipitation patterns. However, inherent limitations of indirect observation lead to significant errors in satellite-based precipitation products. Most existing correction methods depend on real-time ground observations, which limits their applicability for high-precision, operational use. To address this, we propose the Terrain-aware Two-stage Correction Framework (TTCF-NRT), a two-stage synergistic correction scheme for the Global Satellite Mapping of Precipitation Near-Real-Time product (GSMaP-NRT), with the goal of systematically enhancing the accuracy of its daily-scale estimates worldwide. In the first stage (historical modelling and real-time correction), we jointly utilize historical GSMaP-NRT and CPC merged precipitation data to train an improved Cumulative Distribution Function (CDF) matching model. Once trained, the model operates independently, requiring only real-time GSMaP-NRT data to perform rapid correction, without concurrent CPC or ground-based inputs. In the second stage (near-real-time spatial refinement), we integrate the contemporaneous CPC product as a spatial reference into the first-stage corrected output. An improved Convolutional Neural Network (CNN) model, trained and validated through rigorous cross-validation, is then applied for spatial enhancement. This step significantly improves the characterization of the spatial distribution of precipitation, especially over complex terrain. Using the TTCF-NRT framework, we produced a daily corrected precipitation dataset for global land areas from 2020 to 2024 at a 0.5° spatial resolution.
Comprehensive evaluation shows that: (1) globally, the TTCF-NRT product significantly outperforms both the original GSMaP-NRT and its gauge-adjusted version (GSMaP-Gauge-NRT) in terms of Root Mean Square Error (RMSE) and Relative Bias (BIAS); (2) regionally, TTCF-NRT excels over the Continental United States (CONUS) and Western Europe. It also demonstrates consistent improvement at independent validation sites across China, though performance can still be enhanced, partly due to the limited spatial representativeness of the training data. In summary, the TTCF-NRT framework effectively combines historically calibrated real-time CDF correction with CNN-driven near-real-time spatial fusion. It offers an efficient, robust, and operationally viable correction solution for GSMaP-NRT that does not rely on real-time external data. This approach substantially improves the accuracy and practical utility of satellite-derived precipitation estimates on a global scale, particularly in regions with complex topography.

How to cite: Wu, H.: A Terrain-Aware Two-Stage Correction Framework for Near-Real-Time Improvement of GSMaP-NRT Precipitation Estimates, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-15630, https://doi.org/10.5194/egusphere-egu26-15630, 2026.

A.109
|
EGU26-16177
|
ECS
Hari Prakash, Pramod Soni, and Kamlesh Kumar Pandey

Hari Prakash (a,*), Pramod Soni (b), and K. K. Pandey (c)

(a) Research Scholar, Department of Civil Engineering, IIT (BHU), Varanasi (U.P.), 221005, India. Email: hariprakash.rs.civ23@iitbhu.ac.in

(b) Assistant Professor, Department of Civil Engineering, IIT (BHU), Varanasi (U.P.), 221005, India. Email: pramod.civ@iitbhu.ac.in

(c) Associate Professor, Department of Civil Engineering, IIT (BHU), Varanasi (U.P.), 221005, India. Email: kkp.civ@iitbhu.ac.in

* Corresponding author: hariprakash.rs.civ23@iitbhu.ac.in

Accurate estimation of flood peaks in ungauged and data-scarce basins critically depends on the accuracy of rainfall inputs and remains challenging due to the limited availability of ground observations and inherent uncertainties in satellite precipitation data. Although datasets such as CHIRPS and GPM IMERG provide high-resolution rainfall information, their direct application in hydrological modelling is often constrained by regional bias, spatial scale mismatch, and temporal inconsistencies. Moreover, a physically consistent representation of large-scale atmospheric variables is rarely incorporated in conventional bias-correction approaches. To address these limitations, this study proposes an integrated and scalable framework that combines satellite precipitation, ERA5 reanalysis variables, machine learning, and process-based hydrological modelling for flood peak estimation in ungauged basins. The framework is demonstrated over the Varuna River Basin (Varanasi, India). To resolve the spatial scale mismatch, ERA5 atmospheric variables are spatially aggregated within an approximately 30 km buffer around each CHIRPS grid point prior to their use as predictors. A time-aware artificial neural network (ANN) is then developed to integrate multi-pixel GPM IMERG rainfall and aggregated ERA5 predictors, using CHIRPS as a reference dataset to generate physically informed, bias-corrected daily rainfall fields. Model robustness is ensured by systematically testing different network architectures with varying numbers of hidden neurons. The framework is implemented over more than one thousand grid cells, ensuring spatial consistency while maintaining computational efficiency. The corrected rainfall products are subsequently used to drive the SWAT hydrological model, and streamflow simulations are calibrated and validated using SWAT-CUP, with particular emphasis on reproducing peak discharge and high-flow extremes.
At the daily scale, the proposed framework achieves coefficient of determination (R²) values of up to 0.76 for rainfall estimation, and leads to substantial improvements in streamflow simulation compared to uncorrected satellite rainfall, including reduced bias, improved temporal variability, and markedly enhanced simulation of flood peaks.

How to cite: Prakash, H., Soni, P., and Pandey, K. K.: Downscaling and bias-correcting satellite precipitation using a hybrid machine learning framework for flood modelling in ungauged basins., EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-16177, https://doi.org/10.5194/egusphere-egu26-16177, 2026.

A.110
|
EGU26-16977
|
ECS
Hyeonjin Choi, Quyet The Nguyen, Oldřich Rakovec, Hyungon Ryu, and Seong Jin Noh

Accurate high-resolution precipitation is critical for hydrological modelling, climate impact assessment, and flood risk analysis, yet reanalysis products such as ERA5 often lack the spatial detail required at regional scales. This study investigates machine learning-based super-resolution techniques for precipitation downscaling, specifically examining scale dependency and uncertainty.

We test several downscaling strategies, including convolutional neural networks with channel‑attention mechanisms and generative diffusion models. Precipitation fields are downscaled from coarse-resolution ERA5 inputs (0.25° resolution) to finer spatial resolutions using gridded observational datasets as reference: E‑OBS (0.125°) for pan‑European evaluation and, for selected regions, higher‑resolution products such as EMO‑1 (~1 km). By considering multiple scale factors, we adopt a scale‑aware framework that quantifies how downscaling skill and the associated uncertainty in super-resolution machine learning methods vary with spatial resolution and with the choice of reference dataset.

Model evaluation combines conventional accuracy metrics with diagnostics of field structure, focusing on spatial heterogeneity, intensity‑dependent behaviour (including extremes), and robustness across seasons and climatic regimes. We also discuss how scale‑dependent changes in precipitation variability and spatial structure can inform uncertainty characterisation for machine‑learning downscaling and guide its use in regional hydrological modelling and flood‑risk assessments across Europe.

How to cite: Choi, H., Nguyen, Q. T., Rakovec, O., Ryu, H., and Noh, S. J.: Scale-Aware Machine Learning for Precipitation Downscaling: Impact on Regional Applications in Europe, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-16977, https://doi.org/10.5194/egusphere-egu26-16977, 2026.

A.111
|
EGU26-18828
Antonio Francipane, Niloufar Beikahmadi, Dario Treppiedi, and Leonardo Valerio Noto

Reliable, high-resolution gridded precipitation data are indispensable for modern climate science, hydrological modeling, and engineering applications, particularly in the Mediterranean region, where sharp topographic gradients and convective dynamics drive significant spatial variability. This study presents the development of a new daily gridded precipitation dataset for Sicily at a 2-km resolution, spanning the period 1951–2025. To address the challenges of reconstructing physically plausible fields from sparse historical records, we propose a "Conditional Two-Phase Reconstruction" framework that explicitly separates rainfall occurrence from conditional magnitude.

The methodology integrates heterogeneous in-situ observational sources, merging long-term historical archives with a modern, high-density automated rain gauge network. A core innovation of this work lies in the transfer of spatial model structures and precipitation regime definitions learned from the short-term dense network to the data-scarce historical period.

The framework first models spatial intermittency (Phase I) using regime-specific Indicator Kriging to distinguish between widespread precipitation and localized convective events. Subsequently, for magnitude estimation (Phase II), the study evaluates and implements three competing approaches: geostatistical interpolation, hybrid Regression-Kriging utilizing Generalized Additive Models (GAMs), and machine learning via Extreme Gradient Boosting (XGBoost). To capture non-linear atmospheric interactions, the reconstruction leverages static physiographic predictors alongside dynamic atmospheric covariates derived from ERA5 reanalysis data, including Convective Available Potential Energy (CAPE) and Vertical Integrated Moisture Flux Divergence (VIMFD). By stratifying events into hydrometeorological regimes based on spatial coverage and intensity, the proposed framework provides a transferable blueprint for climate reconstruction in complex orographic domains. Model performance is evaluated through comprehensive leave-one-out cross-validation using uncertainty and prediction error metrics.
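The occurrence/magnitude separation at the heart of a two-phase scheme can be sketched in a few lines. This is a deliberately minimal illustration, not the authors' method: the hypothetical `p_wet` field stands in for a Phase I occurrence estimate (e.g. from indicator kriging) and `magnitude` for a Phase II conditional-magnitude estimate (e.g. from regression-kriging or XGBoost); both are random placeholders here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Phase I (placeholder): probability of rainfall occurrence per grid cell,
# as might come from regime-specific indicator kriging of wet/dry indicators.
p_wet = rng.uniform(0.0, 1.0, size=(10, 10))

# Phase II (placeholder): conditional rainfall magnitude given occurrence,
# as might come from a geostatistical or gradient-boosting estimator.
magnitude = rng.gamma(shape=2.0, scale=5.0, size=(10, 10))

# Conditional combination: a cell is wet only where the occurrence
# probability exceeds a threshold; elsewhere the field is exactly zero.
# This preserves spatial intermittency instead of smearing low-intensity
# drizzle across the whole domain, which direct interpolation tends to do.
threshold = 0.5
field = np.where(p_wet > threshold, magnitude, 0.0)

print("wet fraction:", float((field > 0).mean()))
```

The key design point is that dry cells are exactly zero by construction, so the reconstructed field keeps a realistic wet/dry structure even when the magnitude model alone would predict small positive values everywhere.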

How to cite: Francipane, A., Beikahmadi, N., Treppiedi, D., and Noto, L. V.: A Novel Conditional Two-Phase Framework for High-Resolution Long-Term Precipitation Reconstruction: The Case of Sicily (1951–2025), EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18828, https://doi.org/10.5194/egusphere-egu26-18828, 2026.

A.112
|
EGU26-19062
|
ECS
Joshua Miller, Peter Watson, Kate Halladay, and Rachel James

Climate models produce enormous amounts of atmospheric data. However, these models often have coarse spatial resolution, making hazard-scale forecasts (e.g. for an individual city or catchment) based on future climate data impossible. Diffusion models (DMs) are a class of deep-learning generative models that can rapidly produce ensemble-like realisations of high-resolution weather states, allowing for uncertainty quantification. Numerous studies have demonstrated the efficacy of these models in faithfully downscaling weather variables from both observational datasets and from global climate models to regional climate models. However, little is known about how well DMs perform when trained and evaluated on heterogeneous, multi-source datasets, and even less about their ability to faithfully emulate high-resolution extreme rainfall events. To evaluate this, we train a DM to emulate 0.1° by 0.1° hourly precipitation data from IMERG (satellite-based), using hourly 1° by 1° atmospheric fields from ERA5 (reanalysis) as the model’s input. We are also performing an out-of-distribution experiment in which extreme events are excluded from the DM’s training data in order to investigate to what extent it can accurately extrapolate to severe weather. Our domain is centred in southern Europe and was chosen to cover many diverse regions, including the Alps, the Mediterranean Sea, and northern Africa. According to the continuous ranked probability score, power spectral density, histograms and many other metrics, after training on balanced data our DM accurately downscales precipitation across all rainfall intensity levels, preserves fine-scale spatial structures, learns regional precipitation dynamics, and captures extreme events in the tails of the distribution.
Our DM also outperforms a strong climatological baseline, and it is superior to other commonly used models such as a deterministic deep convolutional network, which tends to over-smooth and underestimate extreme events. Our results affirm the ability of diffusion models to generate robust, hazard-relevant rainfall realisations using coarse atmospheric data.
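The continuous ranked probability score used above rewards ensembles that are both sharp and centred on the observation. A minimal empirical estimator for a finite ensemble at one grid point (the standard CRPS = E|X − y| − ½ E|X − X′| form; the ensemble values below are made up for illustration) can be written as:

```python
import numpy as np

def crps_ensemble(members, obs):
    """Empirical CRPS for a finite ensemble at a single point:
    CRPS = mean|X - y| - 0.5 * mean|X - X'|, with the second term
    averaged over all ordered member pairs. Lower is better."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return float(term1 - term2)

# A sharp ensemble centred on the observation scores much lower (better)
# than a wide, poorly centred one -- CRPS penalises both bias and spread.
obs = 3.0
spread_small = crps_ensemble([2.8, 3.0, 3.1, 3.2], obs)
spread_large = crps_ensemble([0.0, 1.0, 5.0, 9.0], obs)
print(spread_small, spread_large)
```

For a one-member "ensemble" the score reduces to the absolute error, which is why CRPS is a convenient common yardstick between probabilistic diffusion-model output and a deterministic convolutional baseline.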

How to cite: Miller, J., Watson, P., Halladay, K., and James, R.: Diffusion model based downscaling of extreme precipitation in southern Europe, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-19062, https://doi.org/10.5194/egusphere-egu26-19062, 2026.
