EOS4.4 | EDI | BUGS: Blunders, Unexpected Glitches, and Surprises
Co-organized by AS5/BG10/CL5/ERE6/ESSI3/GD4/GM1/GMPV1/NP8/PS/SM9/SSP1/SSS11/TS10
Convener: Ulrike Proske (ECS) | Co-conveners: Jonas Pyschik (ECS), Nobuaki Fuji, Martin Gauch (ECS), Daniel Klotz (ECS)
Orals | Tue, 05 May, 14:00–15:45 (CEST) | Room 0.15
Posters on site | Attendance Tue, 05 May, 10:45–12:30 (CEST) | Display Tue, 05 May, 08:30–12:30 | Hall X5
Posters virtual | Fri, 08 May, 14:21–15:45 (CEST) | vPoster Discussion: vPoster spot 5, Fri, 08 May, 16:15–18:00 (CEST)
Sitting under a tree, you feel the spark of an idea, and suddenly everything falls into place. The following days and tests confirm: you have made a magnificent discovery — so the classical story of scientific genius goes…

But science as a human activity is error-prone, and might be more adequately described as "trial and error". Handling mistakes and setbacks is therefore a key skill of scientists. Yet, we publish only those parts of our research that did work. That is also because a study may have a better chance of being accepted for scientific publication if it confirms an accepted theory or reaches a positive result (publication bias). Conversely, the cases that fail in their test of a new method or idea often end up in a drawer (which is why publication bias is also sometimes called the "file drawer effect"). This is potentially a waste of time and resources within our community, as other scientists may set about testing the same idea or model setup without being aware of previous failed attempts.

Thus, we want to turn the story around, and ask you to share 1) those ideas that seemed magnificent but turned out not to be, and 2) the errors, bugs, and mistakes in your work that made the scientific road bumpy. In the spirit of open science and in an interdisciplinary setting, we want to bring the BUGS out of the drawers and into the spotlight. What ideas were torn down or did not work, and what concepts survived in the ashes or were robust despite errors?

We explicitly solicit Blunders, Unexpected Glitches, and Surprises (BUGS) from modeling and field or lab experiments and from all disciplines of the Geosciences.

In a friendly atmosphere, we will learn from each other’s mistakes, understand the impact of errors and abandoned paths on our work, give each other ideas for shared problems, and generate new insights for our science or scientific practice.

Here are some ideas for contributions that we would love to see:
- Ideas that sounded good at first but turned out not to work.
- Results that seemed great at first but turned out to be caused by a bug or measurement error.
- Errors and slip-ups that resulted in insights.
- Failed experiments and negative results.
- Obstacles and dead ends you found and would like to warn others about.

For inspiration, see last year's collection of BUGS - ranging from clay bricks to atmospheric temperature extremes - at https://meetingorganizer.copernicus.org/EGU25/session/52496.

Orals: Tue, 5 May, 14:00–15:45 | Room 0.15

The oral presentations are given in a hybrid format supported by a Zoom meeting featuring on-site and virtual presentations. The button to access the Zoom meeting appears just before the time block starts.
Chairpersons: Ulrike Proske, Jonas Pyschik, Martin Gauch
14:00–14:05
Experimental work and observations
14:05–14:15
|
EGU26-5494
|
On-site presentation
Eva Y. Pfannerstill

Observation of atmospheric constituents and processes is not easy. As atmospheric chemists, we use sensitive equipment, for example mass spectrometers, that we often set up in a (remote) location or on a moving platform for campaigns of a few weeks to make in-situ observations. All this with the goal of explaining more and more atmospheric processes and of verifying and improving atmospheric models. However, glitches can happen anywhere in an experiment, be it in the experimental design, setup, or instrumental performance. Thus, complete data coverage during such a campaign is not always a given, resulting in gaps in (published) datasets. And the issue with air is that you can never go back and measure the exact same air again. Here, I would like to share some stories behind such gaps, and what we learned from them. This presentation aims to encourage early career researchers who might be struggling with feelings of failure when bugs, blunders and glitches happen in their experiments - you are not alone! I will share what we learned from these setbacks and how each of them improved our experimental approaches.

How to cite: Pfannerstill, E. Y.: Why are there gaps in your measurements? Sharing the stories behind the missing datapoints, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-5494, https://doi.org/10.5194/egusphere-egu26-5494, 2026.

14:15–14:25
|
EGU26-16619
|
On-site presentation
Okke Batelaan, Joost Herweijer, Steven Young, and Phil Hayes

“It is in the tentative stage that the affections enter with their blinding influence. Love was long since represented as blind…The moment one has offered an original explanation for a phenomenon which seems satisfactory, that moment affection for his intellectual child springs into existence…To guard against this, the method of multiple working hypotheses is urged. … The effort is to bring up into view every rational explanation of new phenomena, and to develop every tenable hypothesis respecting their cause and history. The investigator thus becomes the parent of a family of hypotheses: and, by his parental relation to all, he is forbidden to fasten his affections unduly upon any one” (Chamberlin, 1890).

The MADE (macro-dispersion) natural-gradient tracer field experiments were conducted more than 35 years ago. They aimed to determine field-scale dispersion parameters based on detailed hydraulic conductivity measurements to support transport simulation. A decade of field experiments produced a 30-year paper trail of modelling studies with no clear resolution of a successful simulation approach for practical use in transport problems. As a result, accurately simulating contaminant transport in the subsurface remains a formidable challenge in hydrogeology.

What went awry, and why do we often miss the mark?

Herweijer et al. (2026) conducted a ‘back to basics’ review of the original MADE reports and concluded that there are significant inconvenient and unexplored issues that influenced the migration of the tracer plume and/or biased the observations. These issues include unreliable measurement of hydraulic conductivity, biased tracer concentrations, and underestimation of sedimentological heterogeneity and non-stationarity of the flow field. Many studies simulating the tracer plumes appeared to have ignored, sidestepped, or been unaware of these issues, raising doubts about the validity of the results.

Our analysis shows that there is a persistent drive among researchers to conceptually oversimplify natural complexity to enable testing of single-method modelling, mostly driven by parametric stochastic approaches. Researchers tend to be anchored to a specialised, numerically driven methodology and have difficulty in unearthing highly relevant information from ‘unknown known’ data or applying approaches outside their own specialised scientific sub-discipline. Another important aspect of these ‘unknown knowns’ is the tendency to accept published data verbatim. Too often, there is no rigorous investigation of the original measurement methods and reporting, and, if need be, additional testing to examine the root cause of data issues.

Following the good old advice of Chamberlin (1890), we used a knowledge framework to systematically assess knowns, unknowns, and associated confidence levels, yielding a set of multi-conceptual models. Based on identified 'unknowns', these multi-models can be tested against reliable 'knowns' such as piezometric data and mass balance calculations.  

Chamberlin, T.C., 1890, The method of multiple working hypotheses. Science 15(366): 92-96. doi:10.1126/science.ns-15.366.92.

Herweijer J.C., S. C Young, P. Hayes, and O. Batelaan, 2026, A multi-conceptual model approach to untangling the MADE experiment, Accepted for Publication in Groundwater.

How to cite: Batelaan, O., Herweijer, J., Young, S., and Hayes, P.: The unknown knowns – the inconvenient knowledge in hydrogeology we do not like to use, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-16619, https://doi.org/10.5194/egusphere-egu26-16619, 2026.

Modelling
14:25–14:35
|
EGU26-10401
|
ECS
|
On-site presentation
Ralf Loritz, Alexander Dolich, and Benedikt Heudorfer

Hydrological modelling has long been shaped by a steady drive toward ever more sophisticated models. In the era of machine learning, this race has turned into a relentless pursuit of complexity: deeper networks and ever more elaborate architectures that often feel outdated by the time the ink on the paper is dry. Motivated by a genuine belief in methodological progress, I, like many others, spent considerable effort exploring this direction, driven by the assumption that finding the “right” architecture or model would inevitably lead to better performance. This talk is a reflection on that journey; you could say my own Leidensweg. Over several years, together with excellent collaborators, I explored a wide range of state-of-the-art deep-learning approaches for rainfall–runoff modelling and other hydrological modelling challenges. Yet, regardless of the architecture or training strategy, I repeatedly encountered the same performance ceiling. In parallel, the literature appeared to tell a different story, with “new” models regularly claiming improvements over established baselines. A closer inspection, however, revealed that rigorous and standardized benchmarking is far from common practice in hydrology, making it difficult to disentangle genuine progress from artefacts of experimental design. What initially felt like a failure to improve my models turned out to be a confrontation with reality. The limiting factor was not the architecture, but the problem itself. We have reached a point where predictive skill is increasingly bounded by the information content of our benchmark datasets and maybe more importantly by the way we frame our modelling challenges, rather than by model design. Like many others, I have come to believe that if we want to move beyond the current performance plateau, the next breakthroughs are unlikely to come from ever more complex models alone. Instead, as a community, we need well-designed model challenges, better benchmarks, and datasets that meaningfully expand the information available to our models to make model comparisons more informative.

How to cite: Loritz, R., Dolich, A., and Heudorfer, B.: The empty mine: Why better tools do not help you find new diamonds, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-10401, https://doi.org/10.5194/egusphere-egu26-10401, 2026.

14:35–14:45
|
EGU26-14374
|
On-site presentation
Thorsten Wagener and Francesca Pianosi

Hydrological and water systems modelling has long been driven by the search for better models. We do so by searching for models or at least parameter combinations that provide the best fit to given observations. We ourselves have contributed to this effort by developing new methods and by publishing diverse case studies. However, we repeatedly find that searching for and finding an optimal model is highly fraught in the presence of unclear signal-to-noise ratios in our observations, of incomplete models and of highly imbalanced databases. We present examples of our own work through which we have realized that achieving optimality was possible but futile unless we give equal consideration to issues of consistency, robustness and problem framing. We argue here that the strong focus on optimality continues to be a hindrance for advancing hydrologic science and for transferring research achievements into practice – probably more so than in other areas of the geosciences.

How to cite: Wagener, T. and Pianosi, F.: The dangerous temptation of optimality in hydrological and water resources modelling, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14374, https://doi.org/10.5194/egusphere-egu26-14374, 2026.

14:45–14:55
|
EGU26-4196
|
On-site presentation
Stefan Hergarten

Coming from geosciences, we hopefully know what we want to do. Coming from numerics, however, we often know quite well what we are able to do and look for a way to sell it to the community. A few years ago, deep-learning techniques brought new life into the glaciology community. These approaches allowed for simulations of glacier dynamics with unprecedented computational performance and motivated several researchers to tackle the numerous open questions about past and present glacier dynamics, particularly in alpine regions. From another point of view, however, it was also tempting to demonstrate that the human brain is still more powerful than artificial intelligence by developing a new classical numerical scheme that can compete with deep-learning techniques concerning its efficiency.

The starting point was, of course, the simplest approximation to the full 3-D Stokes equations, the so-called shallow ice approximation (SIA). Progress was fast, and the numerical performance was even better than expected. The new numerical scheme enabled simulations with spatial resolutions of 25 m on a desktop PC, while previous schemes did not reach resolutions below a few hundred meters.
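For reference, the SIA collapses the Stokes problem into a nonlinear diffusion problem for the ice thickness. In the usual notation (ice thickness $H$, surface elevation $s$, density $\rho$, Glen exponent $n$, rate factor $A$, mass balance $\dot b$), the vertically integrated deformational flux and mass conservation read:

$$
\mathbf{q} = -\,\frac{2A(\rho g)^n}{n+2}\,H^{\,n+2}\,\lvert\nabla s\rvert^{\,n-1}\,\nabla s,
\qquad
\frac{\partial H}{\partial t} = -\,\nabla\cdot\mathbf{q} + \dot b .
$$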

However, the enthusiasm pushed the known limitations of the SIA a bit out of sight. Physically, the approximation is quite bad on rugged terrain, particularly in narrow valleys. So the previous computational limitations have been replaced by physical limitations since high resolutions are particularly useful for rugged topographies. In other words, a shabby house has a really good roof now.

What are the options in such a situation?

  • Accept that there is no free lunch and avoid contact with the glaciology community in the future.
  • Continue the endless discussion about the reviewers' opinion that a spatial resolution of 1 km is better than 25 m.
  • Find a real-world data set that matches the results of the model and helps to talk the problems away.
  • Keep the roof and build a new house beneath. Practically, this would be developing a new approximation to the full 3-D Stokes equations that is compatible with the numerical scheme and reaches an accuracy similar to that of the existing approximations.
  • Take the roof and put it on one of the existing solid houses. Practically, this would be an extension of the numerical scheme towards more complicated systems of differential equations. Unfortunately, efficient numerical schemes are typically very specific. So the roof will not fit easily and it might leak.

The story is open-ended, but there will be at least a preliminary answer in the presentation.

 

How to cite: Hergarten, S.: How useful is a new roof on a shabby house? An example from glacier modeling, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-4196, https://doi.org/10.5194/egusphere-egu26-4196, 2026.

14:55–15:05
|
EGU26-6794
|
ECS
|
On-site presentation
Nadja Omanovic, Sylvaine Ferrachat, and Ulrike Lohmann

In atmospheric sciences, a central tool for testing hypotheses is the numerical model, which aims to represent (part of) our environment. One such model is the weather and climate model ICON [1], which solves the Navier-Stokes equations to capture the dynamics and parameterizes subgrid-scale processes, such as radiation, cloud microphysics, and aerosol processes. For the latter, specifically, there is the so-called Hamburg Aerosol Module (HAM [2]), which is coupled to ICON [3] and predicts the evolution of aerosol populations using two moments (mass mixing ratio and number concentration). The high complexity of aerosols is reflected in the number of aerosol species (total of 5), the number of modes (total of 4), and their mixing state and solubility. The module calculates aerosol composition and number concentration, their optical properties, their sources and sinks, and their interactions with clouds via microphysical processes. Aerosol emissions are sector-specific and based on global emission inventories or dynamically computed.
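To make the two-moment idea concrete, here is a minimal, purely illustrative sketch (hypothetical names, not HAM's actual data structures): each mode carries one number concentration plus one mass mixing ratio per species.

```python
from dataclasses import dataclass

SPECIES = ("SO4", "BC", "OC", "SS", "DU")  # five species, as in the abstract

@dataclass
class AerosolMode:
    name: str                    # e.g. "accumulation_soluble" (hypothetical)
    number_conc: float           # moment 1: particle number per kg of air
    mass_mix: dict[str, float]   # moment 2: kg of each species per kg of air

# One of the four modes; microphysics would evolve both moments each time step.
mode = AerosolMode("accumulation_soluble", 5.0e8, {s: 0.0 for s in SPECIES})
```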

Within our work, we stumbled upon an interesting pattern in our simulations when changing or turning off single emission sectors. If we, e.g., removed black carbon from aircraft emissions, the strongest changes emerged over the African continent, which is not the region where we were expecting to see the strongest response. Further investigations revealed that this pattern emerges independently of the emission sector and species, confirming our suspicion that we were facing a bug within HAM. Here, we want to present how we approached the challenge of identifying and tackling a bug within a complex module with several thousand lines of code.

 

[1] G. Zängl, D. Reinert, P. Ripodas, and M. Baldauf, “The ICON (ICOsahedral Non-hydrostatic) modelling framework of DWD and MPI-M: Description of the non-hydrostatic dynamical core,” Quarterly Journal of the Royal Meteorological Society, vol. 141, no. 687, pp. 563–579, 2015, ISSN: 1477-870X. DOI: 10.1002/qj.2378

[2] P. Stier, J. Feichter, S. Kinne, S. Kloster, E. Vignati, J. Wilson, L. Ganzeveld, I. Tegen, M. Werner, Y. Balkanski, M. Schulz, O. Boucher, A. Minikin, and A. Petzold, “The aerosol-climate model ECHAM5-HAM,” Atmospheric Chemistry and Physics, 2005. DOI: 10.5194/acp-5-1125-2005

[3] M. Salzmann, S. Ferrachat, C. Tully, S. Münch, D. Watson-Parris, D. Neubauer, C. Siegenthaler-Le Drian, S. Rast, B. Heinold, T. Crueger, R. Brokopf, J. Mülmenstädt, J. Quaas, H. Wan, K. Zhang, U. Lohmann, P. Stier, and I. Tegen, “The Global Atmosphere-aerosol Model ICON-A-HAM2.3–Initial Model Evaluation and Effects of Radiation Balance Tuning on Aerosol Optical Thickness,” Journal of Advances in Modeling Earth Systems, vol. 14, no. 4, e2021MS002699, 2022, ISSN: 1942-2466. DOI: 10.1029/2021MS002699

How to cite: Omanovic, N., Ferrachat, S., and Lohmann, U.: Back to square one (again and again): Finding a bug in a complex global atmospheric model, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6794, https://doi.org/10.5194/egusphere-egu26-6794, 2026.

15:05–15:15
|
EGU26-14148
|
solicited
|
Highlight
|
On-site presentation
Bjorn Stevens, Marco Giorgetta, and Hans Segura

A defining attribute of global storm-resolving models is that modelling is replaced by simulation. In addition to overloading the word “model”, this avails the developer of a much larger variety of tests and brings about a richer interplay with their intuition. This has proven helpful in identifying and correcting many mistakes in global storm-resolving models that traditional climate models find difficult to identify and usually compensate for by “tuning.” It also means that storm-resolving models are built and tested in a fundamentally different way than traditional climate models are. In this talk I will review the development of ICON as a global storm-resolving model to illustrate how this feature, of trying to simulate rather than model the climate system, has helped identify a large number of long-standing bugs in code bases inherited from traditional models; how this can support open development; and how sometimes these advantages also prove to be buggy.

How to cite: Stevens, B., Giorgetta, M., and Segura, H.: Buggy benefits of more fundamental climate models, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14148, https://doi.org/10.5194/egusphere-egu26-14148, 2026.

Fieldwork
15:15–15:25
|
EGU26-8228
|
On-site presentation
Jan Henneberger

In situ cloud measurements are essential for understanding atmospheric processes and establishing a reliable ground truth. Obtaining these data is rarely straightforward. Challenges range from accessing clouds in the first place to ensuring that the instrument or environment does not bias the sample. This contribution explores several blunders and unexpected glitches encountered over fifteen years of field campaigns.

I will share stories of mountain top observations where blowing snow was measured instead of cloud ice crystals and the ambitious but failed attempt to use motorized paragliders for sampling. I also reflect on winter campaigns where the primary obstacles were flooding and mud rather than cold and snow. While these experiences were often frustrating, they frequently yielded useful data or led to new insights. One such example is the realization that drone icing is not just a crash risk but can also serve as a method for measuring liquid water content. By highlighting these setbacks and the successful data that emerged despite them, I aim to foster a discussion on the value of trial and error and persistence in atmospheric physics.

How to cite: Henneberger, J.: How Not to Measure a Cloud: Lessons from Fifteen Years of Fieldwork Failures, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-8228, https://doi.org/10.5194/egusphere-egu26-8228, 2026.

15:25–15:35
|
EGU26-13630
|
ECS
|
On-site presentation
Larisa Tarasova and Paul Astagneau

Examining catchment response to precipitation at the event scale is useful for understanding how various hydrological systems store and release water. Many such event-scale characteristics, for example the event runoff coefficient and the event time scale, are also important engineering metrics used for design. However, deriving these characteristics requires the identification of discrete precipitation-streamflow events from continuous hydrometeorological time series.

Event identification is not at all a trivial task. It becomes even more challenging when working with very large datasets that encompass a wide range of spatial and temporal dynamics. Approaches range from visual expert judgement to baseflow-separation-based methods and objective methods based on the coupled dynamics of precipitation and streamflow. Here, we would like to present our experience in the quest to devise the “ideal” method for large datasets – and trust us, we tried, a lot. We demonstrate that expert-based methods can be seriously flawed simply by changing a few meta-parameters, such as the length of the displayed periods; that baseflow-separation-based methods deliver completely opposite results when different underlying separation methods are selected; and that objective methods suddenly fail when dynamics with different temporal scales are simultaneously present.
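To illustrate how sensitive such separations are, here is a minimal sketch (not the authors' code) of the widely used one-parameter Lyne-Hollick digital filter; even varying its single filter parameter within its customary range visibly shifts the separated baseflow, and with it any events defined on top of it.

```python
import numpy as np

def lyne_hollick_baseflow(q, alpha=0.925):
    """Single forward pass of the Lyne-Hollick digital filter.
    alpha is the filter parameter (typically 0.9-0.95); the filtered
    quickflow is constrained so that 0 <= baseflow <= streamflow."""
    quick = np.zeros(len(q))
    for t in range(1, len(q)):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = min(max(quick[t], 0.0), q[t])
    return q - quick

# Toy hydrograph: the separated baseflow, and hence any events defined on it,
# changes with the single filter parameter alone.
q = np.array([1.0, 1.0, 5.0, 12.0, 8.0, 4.0, 2.5, 1.8, 1.4, 1.2, 1.1, 1.0])
for alpha in (0.90, 0.925, 0.95):
    print(alpha, lyne_hollick_baseflow(q, alpha).round(2))
```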

Ultimately, we realized that finding a one-size-fits-all method was not possible and that compromises had to be made to select sufficiently representative events across large datasets. Therefore, we advocate for pragmatic case-specific evaluation criteria and for transparency in event identification to make study results reproducible and fit for purpose, if not perfect.

How to cite: Tarasova, L. and Astagneau, P.: How NOT to identify streamflow events?, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-13630, https://doi.org/10.5194/egusphere-egu26-13630, 2026.

15:35–15:45
|
EGU26-4074
|
On-site presentation
James Kirchner, Paolo Benettin, and Ilja van Meerveld

BUGS can arise in individual research projects, but also at the level of communities of researchers, leading to shifts in the scientific consensus.  These community-level BUGS typically arise from observations that are surprising to (or previously overlooked by) substantial fractions of the research community.  In this presentation, we summarize several community-level BUGS in our field: specifically, key surprises that have transformed the hydrological community's understanding of hillslope and catchment processes in recent decades.  

Here are some examples.  (1) Students used to learn (and some still do today) that storm runoff is dominated by overland flow.  But stable isotope tracers have convincingly shown instead that even during storm peaks, streamflow is composed mostly of water that has been stored in the landscape for weeks, months, or years.  (2) Maps, and most hydrological theories, have typically depicted streams as fixed features of the landscape.  But field mapping studies have shown that stream networks are surprisingly dynamic, with up to 80% of stream channels going dry sometime during the year.  (3) Textbooks have traditionally represented catchment storage as a well-mixed box.  But tracer time series show fractal scaling that cannot be generated by well-mixed boxes, forcing a re-think of our conceptualization of subsurface storage and mixing.  (4) Waters stored in aquifers, and the waters that drain from them, have traditionally been assumed to share the same age.  But tracers show that waters draining from aquifers are often much younger than the groundwaters that are left behind, and this was subsequently shown to be an inevitable result of aquifer heterogeneity. 

Several examples like these, and their implications, will be briefly discussed, with an eye to the question: how can we maximize the chances for future instructive surprises?

How to cite: Kirchner, J., Benettin, P., and van Meerveld, I.: Instructive surprises in the hydrological functioning of landscapes, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-4074, https://doi.org/10.5194/egusphere-egu26-4074, 2026.

Posters on site: Tue, 5 May, 10:45–12:30 | Hall X5

The posters scheduled for on-site presentation are only visible in the poster hall in Vienna. If authors uploaded their presentation files, these files are linked from the abstracts below.
Display time: Tue, 5 May, 08:30–12:30
Chairpersons: Nobuaki Fuji, Daniel Klotz
X5.318
|
EGU26-2771
John Hillier, Ulrike Proske, Stefan Gaillard, Theresa Blume, and Eduardo Queiroz Alves

Moments or periods of struggle not only propel scientists forward, but sharing these experiences can also provide valuable lessons for others. Indeed, the current bias towards only publishing ‘positive’ results arguably impedes scientific progress as mistakes that are not learnt from are simply repeated. Here we present a new article type in EGU journals covering LESSONS learnt to help overcome this publishing bias. LESSONS articles describe the Limitations, Errors, Surprises, Shortcomings, and Opportunities for New Science emerging from the scientific process, including non-confirmatory and null results. Unforeseen complications in investigations, plausible methods that failed, and technical issues are also in scope. LESSONS thus fit the content of the BUGS session and can provide an outlet for articles based on session contributions. Importantly, a LESSONS Report will offer a substantial, valuable insight. LESSONS Reports are typically short (1,000-2,000 words) to help lower the barrier to journal publication, whilst LESSONS Posts (not peer-reviewed, but with a DOI on EGUsphere) can be as short as 500 words to allow early-stage reporting. LESSONS aim to destigmatise limitations, errors, surprises and shortcomings and to add these to the published literature as opportunities for new science – we invite you to share your LESSONS learnt.

 

Finally, a big thank you from this paper’s ‘core’ writing team to the wider group who have helped shape the LESSONS idea since EGU GA in 2025, including PubCom and in particular its Chair Barbara Ervens.

How to cite: Hillier, J., Proske, U., Gaillard, S., Blume, T., and Queiroz Alves, E.: New EGU Manuscript Types: Limitations, Errors, Surprises, and Shortcomings as Opportunities for New Science (LESSONS), EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-2771, https://doi.org/10.5194/egusphere-egu26-2771, 2026.

X5.319
|
EGU26-18600
Edward Williamson, Matt Pritchard, Alan Iwi, Sam Pepler, and Graham Parton

On 18 November 2025, a small error during an internal data migration between storage systems of the JASMIN data analysis platform in the UK led to a substantial part of the CEDA Archive becoming temporarily unavailable online (but not lost!). While the unfortunate incident caused serious disruption to a large community of users (and additional workload and stress for the team), it provided important learning points for the team in terms of:

  • enhancing data security,
  • the importance of mutual support among professional colleagues,
  • the value of clear and transparent communication with your users, and
  • a unique opportunity to showcase the capabilities of a cutting-edge digital research infrastructure in the recovery and return to service during this “unscheduled disaster recovery exercise”.

 

We report on the circumstances leading to the incident, the lessons learned, and the technical capabilities employed in the recovery. In one example, nearly 800 terabytes of data were transferred from a partner institution in the USA in just over 27 hours, at a rate of over 8 gigabytes per second, using Globus. The ability to orchestrate such a transfer is the result of many years of international collaboration to support large-scale environmental science, and highlights the benefits of a federated, replicated data infrastructure built on well-engineered technologies.
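As a quick sanity check of the quoted rate (assuming decimal units and a duration of about 27.25 hours):

```python
# Back-of-the-envelope check of the quoted transfer rate.
volume_bytes = 800e12       # ~800 terabytes
duration_s = 27.25 * 3600   # "just over 27 hours"
print(f"{volume_bytes / duration_s / 1e9:.1f} GB/s")  # ~8.2, i.e. "over 8 GB/s"
```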

How to cite: Williamson, E., Pritchard, M., Iwi, A., Pepler, S., and Parton, G.: Data Disaster to Data Resilience: Lessons from CEDA’s Data Recovery , EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18600, https://doi.org/10.5194/egusphere-egu26-18600, 2026.

Experimental work and observations
X5.320
|
EGU26-5771
Thomas Cameron Meisel

Over a 24-year research period, three successive experimental investigations led to three publications, each of which falsified the author’s preceding hypothesis and proposed a revised conceptual framework. Despite an initial confidence in having identified definitive solutions, subsequent experimental evidence consistently demonstrated the limitations and inaccuracies of earlier interpretations. This iterative process ultimately revealed that samples, in particular geological reference materials, sharing identical petrographic or mineralogical descriptions are not necessarily chemically equivalent and can exhibit markedly different behaviors during chemical digestion procedures. These findings underscore the critical importance of continuous hypothesis testing, self-falsification, and experimental verification in scientific research, particularly when working with reference materials assumed to be identical. I will be presenting data on the analysis of platinum group elements (PGE) and osmium isotopes in geological reference materials (chromitites, ultramafic rocks and basalts), which demonstrates the need for challenging matrices for method validation. 

How to cite: Meisel, T. C.: Self-falsification as a driver of scientific progress: Insights from long-term experimental research, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-5771, https://doi.org/10.5194/egusphere-egu26-5771, 2026.

X5.321
|
EGU26-14852
Attila Nemes, Pietro Bazzocchi, Sinja Weiland, and Martine van der Ploeg

Predicting soil hydraulic behavior is necessary for catchment modeling and agricultural planning, particularly in a country like Norway, where only 3% of the land is suitable for farming. Soil texture is an important and easily accessible parameter for the prediction of soil hydraulic behavior. However, some Norwegian farmland soils, which formed as glacio-marine sediments and are characterized by a medium texture, have shown the hydraulic behavior of heavy-textured soils. Inspired by the theory behind well-established sedimentation-enhancing technology used in wastewater treatment, we hypothesized that sedimentation under marine conditions may result in specific particle sorting and, as a result, specific pore-system characteristics. To test this, we designed four custom-built devices to produce artificially re-sedimented columns of soil material to help characterize the influence of sedimentation conditions. We successfully produced column samples of the same homogeneous mixture of fine-sand, silt, and clay particles, obtained by physically crushing and sieving (< 200 µm) subsoil material collected at the Skuterud catchment in South-East Norway, differing only in sedimentation conditions (deionized water vs 35 g per liter NaCl solution). Then, the inability of standard laboratory methods to measure the saturated hydraulic conductivity of such fine material led us to “MacGyver” (design and custom-build) two alternative methodologies to measure that property: i) adapting a pressure plate extractor for a constant-head measurement, and ii) building a 10 m tall pipe system in a common open area of the office in order to increase the hydraulic head on the samples. There was a learning curve with both of those methods, but we found that the salt-water re-sedimented columns were about five times more permeable than the freshwater ones, which was the complete opposite of our expectations. However, an unexpected blunder in the conservation of our samples suggests that our hypothesis should be further explored rather than dismissed. These contributions hint at the mechanisms that may underlie the anomalous hydraulic behaviour of certain Norwegian soils and raise new questions about the formation of marine clays, improving the knowledge available for land managers and modellers.
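As a rough illustration of why such a tall pipe was needed, here is a minimal constant-head Darcy sketch with purely hypothetical numbers (not the authors' measurements): at bench-top heads, the discharge through a low-conductivity sample is too small to measure reliably.

```python
# Constant-head Darcy calculation, Q = K * A * dH / L, with purely
# hypothetical values (not the study's measurements).
K = 1e-9   # saturated hydraulic conductivity [m/s] of a fine material
A = 2e-3   # sample cross-sectional area [m^2] (~5 cm diameter column)
L = 0.05   # sample length [m]

for dH in (0.1, 10.0):   # bench-top head vs the 10 m pipe system
    Q = K * A * dH / L   # discharge [m^3/s]
    print(f"dH = {dH:5.1f} m  ->  Q = {Q * 1e6 * 3600:6.3f} mL/h")
# 0.014 mL/h is hard to measure reliably; 1.44 mL/h is workable.
```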

 

How to cite: Nemes, A., Bazzocchi, P., Weiland, S., and van der Ploeg, M.: Some Norwegian soils behave differently: is it an inheritance from marine sedimentation?, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14852, https://doi.org/10.5194/egusphere-egu26-14852, 2026.

X5.322
|
EGU26-3077
|
ECS
Anna Jędras and Jakub Matusik

Photocatalysis is frequently presented in the literature as a straightforward route toward efficient degradation of pollutants, provided that the “right” material is selected. Layered double hydroxides (LDH) are often highlighted as promising photocatalysts due to their tunable composition and reported activity in dye degradation. Motivated by these claims, this study evaluated LDH as mineral analogs for photocatalytic water treatment, ultimately uncovering a series of unexpected limitations, methodological pitfalls, and productive surprises.

In the first stage, Zn/Cr, Co/Cr, Cu/Cr, and Ni/Cr LDHs were synthesized and tested for photocatalytic degradation of methylene blue (0.02 mM) and Acid Blue Dye 129 (0.3 mM). Contrary to expectations,1 photocatalytic performance was consistently low. After one hour of irradiation, concentration losses attributable to photocatalysis did not exceed 15%, while most dye removal resulted from adsorption. Despite extensive efforts to optimize synthesis protocols, catalyst composition, and experimental conditions, this discrepancy with previously published studies could not be resolved.
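One common way to separate the two contributions, sketched below with made-up numbers (not the study's data), is to run a dark control and subtract the adsorption-only loss from the total loss under irradiation.

```python
# Separating adsorption from photocatalysis with a dark control (toy numbers).
c0 = 0.020       # initial dye concentration [mM]
c_dark = 0.013   # after 1 h in the dark: adsorption only
c_light = 0.010  # after 1 h under irradiation: adsorption + photocatalysis

total = (c0 - c_light) / c0 * 100    # total removal: 50 %
adsorbed = (c0 - c_dark) / c0 * 100  # adsorption alone: 35 %
photocat = total - adsorbed          # photocatalytic share: 15 %
print(f"total {total:.0f} %, adsorption {adsorbed:.0f} %, photocatalysis {photocat:.0f} %")
```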

To overcome limitations related to particle dispersion, surface accessibility, and charge-carrier separation, a second strategy was pursued by incorporating clay minerals as supports.2 Zn/Cr LDH, identified as the most active composition in preliminary tests, was coprecipitated with kaolinite, halloysite, and montmorillonite. Experiments with methylene blue (0.1 mM) and Acid Blue 129 (0.3 mM) demonstrated enhanced adsorption capacities. However, photocatalytic degradation efficiencies remained poor, typically below 10% after one hour, indicating that apparent performance gains were largely adsorption-driven rather than photochemical.

This failure proved to be a turning point. Instead of abandoning LDH entirely, they were combined with graphitic carbon nitride (GCN) to form a heterostructure.3 This approach resulted in a dramatic improvement: after optimization of the synthesis protocol, 99.5% of 1 ppm estrone was degraded within one hour.4 Further modifications were explored by introducing Cu, Fe, and Ag into the LDH/GCN system. While Cu and Fe suppressed photocatalytic activity, silver, at an optimized loading, reduced estrone concentrations below the detection limit within 40 minutes.5

This contribution presents a full experimental arc - from promising hypotheses that failed, through misleading adsorption-driven “successes,” to an ultimately effective but non-intuitive solution - highlighting the value of negative results and surprises as drivers of scientific progress.

This research was funded by the AGH University of Krakow, grant number 16.16.140.315.

Literature:

1            N. Baliarsingh, K. M. Parida and G. C. Pradhan, Ind. Eng. Chem. Res., 2014, 53, 3834–3841.

2            A. Í. S. Morais, W. V. Oliveira, V. V. De Oliveira, L. M. C. Honorio, F. P. Araujo, R. D. S. Bezerra, P. B. A. Fechine, B. C. Viana, M. B. Furtini, E. C. Silva-Filho and J. A. Osajima, Journal of Environmental Chemical Engineering, 2019, 7, 103431.

3            B. Song, Z. Zeng, G. Zeng, J. Gong, R. Xiao, S. Ye, M. Chen, C. Lai, P. Xu and X. Tang, Advances in Colloid and Interface Science, 2019, 272, 101999.

4            A. Jędras, J. Matusik, E. Dhanaraman, Y.-P. Fu and G. Cempura, Langmuir, 2024, 40, 18163–18175.

5            A. Jędras, J. Matusik, J. Kuncewicz and K. Sobańska, Catal. Sci. Technol., 2025, 15, 6792–6804.

How to cite: Jędras, A. and Matusik, J.: False Starts and Silver Linings: A Photocatalytic Journey with Layered Double Hydroxides, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3077, https://doi.org/10.5194/egusphere-egu26-3077, 2026.

X5.323
|
EGU26-14763
Attila Nemes and Wolfgang Durner

Among soil physical analyses, determination of the soil particle-size distribution (PSD) is arguably the most fundamental. The standard methodology combines sieve analysis for sand fractions with sedimentation-based techniques for silt and clay. Established sedimentation methods include the pipette and hydrometer techniques. More recently, the Integral Suspension Pressure (ISP) method has become available, which derives PSD by inverse modeling of the temporal evolution of suspension pressure measured at a fixed depth in a sedimentation cylinder. Since ISP is based on the same physical principles as the pipette and hydrometer methods, their results should, in principle, agree.

The ISP methodology has been implemented in the commercial instrument PARIO (METER Group, Munich). While elegant, the method relies on pressure change measurements with a resolution of 0.1 Pa (equivalent to 0.01 mm of water column). Consequently, the PARIO manual strongly advises avoiding any mechanical disturbance such as thumping, bumping, clapping, vibration, or other shock events. This warning is essentially precautionary, because to date no systematic experimental investigation of such disturbances has been reported.
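To put these figures in perspective, here is a minimal sketch using the standard hydrostatic and Stokes settling formulas (assumed particle density and water viscosity):

```python
RHO_W, RHO_S = 1000.0, 2650.0  # water and mineral particle densities [kg/m^3]
G, MU = 9.81, 1.0e-3           # gravity [m/s^2]; water viscosity [Pa s] near 20 degC

# Hydrostatic check: p = rho * g * h, so 0.1 Pa is about 0.01 mm of water.
print(0.1 / (RHO_W * G) * 1000, "mm")  # ~0.0102 mm

# Stokes settling velocity, v = (rho_s - rho_w) * g * d^2 / (18 * mu),
# evaluated at the sand/silt and silt/clay boundaries.
for d_um in (63, 2):
    d = d_um * 1e-6
    v = (RHO_S - RHO_W) * G * d**2 / (18 * MU)
    print(f"d = {d_um:2d} um: v = {v:.2e} m/s, "
          f"~{0.2 / v / 3600:.2f} h to settle 0.2 m")
```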

To explore this issue, we prepared a single 30 g soil sample following standard PSD procedures and subjected it to 26 repeated PARIO measurement runs over a period of five months, each run lasting 12 h. Between runs, the suspension was remixed but otherwise not altered. The first ten runs (over ten days) were conducted without intentional disturbance to establish baseline repeatability. This was followed by eight runs with deliberately imposed and timed disturbances that generated single or repeated vibrations (“rocking and shocking”). After approximately two and five months, we conducted additional sets of five and three undisturbed runs, respectively.

We report how these mechanical disturbances, along with temperature variations during measurement and the time elapsed since sample pre-treatment, affected the derived PSD. The results provide a first quantitative assessment of how fragile—or robust—the ISP method and PARIO system really are when reality refuses to sit perfectly still.

 

How to cite: Nemes, A. and Durner, W.: Rocking and Shocking the PARIO™: How Sensitive Is ISP-Based Particle-Size Analysis to Mechanical Disturbance?, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14763, https://doi.org/10.5194/egusphere-egu26-14763, 2026.

X5.325
|
EGU26-19776
Nathan Steiger

All statistical tools come with assumptions. Yet many scientists treat statistics like a collection of black-box methods without learning the assumptions. Here I illustrate this problem using dozens of studies that claim to show that solar variability is a dominant driver of climate. I find that linear regression approaches are widely misused among these studies. In particular, they often violate the assumption of ‘no autocorrelation’ of the time series used, though it is common for studies to violate several or all of the assumptions of linear regression. The misuse of statistical tools has been a common problem across all fields of science for decades. This presentation serves as an important cautionary tale for the Earth Sciences and highlights the need for better statistical education and for statistical software that automatically checks input data for assumptions.
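As an illustration of the pitfall (a minimal sketch, not taken from the studies under review), regressing two independent random walks on each other routinely yields a spuriously "significant" slope; the Durbin-Watson statistic, which is close to 2 for uncorrelated residuals, flags the violation:

```python
import numpy as np
from statsmodels.regression.linear_model import OLS
from statsmodels.stats.stattools import durbin_watson
from statsmodels.tools import add_constant

rng = np.random.default_rng(0)

# Two independent random walks: smooth, highly autocorrelated series with
# no causal link, standing in for a solar index and a climate record.
n = 200
x = np.cumsum(rng.normal(size=n))
y = np.cumsum(rng.normal(size=n))

fit = OLS(y, add_constant(x)).fit()
print(f"slope p-value : {fit.pvalues[1]:.3g}")            # often spuriously "significant"
print(f"Durbin-Watson : {durbin_watson(fit.resid):.2f}")  # values far below 2 flag autocorrelation
```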

How to cite: Steiger, N.: Pervasive violation of statistical assumptions in studies linking solar variability to climate, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-19776, https://doi.org/10.5194/egusphere-egu26-19776, 2026.

Fieldwork
X5.326
|
EGU26-20375
|
ECS
Karin Bremer, Maria Staudinger, Jan Seibert, and Ilja van Meerveld

In catchment hydrology, long-term data collection often starts as part of a (doctoral) research project. In some cases, the data collection continues on a limited budget, often using the field protocol and data management plan designed for the initial short-term project. Challenges and issues with the continued data collection are likely to arise, especially when there are multiple changes in the people involved. It is especially difficult for researchers who were not directly involved in the fieldwork to understand the data; they must therefore rely on field notes and archived data. They then often encounter issues related to inconsistent metadata, such as inconsistent date-time formats and inconsistent or missing units, missing calibration files, and unclear organization of files and processing scripts.

While the specific issues may sound very case-dependent, based on our own and others’ experiences from various research projects, it appears that many issues recur more frequently than one might expect (or be willing to admit). In this presentation, we will share our experiences with bringing spatially distributed groundwater level data collected in Sweden and Switzerland from the field to ready-to-use files. Additionally, we provide recommendations for overcoming the challenges during field data collection, data organization, documentation, and data processing using scripts. These include having a clear, detailed protocol for the fieldwork and the data-processing steps, and ensuring it is followed. Although protocols are often used, they are frequently not detailed enough or are not used as designed. The protocols might also not take into account the further use of the data, such as for hydrological modelling, beyond field collection.
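As a minimal illustration of the kind of self-describing metadata that prevents the issues listed above (hypothetical field names and file names):

```python
# A minimal, self-describing metadata record (hypothetical names): explicit
# units, an unambiguous timestamp convention, and pointers to the calibration
# file and processing script travel with the data.
metadata = {
    "site_id": "SE-01",
    "variable": "groundwater_level",
    "unit": "m below ground surface",
    "timezone": "UTC",
    "timestamp_format": "YYYY-MM-DDTHH:MM:SSZ (ISO 8601)",
    "calibration_file": "loggers/SE-01_calibration_2019.csv",
    "processing_script": "scripts/clean_groundwater_levels.py",
}
```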

How to cite: Bremer, K., Staudinger, M., Seibert, J., and van Meerveld, I.: From Field to File: challenges and recommendations for handling hydrological data, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-20375, https://doi.org/10.5194/egusphere-egu26-20375, 2026.

Modelling
X5.327
|
EGU26-8359
|
ECS
Nils Hohmuth, Nora L. S. Fahrenbach (presenting), Yibiao Zou (presenting), Josephine Reek, Felix Specker, Tom Crowther, and Constantin M. Zohner

Forests are powerful climate regulators: Their CO2 uptake provides a global biogeochemical cooling effect, and in the tropics, this cooling is further strengthened by evapotranspiration. Given that temperature-related mortality is a substantial global health burden, which is expected to increase under climate change, we set out to test what we thought was a promising hypothesis: Can forests reduce human temperature-related mortality from climate change?

To test this, we used simulated temperature responses to reforestation from six different Earth System Models (ESMs) under a future high-emission scenario, and paired them with age-specific population data and three methodologically different temperature-mortality frameworks (Cromar et al. 2022, Lee et al. 2019, and Carleton et al. 2022). We expected to find a plausible range of temperature-related mortality outcomes attributable to future global forest conservation efforts.

Instead, our idea ran head-first into a messy reality. Firstly, rather than showing a clear consensus, the ESMs produced a wide range of temperature responses to reforestation, varying both in magnitude and sign. This is likely due to the albedo effect and to differences in the climatological tree cover and land-use processes implemented by the models, in addition to internal variability, which we could not reduce because only one ensemble member per model exists. Consequently, the models disagreed in many regions on whether global forest conservation and reforestation would increase or decrease temperature by the end of the century.

The uncertainties deepened when we incorporated the mortality data. Mortality estimates varied by up to a factor of 10 depending on the ESM and mortality framework used. Therefore, in the end, the models could not even agree on whether forests increased or decreased temperature-related mortality. We found ourselves with a pipeline that amplified uncertainties of both the ESM and mortality datasets.

For now, the question remains wide open: Do trees save us from temperature-related deaths in a warming world, and if so, by how much?

 

* The first two authors contributed equally to this work.

How to cite: Hohmuth, N., Fahrenbach (presenting), N. L. S., Zou (presenting), Y., Reek, J., Specker, F., Crowther, T., and Zohner, C. M.: Do trees save lives under climate change? It’s complicated , EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-8359, https://doi.org/10.5194/egusphere-egu26-8359, 2026.

X5.328
|
EGU26-19755
|
ECS
Rémy Lapere, Ruth Price, Louis Marelle, Lucas Bastien, and Jennie Thomas

Aerosol-cloud interactions remain one of the largest uncertainties in global climate modelling. This uncertainty arises because of the dependence of aerosol-cloud interactions on many tightly coupled atmospheric processes; the non-linear response of clouds to aerosol perturbations across different regimes; and the challenge of extracting robust signals from noisy meteorological observations. The problem is particularly acute in the Arctic, where sparse observational coverage limits model constraints, pristine conditions can lead to unexpected behaviour, and key processes remain poorly understood.

A common way to tackle the challenge of uncertainties arising from aerosol-cloud interactions in climate simulations is to conduct sensitivity experiments using cloud and aerosol microphysics schemes based on different assumptions and parameterisations. By comparing these experiments, key results can be constrained by sampling the range of unavoidable structural uncertainties in the models. Here, we apply this approach to a case study of an extreme, polluted warm air mass in the Arctic that was measured during the MOSAiC Arctic expedition in 2020. We simulated the event in the WRF-Chem-Polar regional climate model both with and without the anthropogenic aerosols from the strong pollution event to study the response of clouds and surface radiative balance. To understand the sensitivity of our results to the choice of model configuration, we tested two distinct, widely-used cloud microphysics schemes.

Initial results showed that the two schemes simulated opposite cloud responses: one predicted a surface cooling from the pollution that was reasonably in line with our expectations of the event, while the other predicted the opposite behaviour in the cloud response and an associated surface warming. These opposing effects seemed to suggest that the structural uncertainties in the two schemes relating to clean Arctic conditions were so strong that they even obscured our ability to understand the overall sign of the surface radiative response to the pollution.

However, since significant model development was required to couple these two cloud microphysics schemes to the aerosol fields in our model, there was another explanation that we couldn’t rule out: a bug in the scheme that was producing the more unexpected results. In this talk, we will explore the challenges of simulating the Arctic climate with a state-of-the-art chemistry-climate model and highlight how examples like this underscore the value of our recent efforts to align our collaborative model development with software engineering principles and Open Science best practices.

How to cite: Lapere, R., Price, R., Marelle, L., Bastien, L., and Thomas, J.: Opposite cloud responses to extreme Arctic pollution: sensitivity to cloud microphysics, or a bug?, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-19755, https://doi.org/10.5194/egusphere-egu26-19755, 2026.

X5.329
|
EGU26-4587
Anna Zehrung, Andrew King, Zebedee Nicholls, Mark Zelinka, and Malte Meinshausen

“Show your working!” – is the universal phrase drilled into science and maths students to show a clear demonstration of the steps and thought processes used to reach a solution (and to be awarded full marks on the exam). 

Beyond the classroom, “show your working” becomes the methods section on every scientific paper, and is critical for the transparency and replicability of the study. However, what happens if parts of the method are considered assumed knowledge, or cut in the interests of a word count? 

An inability to fully replicate the results of a study became the unexpected glitch at the start of my PhD. Eager to familiarise myself with global climate model datasets, I set out to replicate the results of a widely cited paper which calculates the equilibrium climate sensitivity (ECS) across 27 climate models. The ECS is the theoretical global mean temperature response to a doubling of atmospheric CO2 relative to preindustrial levels. A commonly used method to calculate the ECS is to apply an ordinary least squares regression to global annual mean temperature and radiative flux anomalies. 

Despite the simplicity of a linear regression between two variables, we obtained ECS estimates for some climate models that differed from those reported in the original study, even though we followed the described methodology. However, the methodology provided only limited detail on how the raw climate model output – available at regional and monthly scales – was processed to obtain global annual mean anomalies. Differences in these intermediate processing steps can, in turn, lead to differences in ECS estimates.
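For concreteness, here is a minimal sketch of the regression step, assuming annual-mean anomalies from an abrupt-4xCO2 run relative to a preindustrial control (function names are illustrative); the seemingly trivial choices inside global_annual_mean, such as area weighting and the order of averaging, are exactly the kind of processing step whose omission from methods sections matters:

```python
import numpy as np

def global_annual_mean(field, lat):
    """Area-weighted global mean of a (year, lat, lon) field. The weighting
    and the order of averaging are the 'simple' steps that need documenting."""
    w = np.cos(np.deg2rad(lat))
    return np.average(field.mean(axis=2), axis=1, weights=w)

def gregory_ecs(dT, dN):
    """Gregory-style ECS from abrupt-4xCO2 anomalies: regress the net
    top-of-atmosphere flux anomaly dN on the temperature anomaly dT;
    the x-intercept gives the equilibrium warming, halved for 2xCO2."""
    slope, intercept = np.polyfit(dT, dN, 1)
    return -intercept / (2.0 * slope)
```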

Limited reporting of data-processing steps is common in the ECS literature. Whether these steps are considered assumed knowledge or deemed too simple to warrant explicit description, we demonstrate that, for some models, they can materially affect the resulting ECS estimate. While the primary aim of our study is to recommend a standardised data-processing pathway for ECS calculations, a secondary aim is to highlight the lack of transparency in key methodological details across the literature. A central takeaway is the importance of clearly documenting all processing steps – effectively, to “show your working” – and to emphasise the critical role of a detailed methods section.

How to cite: Zehrung, A., King, A., Nicholls, Z., Zelinka, M., and Meinshausen, M.: The importance of describing simple methods in climate sensitivity literature, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-4587, https://doi.org/10.5194/egusphere-egu26-4587, 2026.

X5.330
|
EGU26-17373
Laetitia Le Pourhiet

Free-slip boundary conditions are routinely used in 3D geodynamic modelling because they reduce computational cost, avoid artificial shear zones at domain edges, and simplify the implementation of large-scale kinematic forcing. However, despite their apparent neutrality, our experiments show that free-slip boundaries systematically generate first-order artefacts that propagate deep into the model interior and can severely distort the interpretation of continental rifting simulations.

Here we present a set of 3D visco-plastic models inspired by the South China Sea (SCS) that were originally designed to study the effect of steady-state thermal inheritance and pluton-controlled crustal weakening. Unexpectedly, in all simulations except those with a very particular inverted rheological profile (POLC), the free-slip boundary on the “Vietnam side” of the domain generated a persistent secondary propagator, producing unrealistic amounts of lithospheric thinning in the southwest corner. This artefact appeared irrespective of crustal rheology, seeding strategy, or the presence of thermal heterogeneities.

We identify three systematic behaviours induced by free-slip boundaries in 3D:
(1) forced rift nucleation at boundary-adjacent thermal gradients,
(2) artificial propagator formation that competes with the intended first-order rifting, and
(3) rotation or shearing of micro-blocks not predicted by tectonic reconstructions.

These artefacts originate from the inability of free-slip boundaries to transmit shear traction, which artificially channels deformation parallel to the boundary when lateral thermal or mechanical contrasts exist. In 3D, unlike in 2D, the combination of oblique extension and boundary-parallel velocity freedom leads to emergent pseudo-transform behaviour that is entirely numerical.
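For reference, in its usual formulation the free-slip condition on a boundary with outward normal $\mathbf{n}$ combines impermeability with vanishing tangential traction; the latter is precisely the component that prevents boundary-parallel shear from being transmitted:

$$
\mathbf{u}\cdot\mathbf{n} = 0,
\qquad
(\boldsymbol{\sigma}\,\mathbf{n}) - \left[(\boldsymbol{\sigma}\,\mathbf{n})\cdot\mathbf{n}\right]\mathbf{n} = \mathbf{0}.
$$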

Our results highlight a key negative outcome: free-slip boundaries cannot be assumed neutral in 3D rift models, especially when studying localisation, obliquity, multi-propagator dynamics, or the competition between structural and thermal inheritance. We argue that many published 3D rift models may unknowingly include such artefacts.

 

How to cite: Le Pourhiet, L.: The Hidden Propagator: How Free-Slip Boundaries Corrupt 3D Simulations, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-17373, https://doi.org/10.5194/egusphere-egu26-17373, 2026.

X5.331
|
EGU26-1897
|
ECS
Ryosuke Nagumo, Ross Woods, and Miguel Rico-Ramirez

Since the robust performance of Long Short-Term Memory (LSTM) networks was established, their physics-awareness and interpretability have become central topics in hydrology. Seminal works (e.g., Lees et al. (2022)) have argued that LSTM internal states spontaneously capture hydrological concepts, and suggested that cell states can represent soil moisture dynamics despite not being explicitly trained on such data. Conversely, more recent studies (e.g., Fuente et al. (2024)) demonstrated that mathematical equifinality causes non-unique LSTM representations with different initialisations.

In this work, we report an arguably more systematic "bug" in the software environment that causes instability in internal states. We initially aimed to investigate how internal states behave differently when trained with or without historical observation data. We encountered this issue while reassembling a computational stack and attempting to replicate the initial results, as the original Docker environment was not preserved. While random seeds have been shown to lead to different internal-state trajectories, we found that the computational backend (e.g., changing CUDA versions, PyTorch releases, or dependent libraries) also produces them. These are the findings:

  • In gauged catchments: Discharge predictions remained stable (in one catchment, NSE was 0.88 ± 0.01) across computational environments, yet the internal temporal variations (e.g., silhouette, mean, and std of cell states) fluctuated noticeably.
  • In pseudo-ungauged scenarios: The prediction performance itself became more reliant on the computational environment (in the same catchment, NSE dropped to 0.31 ± 0.15), yet the internal temporal variations of the cell states fluctuated only as much as they did during the gauged scenario.

These findings suggest that instability in the computational environment not only risks altering interpretability in training (by altering internal states) but also casts doubt on reliability in extrapolation (by altering outputs).

It is worth mentioning that we confirmed this is not a replicability issue; completely identical cell states and predictions are produced when the computational environment, seeds, and training data are held constant. We argue that such stability must be established as a standard benchmark before assigning physical meaning to deep learning internals.
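As a minimal sketch of the stability checks such a benchmark might standardize (PyTorch assumed; the helper name is hypothetical):

```python
import os, random
import numpy as np
import torch

def pin_run(seed: int = 42) -> None:
    """Pin every controllable source of nondeterminism for a PyTorch run.
    Note this cannot pin the stack itself: CUDA/PyTorch/library versions
    still have to be frozen (e.g. in a container image) and recorded."""
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # needed for deterministic cuBLAS
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.use_deterministic_algorithms(True)  # error out on nondeterministic ops
    torch.backends.cudnn.benchmark = False    # disable autotuned, run-dependent kernels

pin_run()
print(torch.__version__, torch.version.cuda)  # record the stack alongside the results
```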

How to cite: Nagumo, R., Woods, R., and Rico-Ramirez, M.: The Unreliable Narrator: LSTM Internal States Fluctuate with Software Environments Despite Robust Predictions, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1897, https://doi.org/10.5194/egusphere-egu26-1897, 2026.

X5.332
|
EGU26-21915
Claudia Brauer

In 2014 we developed the Wageningen Lowland Runoff Simulator (WALRUS), a conceptual rainfall-runoff model for catchments with shallow groundwater. Water managers and consultants were involved in model development. In addition, they sponsored the steps necessary for application: making an R package, user manual and tutorial, publishing these on GitHub and organising user days. WALRUS is now used operationally by several Dutch water authorities and for scientific studies in the Netherlands and abroad. When developing the model, we made certain design choices. Now, after twelve years of application in water management, science and education, we re-evaluate the consequences of those choices.

The lessons can be divided into things we learned about the model’s functioning and things we learned from how people use the model. Concerning the model’s functioning, we found that keeping the model representation close to reality has advantages and disadvantages. It makes it easy to understand what happens and why, but it also causes unrealistic expectations. Certain physically based relations hampered model performance because they contained thresholds, and deriving parameter values from field observations resulted in uncertainty and discussions about spatial representativeness.

Concerning practical use, we found that the easy-to-use, open-source R package with manual was indispensable for new users. Nearly all users preferred the default options over the user-defined functions we had implemented to allow tailor-made solutions. Parameter calibration was more difficult than expected because the feedbacks necessary to simulate lowland hydrological processes increase the risk of equifinality. In addition, a lack of suitable discharge data for calibration prompted requests for default parameter values. Finally, the model was subject to unintended use, sometimes violating basic assumptions and sometimes revealing unique opportunities we had not thought of ourselves.

Brauer, C. C., Teuling, A. J., Torfs, P. J. J. F., and Uijlenhoet, R. (2014): The Wageningen Lowland Runoff Simulator (WALRUS): a lumped rainfall-runoff model for catchments with shallow groundwater, Geosci. Model Dev., 7, 2313–2332, doi:10.5194/gmd-7-2313-2014

How to cite: Brauer, C.: Re-evaluating the WALRUS rainfall-runoff model design after twelve years of application, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21915, https://doi.org/10.5194/egusphere-egu26-21915, 2026.

Posters virtual: Fri, 8 May, 14:00–18:00 | vPoster spot 5

The posters scheduled for virtual presentation are given in a hybrid format for on-site presentation, followed by virtual discussions on Zoom. Attendees are asked to meet the authors during the scheduled presentation & discussion time for live video chats; on-site attendees are invited to visit the virtual poster sessions at the vPoster spots (equal to PICO spots). If authors uploaded their presentation files, these files are also linked from the abstracts below. The button to access the Zoom meeting appears just before the time block starts.
Discussion time: Fri, 8 May, 16:15–18:00
Display time: Fri, 8 May, 14:00–18:00
Chairpersons: Ignacio Aguirre, Anita Di Chiara, Zoltán Erdős

EGU26-20122 | ECS | Posters virtual | VPS1

Developing Matrix-Matched Empirical Calibrations for EDXRF Analysis of Peat-Alternative Growth Media 

Thulani De Silva, Carmela Tupaz, Maame Croffie, Karen Daly, Michael Gaffney, Michael Stock, and Eoghan Corbett
Fri, 08 May, 14:21–14:24 (CEST)   vPoster spot 5

A key reason for the widespread use of peat-based growth media in horticulture is their reliable nutrient availability when supplemented with fertilisers. However, due to environmental concerns over continued peat extraction and use, peat alternatives (e.g., coir, wood fibre, composted bark, biochar) are increasingly being used commercially. These alternative media often blend multiple materials, making it crucial to understand the elemental composition of each component and the nutrient interactions between them. This study evaluates whether benchtop Energy Dispersive X-ray Fluorescence (EDXRF) can provide a rapid method for determining the elemental composition of peat-alternative components.

Representative growing media components (peat, coir, wood fibre, composted bark, biochar, horticultural lime, perlite, slow-release fertilisers, and trace-element fertiliser) were blended in different ratios to generate industry-representative mixes. Individual components and prepared mixes were dried and milled to ≤80 μm. An industry-representative mix (QC-50: 50% peat, 30% wood fibre, 10% composted bark, 10% coir, with fertiliser and lime additions) and 100% peat were analysed by EDXRF (Rigaku NEX-CG) for P, K, Mg, Ca, S, Fe, Mn, Zn, Cu and Mo, and compared against ICP-OES reference measurements. The instrument’s fundamental parameters (FP) method using a plant-based organic materials library showed large discrepancies relative to ICP-OES (relative differences: 268–390,084%) for most elements in both QC-50 and peat, with the exception of Ca in QC-50 (11%). These results confirm that the FP approach combined with loose-powder preparation is unsuitable for accurate elemental analysis of organic growing media.

An empirical calibration was subsequently developed using 18 matrix-matched standards (CRMs, in-house growing media and individual component standards). Matrix matching is challenging because the mixes are mostly organic by volume, yet variable inorganic amendments (e.g., lime, fertilisers, and sometimes perlite) can strongly influence XRF absorption/enhancement effects. Calibration performance was optimised iteratively, using QC-50 as the validation sample, until relative differences were <15% for all elements. When applied to 100% peat, agreement with ICP-OES results improved substantially for some macro-elements (e.g., Mg 10%, Ca 1%, S 19%) but remained poor for most trace elements (28–96%), demonstrating limited transferability of this calibration method across the different elements and matrices tested.
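
For illustration, the calibration-and-validation loop reduces to something like the following sketch (made-up numbers, not measured values from this study): fit a linear empirical calibration per element from the matrix-matched standards, then test the relative difference on the QC sample against the <15% acceptance criterion.

    # Hypothetical sketch of the empirical calibration logic; illustrative
    # numbers only, not data from this study.
    import numpy as np

    # Raw EDXRF response vs. ICP-OES reference concentration (mg/kg) for one
    # element across matrix-matched standards.
    xrf_response  = np.array([0.8, 1.9, 4.2, 8.1, 15.7])
    icp_reference = np.array([100., 250., 520., 990., 2010.])

    slope, intercept = np.polyfit(xrf_response, icp_reference, 1)

    qc_response, qc_reference = 6.0, 760.0     # hypothetical QC-50 pair
    qc_predicted = slope * qc_response + intercept
    rel_diff = abs(qc_predicted - qc_reference) / qc_reference * 100
    print(f"QC relative difference: {rel_diff:.1f}% "
          f"({'pass' if rel_diff < 15 else 'refine calibration'})")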

Overall, these results demonstrate that loose-powder preparation does not provide sufficiently robust accuracy for EDXRF analysis of organic growing media, even with meticulous empirical matrix-matched calibration. We are therefore developing a pressed-pellet method using a low-cost wax binder to improve sample homogeneity (packing density) and calibration transferability. Twenty unknown mixes will be analysed using both loose-powder and pressed-pellet calibrations, and agreement with reference data (ICP-OES) will be used to validate the method, supporting the development of EDXRF as a novel approach for growing media analysis.

How to cite: De Silva, T., Tupaz, C., Croffie, M., Daly, K., Gaffney, M., Stock, M., and Corbett, E.: Developing Matrix-Matched Empirical Calibrations for EDXRF Analysis of Peat-Alternative Growth Media, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-20122, https://doi.org/10.5194/egusphere-egu26-20122, 2026.
