SM2.3 | Quantifying and Interpreting Uncertainties in Seismic Tomography
Co-organized by GD4
Convener: Auggie Marignier (ECS) | Co-conveners: Sixtine Dromigny (ECS), Adrian Marin Mag, Paula Koelemeijer
Posters on site | Attendance Tue, 05 May, 14:00–15:45 (CEST) | Display Tue, 05 May, 14:00–18:00
Hall X1
Tue, 14:00
Seismic data come in many forms, from raw waveforms to tomographic models. Throughout acquisition, processing, and inversion, uncertainties propagate and obscure our understanding of Earth's interior and subsurface. Quantifying and interpreting these uncertainties is vital for robust geological and geodynamical inferences.
In seismic tomography and imaging, uncertainties are often described by resolution metrics, such as resolution matrices or resolving kernels, or by summary statistics derived from posterior samples via Bayesian methods. Recently, machine learning techniques—including variational inference, learned distributions, and likelihood-free approaches—have been introduced to quantify uncertainty, offering promising alternatives. However, fully understanding the meaning of these uncertainties, their interactions, and their influence on model interpretation remains a major challenge.
Once quantified, how do these uncertainties affect downstream applications in geodynamics, mineral physics, or earthquake hazard assessment? Are tomographic inferences reliable enough to support these fields, or do uncertainties limit our conclusions?
Beyond observational data, other uncertainty sources—such as model parameterisation, prior assumptions, and the choice of forward models—add complexity. How do these modelling choices influence the recovered Earth structure and its uncertainties? How can we distinguish genuine Earth features from modelling artefacts?
This session invites contributions that:
• Develop or apply novel methods for quantifying uncertainty in seismic tomography,
• Explore how uncertainty impacts Earth structure interpretations,
• Compare different uncertainty quantification approaches,
• Address model validation and benchmarking amid uncertainty,
• Investigate how tomographic uncertainties propagate into fields like geodynamics, mineral physics, or hazard modelling.
We welcome studies covering global and regional scales, body-wave and surface-wave tomography, full-waveform inversion, ambient noise imaging, and any seismic method where uncertainty is crucial. Cross-disciplinary and innovative methodological contributions are particularly encouraged.

Posters on site: Tue, 5 May, 14:00–15:45 | Hall X1

Display time: Tue, 5 May, 14:00–18:00
X1.118 | EGU26-780 | ECS
Xuebin Zhao and Andrew Curtis

Seismic full waveform inversion (FWI) is a powerful technique that uses seismic waveform data to generate high-resolution images of the Earth's interior. However, significant uncertainties exist in FWI solutions due to imperfect acquisition geometries, inherent noise in the data, and the nonlinearity of the forward problem. Probabilistic Bayesian FWI estimates the family of all possible model solutions and quantifies their uncertainties by calculating the so-called posterior probability density function (pdf) of model parameter values of interest. In a linearised framework, the posterior pdf can be represented as a Gaussian distribution centred around the maximum a posteriori (MAP) solution, and the associated uncertainties are described by an a posteriori covariance matrix derived from the inverse Hessian matrix. Recent advancements have introduced nonlinear methods, such as variational inference, to solve Bayesian FWI problems efficiently. Their solutions quantify full uncertainties, including those created by the nonlinearity of the problem. In this study, we apply both linearised and fully nonlinear methods to 2D acoustic Bayesian FWI problems. In particular, we use a physically structured variational inference algorithm for the nonlinear case, in which a transformed Gaussian distribution is optimised to approximate the full posterior pdf, such that the results can be compared fairly with those from the linearised, Gaussian-based method. We also employ an independent nonlinear variational algorithm – Stein variational gradient descent – for validation. The results show that while both linearised and nonlinear methods adequately recover the posterior mean models, they exhibit significantly different posterior uncertainty structures, especially at layer interfaces, due to the linearisation of wave physics.
In addition, we show that linearised uncertainties are inaccurate since they cannot fit observed waveform data, and they yield biased estimates of inferred meta-properties such as volumes of geological bodies. This work therefore justifies the application of fully nonlinear inversion methods in Bayesian FWI when accurate uncertainty estimates are needed.
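As a sketch of the linearised recipe described above (a toy linear problem standing in for linearised FWI; the operator, noise level and prior below are illustrative assumptions, not the authors' setup), the Gaussian posterior is centred on the MAP model with covariance given by the inverse Hessian:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(20, 3))              # linear(ised) forward operator, illustrative
m_true = np.array([1.0, -2.0, 0.5])
sigma_d, sigma_m = 0.1, 1.0               # data noise / prior standard deviations
d = G @ m_true + rng.normal(scale=sigma_d, size=20)

# Gauss-Newton Hessian of the negative log posterior (exact for a linear problem)
H = G.T @ G / sigma_d**2 + np.eye(3) / sigma_m**2
C_post = np.linalg.inv(H)                 # posterior covariance = inverse Hessian
m_map = C_post @ (G.T @ d) / sigma_d**2   # MAP model (zero prior mean)
post_std = np.sqrt(np.diag(C_post))       # linearised uncertainty estimate
```

For a genuinely nonlinear forward model this Gaussian is only a local approximation around the MAP point, which is exactly the limitation the abstract's nonlinear comparison targets.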

How to cite: Zhao, X. and Curtis, A.: Uncertainty Quantification in Full Waveform Inversion: Linearised versus Fully Nonlinear Methods, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-780, https://doi.org/10.5194/egusphere-egu26-780, 2026.

X1.119 | EGU26-1022 | ECS
Auggie Marignier, Ben Lambert, Malcolm Sambridge, and Paula Koelemeijer

Tomographic models of the inner core commonly assume a form of anisotropy that is transversely isotropic with a fast direction parallel to the Earth's rotation axis. However, decades of debate have proposed various forms of this anisotropy, including the presence of a distinct innermost inner core or rotations of the direction of the fast axis. These assumptions have yet to be directly compared to determine which is best supported by the available data. Bayesian model comparison via the Bayesian Evidence provides this assessment but has historically been difficult to calculate, particularly in high-dimensional settings, and has thus largely been ignored in seismic tomography. New machine learning-based techniques can now be used to estimate the Evidence with greater stability and less uncertainty. In this work we demonstrate various methods, including the Savage-Dickey density ratio, the learnt harmonic mean estimator and trans-conceptual sampling, applied to the comparison of inner core anisotropy tomography models.
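A minimal illustration of the Savage-Dickey density ratio mentioned above, using a conjugate normal toy problem rather than the inner-core application (all numbers are illustrative assumptions): for nested models, the Bayes factor equals the ratio of posterior to prior density at the nested parameter value.

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true, sigma, n = 0.3, 1.0, 50
data = rng.normal(theta_true, sigma, n)      # observations under the extended model

tau2 = 1.0                                   # prior variance of theta (extended model)
var_post = 1.0 / (1.0 / tau2 + n / sigma**2) # conjugate normal posterior
mu_post = var_post * data.sum() / sigma**2

def normal_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

# Savage-Dickey: Bayes factor of the nested model (theta = 0) over the
# extended model is the posterior density at 0 divided by the prior density at 0.
bf_nested_vs_extended = normal_pdf(0.0, mu_post, var_post) / normal_pdf(0.0, 0.0, tau2)
```

A ratio below one favours the extended model; the appeal of the estimator is that only the extended model's posterior needs to be sampled.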

How to cite: Marignier, A., Lambert, B., Sambridge, M., and Koelemeijer, P.: Bayesian Model Comparison of Inner Core Anisotropy Models, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1022, https://doi.org/10.5194/egusphere-egu26-1022, 2026.

X1.120 | EGU26-1642 | ECS
Noami Kaplunov and Andreas Fichtner

Solving inverse problems allows for the estimation of system properties or model parameters that cannot be measured directly. However, the models arising from various experts differ more than their individual uncertainty estimates might suggest [1]. A crucial reason for this is the combined impact of many small subjective choices undertaken during the inversion procedure. This includes a) the selected subset of data, b) the model space parametrisation, c) the type of forward model, d) the chosen numerical and optimisation methods, and e) regularisation.

In this work, we present an adapted Bayesian inference method that explicitly incorporates these subjective choices – collectively referred to as the control parameters – as a random variable to obtain estimates of ensemble statistics. It can be shown that, while the ensemble model m_e may be computed simply by averaging over N individual models m_i (i = 1, 2, ..., N), the ensemble covariance C_e consists of a sum of two terms,

C_e = (1/N) Σ_i C_i + (1/N) Σ_i (m_i − m_e)(m_i − m_e)^T.    (1)

The first term represents the mean of the individual posterior covariances C_i, and the second term represents the variance of the mean models.
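This two-term decomposition (the law of total variance) is easy to check numerically; the ensemble size, models and covariances below are random illustrative stand-ins, not the community experiment itself.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5                                        # number of "experts", illustrative
models = [rng.normal(size=2) for _ in range(N)]        # individual mean models m_i
covs = []
for _ in range(N):
    A = rng.normal(size=(2, 2))
    covs.append(A @ A.T + 0.1 * np.eye(2))   # individual posterior covariances C_i (SPD)

m_e = np.mean(models, axis=0)                # ensemble mean model
within = np.mean(covs, axis=0)               # term 1: mean of individual covariances
between = np.mean([np.outer(m - m_e, m - m_e) for m in models], axis=0)  # term 2
C_e = within + between                       # ensemble covariance
```

Because both terms are positive semi-definite, the ensemble covariance can only grow relative to either term alone, which is why a condensed ensemble (small second term) yields optimistic uncertainties.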

The theoretical developments are illustrated with a novel small-scale "Community Monte Carlo" experiment, where a group of experts was asked to select suitable regularisation (tuning) parameters to obtain a solution to a linear straight-ray tomography problem. The regularisation parameters include, e.g., the data prior standard deviation σD and the model prior standard deviation σM.

Crucially, the computation of ensemble estimates reveals that the average of individual covariances – the first term in eq. (1) – dominates the ensemble covariance. This is due to individuals favouring smaller values of σM, resulting in similar-looking models that deviate minimally from the prior, and larger data errors σD, leading to comparatively large posterior covariance matrices tending towards the prior model covariance.

Our proof-of-concept suggests that the field of seismic tomography should not strive for consensus among models, which risks condensing the ensemble and producing overly optimistic uncertainty estimates. Instead, the diversity of expert-derived models can be seen as an opportunity for "Community Monte Carlo," emphasising the need to actively explore a broader range of plausible subjective choices and rigorously quantify their effect on model uncertainty.

References:

[1] Fichtner, A., Ritsema, J. and Thrastarson, S.: A high-resolution discourse on seismic tomography. Proc. R. Soc. A, 481(2320): 20240955, 2025. https://doi.org/10.1098/rspa.2024.0955

How to cite: Kaplunov, N. and Fichtner, A.: A Community Monte Carlo Approach for Quantifying Subjectivity-Driven Ensemble Uncertainty in Inverse Problems, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1642, https://doi.org/10.5194/egusphere-egu26-1642, 2026.

X1.121 | EGU26-2663
Malcolm Sambridge, Andrew Valentine, and Juerg Hauser

Over the past several decades, trans-dimensional Bayesian sampling has been widely applied in the geosciences. Most implementations have used the Reversible-jump Markov chain Monte Carlo (RJ-McMC) algorithm. This approach allows sampling across variably dimensioned model parameterizations and hierarchical noise models. Due to practical limitations, reversible jump is restricted to cases where the number of free parameters changes in a regular sequence, usually by addition or subtraction of a single variable. Furthermore, jumps between model dimensions rely on bespoke mathematical transformations that are only valid within a particular parametrization class. As a result, the range of model classes that can be practically considered is limited, and McMC balance conditions must be rederived for each class of problem. A framework for trans-conceptual Bayesian sampling, which is a generalization of trans-dimensional sampling, is presented. Trans-C Bayesian inversion allows exploration across a finite, but arbitrary, set of conceptual models, i.e. ones where the number of variables, the type of model basis function, the nature of the forward problem, and even assumptions on the class of measurement noise statistics may all vary independently.

A key feature of the new framework is that it avoids parameter transformations and thereby lends itself to development of automatic McMC algorithms, i.e. where the details of the sampler do not require knowledge of the parameterization details. Algorithms implementing Bayesian conceptual model sampling are illustrated with examples drawn from geophysics, using real and synthetic data. Comparison with reversible-jump illustrates that trans-C sampling produces statistically identical results for situations where the former is applicable, but also allows sampling in situations where trans-D would be impractical, including asking the data to choose between competing forward models.
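A minimal sketch of sampling across two conceptual models, with a clear caveat: this is not the authors' Trans-C algorithm, but a toy sampler in its spirit, in which switch proposals draw the new model's parameters from that model's prior, so the acceptance ratio reduces to a likelihood ratio and no Jacobians or parameter transformations are needed. Data, models and priors are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 30)
data = 1.0 + 2.0 * x + rng.normal(scale=0.2, size=x.size)   # truth: a linear trend

def loglike(k, theta):
    # conceptual model 0: constant mean; conceptual model 1: linear trend
    pred = theta[0] if k == 0 else theta[0] + theta[1] * x
    return -0.5 * np.sum((data - pred) ** 2) / 0.2**2

def prior_draw(k):
    return rng.normal(0.0, 3.0, size=1 if k == 0 else 2)    # broad Gaussian priors

k, theta = 0, prior_draw(0)
counts = [0, 0]
for _ in range(5000):
    k_new = 1 - k                       # propose the other conceptual model,
    theta_new = prior_draw(k_new)       # with parameters drawn from its prior
    # prior-draw proposals cancel the prior terms: accept on the likelihood ratio
    if np.log(rng.uniform()) < loglike(k_new, theta_new) - loglike(k, theta):
        k, theta = k_new, theta_new
    counts[k] += 1
```

Under equal model priors, the fraction of iterations spent in each model approximates its posterior probability; here the data should strongly favour the linear-trend model.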

How to cite: Sambridge, M., Valentine, A., and Hauser, J.: Trans-Conceptual Inversion: Bayesian Inference with Competing Assumptions, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-2663, https://doi.org/10.5194/egusphere-egu26-2663, 2026.

X1.122 | EGU26-3644
Malcolm Sambridge, Jiawen He, Kit Chaivannacoopt, Juerg Hauser, Michael Koch, Fabrizio Magrini, Augustin Marignier, and Andrew Valentine

Inference problems within the geosciences vary considerably in terms of size and scope, ranging from the detection of changepoints in 1D time/depth models to the construction of complex 3D or 4D models of the Earth. Solving an inverse problem typically requires fusing various classes of data, each associated with its own forward model. The choice of an appropriate inference method is itself not obvious, and a substantial investment of time and effort is required in software development and education. Many researchers have developed bespoke inversion and parameter-estimation algorithms tailored to their specific needs. The associated software is typically specific to the particular application and minimally documented, requiring significant investment by new researchers to master. This is entirely understandable, as generalisation and ongoing support of inference codes require significant time and effort that frequently lie beyond the primary objectives of the research. As a result, the all-important experimentation required to choose an appropriate inversion method for a new data set or domain is often not practical. Furthermore, design choices made in existing software implementations often dictate those of subsequent researchers and influence the scientific direction taken.

 

The Common Framework for Inference, CoFI, is an open-source project which aims to capture the inherent commonalities present in all types of inverse problems, independent of the specific methods employed to solve them. CoFI codifies the definition of an inference problem and then provides an interface to reliable and sophisticated third-party packages, such as SciPy and PyTorch, to tackle a broad range of inverse problems. The modular and object-oriented design of CoFI, supplemented by a comprehensive suite of tutorials and practical examples, ensures its accessibility to users of all skill levels, from experts to novices. This not only has the potential to streamline research and promote best practice, but also to support education and STEM training. This poster gives an overview of CoFI through domain-relevant examples, from optimisation to probabilistic sampling. With a focus on CoFI’s modular approach, we hope to foster collaboration by expanding the set of inference algorithms and domain-relevant examples.
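CoFI's own API is not reproduced here; instead, a plain NumPy/SciPy sketch of the workflow the abstract describes: state the inference problem once, then swap interchangeable solver backends to experiment. The toy operator, data and backend choices are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
G = rng.normal(size=(40, 4))                 # toy linear forward operator
m_true = np.array([2.0, -1.0, 0.5, 3.0])
d = G @ m_true + rng.normal(scale=0.05, size=40)

def objective(m):                            # one problem definition ...
    r = G @ m - d
    return r @ r

results = {}
for method in ("Nelder-Mead", "BFGS"):       # ... run through interchangeable tools
    results[method] = minimize(objective, np.zeros(4), method=method).x
```

Separating the problem statement (the objective) from the solver choice (the method string) is the design idea being illustrated: the same definition can be handed to optimisation or sampling backends without rewriting the physics.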

 

How to cite: Sambridge, M., He, J., Chaivannacoopt, K., Hauser, J., Koch, M., Magrini, F., Marignier, A., and Valentine, A.: CoFI - The Common Framework for Inference: A software platform for experimentation, education and application of geophysical inversion tools, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3644, https://doi.org/10.5194/egusphere-egu26-3644, 2026.

X1.123 | EGU26-5013 | ECS
Marin Adrian Mag, David Al-Attar, Paula Koelemeijer, and Christophe Zaroli

A fundamental challenge in inverse problems is non-uniqueness: many models may fit a given data set exactly, or within observational uncertainty. A common remedy is the introduction of additional constraints encoding prior beliefs about the true model. Such explicit regularization mechanisms typically receive the most attention in inverse-problem research. However, it is well known that inversions may also be influenced by implicit sources of regularization. Discretization is perhaps the most prominent example, often introducing unintended and opaque prior assumptions.

 

Discretizations are commonly adopted for computational convenience and to avoid the theoretical complexities associated with models defined as functions. This practice, if done too early, can obscure fundamental questions concerning probabilities, regularity, and boundary behavior of the model. Although discretization may appear to eliminate these difficulties, it in fact makes choices for us that are often left unexamined.

 

In this contribution, we demonstrate that for linear(ised) problems in seismology, the undiscretized formulation can be treated rigorously using well-established theoretical tools. This perspective exposes hidden assumptions embedded in standard inversion workflows and allows prior choices to be made explicit and transparent. Although discretization is unavoidable in practice, we show that how and when it is introduced plays a crucial role, both for ensuring correct convergence and for computational efficiency.

 

Rather than attempting a fully rigorous solution of infinite-dimensional inverse problems—which can be expensive—we focus instead on probabilistic linear inference. Unlike classical inversion, linear inference targets specific properties of the model rather than a particular model realization. These quantities of interest are exactly representable in finite-dimensional spaces without discretizing the model itself. As a result, our framework delivers complete and mathematically consistent answers at reduced computational cost. We illustrate the proposed approach with synthetic inversion and inference examples in 1D.
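One way to picture linear inference on a property rather than a model (an illustrative Gaussian sketch, not the authors' formulation): under a Gaussian prior and linear data, the posterior of a single linear functional p = wᵀm is available in closed form, so the quantity of interest is handled exactly even though m stands in for a finely gridded function.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50                                       # fine grid, stand-in for a function space
G = rng.normal(size=(10, n)) / np.sqrt(n)    # 10 linear data functionals
m_true = np.sin(np.linspace(0.0, np.pi, n))
d = G @ m_true + rng.normal(scale=0.01, size=10)

C_prior = np.eye(n)                          # prior model covariance
C_d = 0.01**2 * np.eye(10)                   # data covariance

# Standard linear-Gaussian update for the model ...
K = C_prior @ G.T @ np.linalg.inv(G @ C_prior @ G.T + C_d)
m_post = K @ d
C_post = C_prior - K @ G @ C_prior

# ... but the target is only one property: the spatial average p = w^T m.
w = np.full(n, 1.0 / n)
p_mean = w @ m_post
p_std = np.sqrt(w @ C_post @ w)              # exact posterior spread of the property
```

The point of the construction is that p_mean and p_std are scalar, exactly computable quantities whose accuracy does not depend on interpreting the gridded m_post itself as "the" model.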

 

How to cite: Mag, M. A., Al-Attar, D., Koelemeijer, P., and Zaroli, C.: Think first, discretize later, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-5013, https://doi.org/10.5194/egusphere-egu26-5013, 2026.

X1.124 | EGU26-7770 | ECS
Steve Carr and Tolulope Olugboji

The Earth’s mid-mantle (800–1200 km depth) hosts enigmatic seismic discontinuities whose physical origins remain debated. Competing hypotheses attribute these features to either thermal anomalies, such as partial melting due to volatile transport, or compositional heterogeneities associated with ancient subducted crust. However, distinguishing between these scenarios remains a challenge; standard imaging techniques often fail to robustly resolve the polarity of weak seismic reflections amidst noise and reverberations that contaminate mid-mantle reflections. Consequently, previous global surveys relying on linear time-domain stacking have yielded only a fragmented perspective, in which the global connectivity, topography, and distinct physical origin of mid-mantle discontinuities remain unresolved. To address these limitations, we present a new global imaging framework that integrates curvelet-based wavefield separation and deconvolution with probabilistic array processing. Rather than relying on traditional linear stacking, we develop "probabilistic vespagrams" that rigorously account for uncertainties in signal coherence and wavelet estimation. This approach allows us to distinguish robust structural features from processing artifacts. We apply this workflow to a global dataset of SS and PP precursors to construct a probability map of mid-mantle discontinuities. By systematically quantifying the likelihood of positive (eclogitic/compositional) versus negative (thermal/melt) impedance contrasts globally, we aim to resolve the global distribution of mid-mantle heterogeneities and determine the relative dominance of compositional stratification versus partial melting by water transport in controlling deep-Earth dynamics.
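For context, a conventional linear vespagram (slant stack), the baseline that a probabilistic variant builds on, can be sketched as follows; the array geometry, slowness and waveforms are synthetic illustrations.

```python
import numpy as np

dt = 0.05
t = np.arange(0.0, 30.0, dt)
dists = np.linspace(100.0, 120.0, 20)        # epicentral distances (degrees)
p_true = 0.5                                 # slowness of the arrival (s/deg), illustrative
traces = np.array([np.exp(-((t - 10.0 - p_true * (x - dists[0])) ** 2) / 0.5)
                   for x in dists])          # one coherent arrival across the array

slownesses = np.linspace(0.0, 1.0, 41)
vespa = np.zeros((slownesses.size, t.size))
for i, p in enumerate(slownesses):
    for j, x in enumerate(dists):
        shift = int(round(p * (x - dists[0]) / dt))   # moveout correction (samples)
        vespa[i] += np.roll(traces[j], -shift)        # align, then stack
vespa /= dists.size

best = slownesses[np.argmax(vespa.max(axis=1))]       # slowness of peak coherence
```

Stack amplitude peaks where the trial slowness matches the true moveout; the probabilistic version described in the abstract replaces this single coherence number with an uncertainty-aware statistic.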

How to cite: Carr, S. and Olugboji, T.: Earth’s Mid-mantle via Probabilistic Array Imaging, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7770, https://doi.org/10.5194/egusphere-egu26-7770, 2026.

X1.125 | EGU26-11966
Andrew Curtis, Klaus Mosegaard, and Xuebin Zhao

Geoscientists often solve inverse problems to estimate values of parameters of interest given relevant data sets. Bayesian inference solves these problems by combining probability distributions that describe uncertainties in both observations and unknown parameters, and we require that the solution provides unbiased uncertainty estimates in order to inform evidence- or risk-based decisions. It has been known for over a century that employing different, but equivalent parametrisations of the same information can yield conditional probabilities that are mathematically inconsistent, a property referred to as the BK-inconsistency. Recently, this inconsistency was shown to invalidate the solutions to physical problems found using several well-established methods of Bayesian inference. This talk explores the extent to which this inconsistency affects solutions to common geophysical problems. We demonstrate that changes in parametrisations result in inconsistent conditional prior probability densities, even though they represent exactly the same prior information. These inconsistent prior distributions can change Bayesian posterior solutions dramatically across various geoscientific problems including seismic impedance inversion, surface wave dispersion inversion, and travel time tomography, using real and synthetic data. Significantly different posterior statistics are obtained, including for maximum a posteriori (MAP) solutions, mean estimates, standard deviations, and full posterior distributions. Given that deterministic inversion is often equivalent to finding the MAP solution to specific Bayesian problems (the mathematical equations to be solved are identical), the BK-inconsistency also results in inconsistent solutions to deterministic inverse problems. Indeed, we show that solutions can potentially be designed simply by changing the parametrisation.
This study highlights that a careful rethinking of Bayesian inference and deterministic inversion may be required in physical problems, and we present one possible consistent method of solution.
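The parametrisation dependence of "uninformative" priors is easy to reproduce numerically: a uniform prior on velocity and a uniform prior on slowness over the same interval assign different probabilities to the same event. The interval and threshold below are illustrative choices, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200000
v_from_v = rng.uniform(1.0, 5.0, n)            # prior uniform in velocity v
v_from_s = 1.0 / rng.uniform(1.0 / 5.0, 1.0, n)  # prior uniform in slowness s = 1/v

# Probability of the same event, v < 3 km/s, under the two "equivalent" priors:
p_v = np.mean(v_from_v < 3.0)                  # analytically 1/2
p_s = np.mean(v_from_s < 3.0)                  # analytically 5/6
```

Both priors cover the same velocity range and express "no preference" in their own coordinates, yet any posterior built on them will inherit the disagreement; this is the mechanism behind the parametrisation-dependent solutions the abstract describes.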

How to cite: Curtis, A., Mosegaard, K., and Zhao, X.: Reverend Bayes, we have a problem - and a solution, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-11966, https://doi.org/10.5194/egusphere-egu26-11966, 2026.

X1.126 | EGU26-21245 | ECS
Tong Li, Xi Li, Xinsong Wang, Wei Zhang, Ying Liu, and Huajian Yao

Reliable evaluation of three-dimensional seismic velocity models is critical for wave-propagation simulations and seismic hazard assessment, yet remains challenging in tectonically complex regions where source-related uncertainties and strong lateral heterogeneity limit conventional validation approaches. Here we present a propagation-centered framework for velocity-model evaluation based on reverse-time source-point gathers constructed from ambient-noise–derived surface waves.

Empirical Green’s functions are retrieved from ambient-noise cross correlations and reverse-time propagated under candidate velocity models to virtual source locations. If a velocity model adequately captures the kinematic characteristics of wave propagation, back-propagated wavefields from different azimuths and offsets refocus coherently near the zero-time reference. Systematic time shifts in the reverse-time source-point gathers indicate kinematic inconsistencies and reveal velocity biases. We quantify this behavior using the arrival-time deviation, Δt, and analyze its dependence on period, offset, and azimuth.
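A toy illustration of the arrival-time deviation Δt: the kinematics here are a deliberately crude stand-in (a fractional velocity bias over an assumed 5 s travel time simply shifts the refocused pulse), but they show how Δt is read off a back-propagated trace.

```python
import numpy as np

dt = 0.1
t = np.arange(-10.0, 10.0, dt)           # time relative to the zero-time reference

def backpropagated_pulse(velocity_bias):
    # toy kinematics: a fractional velocity bias over an assumed 5 s travel
    # time shifts the refocused arrival away from zero time
    return np.exp(-(t - 5.0 * velocity_bias) ** 2)

deviations = {}
for bias in (0.0, 0.1):
    pulse = backpropagated_pulse(bias)
    deviations[bias] = t[np.argmax(pulse)]   # arrival-time deviation, Δt
```

An unbiased model refocuses at Δt ≈ 0, while a 10 % velocity bias shifts the peak by half a second in this toy setting; the study's diagnostic is the pattern of such shifts with period, offset and azimuth.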

We apply this framework to evaluate four recently developed three-dimensional S-wave velocity models in the Sichuan–Yunnan region, southwest China. The results reveal clear model-dependent patterns in back-propagated arrivals across multiple period bands. At short periods (5–10 s), the back-propagated arrivals are more scattered in time and exhibit stronger directional variability, indicating that the tested velocity models still have limited accuracy in representing shallow structures and local-scale heterogeneity. In contrast, at longer periods (15–45 s), the back-propagated wavefields refocus more coherently at the virtual source, with arrival times clustering closer to zero, suggesting that the models are able to reproduce the large-scale characteristics of wave propagation more consistently. Consistent trends observed across multiple virtual source locations highlight both regional-scale performance differences and azimuth-dependent kinematic biases among the tested models.

The proposed reverse-time source-point gather approach offers a source-robust and physically intuitive perspective for velocity-model evaluation. By emphasizing kinematic self-consistency of wave propagation rather than detailed waveform matching, this framework complements existing evaluation methods and provides a flexible tool for diagnosing the strengths and limitations of three-dimensional velocity models in structurally complex regions.

How to cite: Li, T., Li, X., Wang, X., Zhang, W., Liu, Y., and Yao, H.: A reverse-time source-point gather framework for evaluating 3-D crustal velocity models using ambient-noise surface waves, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21245, https://doi.org/10.5194/egusphere-egu26-21245, 2026.

X1.127 | EGU26-21968 | ECS
Scott Keating, Andrea Zunino, and Andreas Fichtner

Full-waveform inversion (FWI) is capable of providing high-resolution Earth models, but quantifying uncertainty in these models remains a challenging and costly endeavour. The computational limitations of FWI mean that practical uncertainty estimates are necessarily based on highly incomplete information. Under these conditions, the difference between aggressive approaches, which systematically underestimate uncertainty, and conservative approaches, which systematically overestimate it, becomes significant. Here, we investigate an approach for inexpensive, conservative uncertainty quantification for high-dimensional FWI problems [1].

 

This uncertainty quantification strategy is based on a truncated singular value decomposition of the inverse-problem Hessian. It takes as input a set of model and gradient pairs, which can, but need not, be the inversion update history. This machinery can be used for both standard deviation estimation and hypothesis testing, using a targeted nullspace shuttling approach. In addition to its flexibility, comparatively low cost and large-problem scaling, a key advantage of this approach is its conservatism: it guarantees that the estimated uncertainty is greater than that which would be obtained with a full-rank Hessian estimate.
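The conservatism guarantee can be illustrated with a toy problem (the operator sizes and rank are illustrative, and this is a simplified variant of the scheme in [1], using an eigendecomposition in place of the paper's machinery): truncating the data Hessian to its top-k eigenpairs only lowers curvature, so the resulting variance estimates can never fall below the full-rank ones.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 30, 5                          # parameters, retained rank (illustrative)
J = rng.normal(size=(60, n))
H_data = J.T @ J                      # Gauss-Newton data Hessian (PSD)
prior_prec = np.eye(n)                # unit prior precision

lam, V = np.linalg.eigh(H_data)       # eigenvalues in ascending order
Vk, lam_k = V[:, -k:], lam[-k:]       # keep the k largest curvature directions

C_full = np.linalg.inv(H_data + prior_prec)
C_trunc = np.linalg.inv(Vk @ np.diag(lam_k) @ Vk.T + prior_prec)

std_full = np.sqrt(np.diag(C_full))
std_trunc = np.sqrt(np.diag(C_trunc))  # conservative: never below std_full
```

Dropping the small-eigenvalue part of a PSD Hessian gives a smaller matrix in the Loewner order, and matrix inversion reverses that order, so every diagonal entry of C_trunc dominates the corresponding entry of C_full.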

 

[1] Keating, S., Zunino, A. and Fichtner, A., 2026. A comparison of rank-reduction strategies for uncertainty estimation in full-waveform inversion. Accepted for publication in Geophysical Journal International.

How to cite: Keating, S., Zunino, A., and Fichtner, A.: Rank-reduction based standard deviation estimation and shuttling for FWI, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21968, https://doi.org/10.5194/egusphere-egu26-21968, 2026.
