Recent advances in machine learning are transforming weather and climate science, from the emergence of large‑scale foundation models (e.g. Aurora, ORBIT, WeatherGenerator and Walrus) to the rapid development of explainable and trustworthy AI methods that aim to make these models scientifically credible and operationally usable. This session brings together contributions on the development, evaluation, and application of large‑scale and foundation‑style machine learning systems, alongside state‑of‑the‑art research on interpretability, trust, diagnostics, and validation of ML models across Earth system applications. We welcome studies that address the methodological and scientific challenges of pre‑training and scaling ML models on diverse atmospheric and climate datasets; the assessment of training strategies, physical consistency, and model behaviour at scale; and pre‑ and post‑training adaptation approaches such as fine‑tuning, distillation, and latent‑space steering. We equally encourage contributions that advance explainable AI (XAI) for weather and climate science, including feature attribution, causal inference, model bias diagnosis, uncertainty communication, human‑in‑the‑loop validation, and stakeholder‑oriented interpretability. Contributions that develop scalable, robust XAI techniques for high‑dimensional geoscientific problems are particularly welcome. By bridging foundation‑model development with explainability, trust, and scientific insight, this session aims to support the transparent, reliable, and physically grounded development of ML tools for weather, climate, and environmental applications that push the boundaries of skill and quality.
Development and Explainability of Large-Scale and Foundation Models for Weather and Climate
Convener: Christian Lessig
Co-conveners: Todd Jones, Tom Dunstan, Anna-Louise Ellis, Sebastian Hickman, Ilaria Luise, Sebastian Schemm