ESSI1.1 | Large-scale and foundation models for weather and climate
Convener: Christian Lessig | Co-conveners: Sebastian Hickman, Ilaria Luise, Sebastian Schemm, Sophie Xhonneux

Large-scale machine learning models for atmospheric and Earth system dynamics, also known as foundation models, are currently being developed. Examples include Aurora, ORBIT, the WeatherGenerator, and the models developed as part of the SwissAI initiative. Compared to machine learning weather forecasting models, foundation models present a unique set of challenges and opportunities. For instance, when training a model on numerous datasets, questions arise regarding the selection of these datasets, how they should be combined during training, and how training efficacy should be assessed. Additionally, large, pre-trained machine learning models must be adapted to specific applications through techniques such as fine-tuning and distillation, which compresses a large model into a smaller one tailored to a given task. In weather and climate science, the potential of such post-pre-training methods has yet to be fully explored. To disseminate knowledge and exchange insights within the community, it is essential to share and discuss the lessons learned during pre-training at scale.

This session collects contributions on the development of large-scale machine learning models and their application to specific problems. We encourage submissions that address methodological questions related to the development or evaluation of large-scale models (e.g., conservation of physical quantities), technical aspects (e.g., training on hybrid HPC systems), or specific applications.
