Spatio-temporal datasets are constantly growing in size, due to increases in both extent and resolution. As a
result, existing software for reading, storing, writing, and translating these data may no longer be able to
perform the work in a timely manner. This limits the potential of, for example, numerical simulation models
and machine learning models.
In this session we bring together researchers working on novel software for processing large spatio-temporal
datasets. By having them present their work to their colleagues, we aim to further strengthen the field of
high-performance computation in the geosciences.
We invite everybody who recognizes this problem and is working on ways to solve it to submit an abstract to
this session. Possible topics include, but are not limited to:
- High-performance computing, parallel computing, distributed computing, cloud computing, asynchronous
computing, accelerated computing, green computing
- Algorithms, libraries, frameworks
- Parallel I/O, data models, data formats, data compression, data cubes, HDF5, netCDF, Zarr, COG
- Containerization, Docker, Kubernetes, Singularity, Apptainer
- Physically based modelling, physics-informed machine learning, surrogate modelling
- Model coupling, model workflow management
- Large-scale hydrology, remote sensing, climate modelling
- Lessons learned from case studies
We recommend that authors highlight those (generic) aspects of their work that may be of special interest to
their colleagues.
High-performance computation in the geosciences
Co-organized by HS13
Convener: Kor de Jong (ECS)
Co-conveners: Davide Consoli (ECS), Daniel Caviedes-Voullième, Arnau Folch, Corentin Carton de Wiart