ESSI2.5 | Bridging Earth Science Research through Integrated e-Infrastructures and Virtual Research Environments (VREs): From Digital Services to Digital Twins
Convener: Massimiliano Assante | Co-conveners: Christian Pagé, Magdalena Brus, Lesley Wyborn, Chris Atherton (ECS), Jacco Konijn, Eugenio Trumpy

Orals | Thu, 07 May, 10:45–12:30 (CEST) | Room -2.33
Posters on site | Attendance Fri, 08 May, 10:45–12:30 (CEST) | Display Fri, 08 May, 08:30–12:30 | Hall X4
Posters virtual | Mon, 04 May, 14:00–15:45 (CEST)
vPoster Discussion | vPoster spot 1b, Mon, 04 May, 16:15–18:00 (CEST)
Scientific discovery today increasingly depends on the availability of digital services and infrastructures that span the entire research workflow. While sensors, simulations, and lab experiments produce massive data, many tools for analysis remain fragmented in stand-alone systems, often hindering collaboration and a comprehensive understanding of complex Earth systems.

To address this, e-Infrastructures and Virtual Research Environments (VREs) are revolutionising how research is conducted. By providing a cohesive ecosystem, these platforms allow researchers from diverse disciplines to manage the research lifecycle: from data acquisition and processing to modeling and dissemination in the spirit of Open Science. This integration enables the research community to transition from isolated tools to interoperable systems like Digital Twins.

This session aims to highlight how interoperable e-Infrastructure services can be used to build VREs and Virtual Labs to provide end-to-end support, strengthening research capacity through collaboration between service providers and scientists. We bring together case studies and new approaches from all domains of the Earth sciences, focusing on both technological implementations and scientific applications.

Contributions in this session will:
- Demonstrate practical examples of how digital services, VREs, and e-infrastructures enhance research workflows in Earth and environmental sciences.
- Present innovative approaches to integrating tools across domains and providers, including outcomes from collaborative projects, virtual laboratories, and digital twins.
- Highlight technical implementations, including research software applications, semantic approaches, modeling practices, and the management of large-scale data.
- Share lessons learned from user-driven design, community engagement, training and support strategies.
- Address challenges of interoperability and sustainability in distributed digital services, highlighting pathways to foster collaboration across infrastructures and research domains.

By bringing together service providers, research infrastructures, and end-users, this session will provide a unique overview of the digital landscape and its impact on science. It will foster dialogue on how different infrastructures can collaborate more effectively to deliver integrated, sustainable solutions that embed Open Science principles across the research lifecycle and advance both science and society.

Orals: Thu, 7 May, 10:45–12:30 | Room -2.33

The oral presentations are given in a hybrid format supported by a Zoom meeting featuring on-site and virtual presentations. The button to access the Zoom meeting appears just before the time block starts.
Chairpersons: Massimiliano Assante, Christian Pagé
10:45–10:50
10:50–11:00 | EGU26-16239 | On-site presentation
Tim Rawling, Lilli Freda, Rebecca Bendick, Elisabetta D’Anastasio, Helen Glaves, Rebecca Farrington, Federica Tanlongo, and Shelley Stall

Periods of natural disaster, political instability, and systemic disruption pose acute risks to the preservation, integrity, and accessibility of Earth science data. As research infrastructures and the datasets they curate become increasingly digital, interconnected, and critical to informed decision-making, safeguarding data against loss, politicisation, and fragmentation has emerged as a shared global responsibility. Here we will outline how Global Research Infrastructures (GRIs) can contribute to a coordinated international response to data preservation during times of crisis. We will draw on the work currently being done in an international collaboration between four national Earth science e-Infrastructures: AuScope (Australia), EPOS (European Plate Observing System), EarthScope (USA), and Earth Science New Zealand.

AuScope occupies a distinctive position within the global ecosystem of Earth science research infrastructures. As a nationally funded yet internationally connected infrastructure, AuScope combines trusted governance, mature data services, and a strong culture of open science across geophysics, geodesy, geochemistry, and geohazards. Through formal and informal partnerships with EPOS, EarthScope, and Earth Science New Zealand, AuScope is well placed to act as a node for resilient, distributed Earth and environmental science data stewardship.

We will discuss how GRIs could collaboratively support: (1) distributed and redundant preservation of high-value Earth science datasets across jurisdictions; (2) continuity of standards, metadata, and persistent identifiers to ensure long-term usability of data even when originating institutions are disrupted; and (3) trusted custodianship arrangements that protect data integrity and provenance from external interference, institutional failure, hostile cyberattacks or adverse natural disasters. Such a networked approach will reduce single-point-of-failure risks and strengthen the resilience of the global Earth science data ecosystem.

AuScope’s local contribution currently includes providing geographically distinct replication capacity, harmonised metadata and FAIR-aligned services, and operational expertise in federated data platforms. Working with EPOS and EarthScope’s established thematic and domain services, and with Earth Science New Zealand’s regional leadership in hazard-focused data, this partnership can enable rapid “data rescue” responses, temporary custodianship during crises, and sustained access for displaced or affected research communities.

This collaboration demonstrates how globally networked research infrastructures can move beyond coordination to active mutual support in times of crisis. By leveraging complementary capabilities, shared standards, and trusted governance, a GRI for the solid Earth sciences can help ensure that critical Earth science data remain preserved, accessible, and scientifically reliable, regardless of natural, global or institutional instability, thereby supporting evidence-based decision-making and long-term societal resilience.

How to cite: Rawling, T., Freda, L., Bendick, R., D’Anastasio, E., Glaves, H., Farrington, R., Tanlongo, F., and Stall, S.: A collaborative international approach to data preservation and sustainability for the solid Earth sciences, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-16239, https://doi.org/10.5194/egusphere-egu26-16239, 2026.

11:00–11:10 | EGU26-7865 | On-site presentation
Ulrich Bundke, Daniele Bailo, Claudio Dema, Dario De Nart, Delphine Dobler, Federico Drago, Marta Gutierrez David, Anca Hienola, Andreas Petzold, Alex Vermeulen, and Zhiming Zhao

Modern Environmental and Earth sciences demand seamless integration across atmospheric, marine, terrestrial, and biodiversity data, and this process is often hindered by disciplinary silos. The ENVRI-Hub addresses this challenge directly by serving as the unified Virtual Research Environment (VRE) for Europe's Environmental Research Infrastructures. It moves beyond a simple data portal to function as an integrated platform where discovery, access, and analysis converge.

The hub provides researchers with a centralised gateway to discover and access FAIRified research assets ready for cross-disciplinary work. Crucially, it enables these assets to be leveraged in situ from any VRE through a unified, machine-actionable API/toolset that supports data analytics in scientific workflows. VREs will allow users to compose, execute, and share reproducible analytical pipelines - from accessing Essential Climate Variables (ECVs) to running complex AI analytics. This architecture not only streamlines the scientific process but also underpins environmental Virtual Laboratories and lays the groundwork for future applications such as Digital Twins.

This presentation will detail the ENVRI-Hub's technical architecture for enabling VRE support. We will demonstrate, through specific scientific use cases, how its Catalogue of Services and AI-powered Knowledge Base work synergistically to reduce data friction. The contribution will highlight how this integrated environment supports workflow builders in creating robust, cross-domain analyses, thereby accelerating scientific results and advancing collaborative, data-driven Environmental and Earth science.

How to cite: Bundke, U., Bailo, D., Dema, C., De Nart, D., Dobler, D., Drago, F., Gutierrez David, M., Hienola, A., Petzold, A., Vermeulen, A., and Zhao, Z.: The ENVRI-Hub: A Platform for Advancing Environmental and Earth Sciences through Integrated Virtual Research Environments, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-7865, https://doi.org/10.5194/egusphere-egu26-7865, 2026.

11:10–11:20 | EGU26-1703 | On-site presentation
Jens Klump, Alex Hunt, Vincent Fazio, and Pavel Golodoniuc

The AuScope Virtual Research Environment (AVRE) is a platform advancing geoscience research through integrated data access, analysis, and interoperability. Having evolved from the original AuScope GRID, AVRE now underpins the Data Lens of the Downward-Looking Telescope (DLT) that describes AuScope as a national research infrastructure for the geosciences. AVRE gives researchers unified access to geoscience data from other AuScope programmes and from the government geological survey organisations.

Looking forward, AVRE is enhancing the findability, accessibility, and interoperability of its dataset catalogue through a new Python package and QGIS plugin. Significant additions to the AVRE services portfolio are the AuScope Data Repository, Sample Repository, Instrument Register, and Research Activity Identifier (RAiD) Register. The findability of resources will be improved by implementing natural language search powered by large language models. These innovations, together with continued integration of new catalogues and repositories, alongside robust user engagement and analytics, will ensure AVRE remains a cornerstone for collaborative, data-driven geoscience in Australia.

How to cite: Klump, J., Hunt, A., Fazio, V., and Golodoniuc, P.: Innovations and Future Directions in the AuScope Virtual Research Environment, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1703, https://doi.org/10.5194/egusphere-egu26-1703, 2026.

11:20–11:30 | EGU26-11024 | On-site presentation
Andrea Manzi and Ville Tenhunen

Research Infrastructures (RIs) are at the core of data-intensive and computation-driven science, yet they face growing challenges in managing complexity, scalability, interoperability, and the effective integration of Artificial Intelligence and Digital Twin technologies. This contribution presents two complementary EU projects, both led by the EGI Foundation, that address these challenges from the perspective of RI needs: interTwin, which has delivered a prototype Digital Twin Engine (DTE) for science, and RI-SCALE, which is developing the next generation of scalable data exploitation capabilities for RIs.

As a first example, the recently completed interTwin project demonstrated how RIs can collaborate to co-design a common blueprint architecture and an open-source Digital Twin Engine supporting the integration of models, simulations, data streams, and AI components. The project worked closely with scientific communities and infrastructures to co-design interoperable components for orchestration, provenance, quality assessment, and federated access to compute and data resources. Through multiple scientific use cases, interTwin showed how RIs can improve reproducibility, automation, and cross-domain reuse of methods and services.

As a second, forward-looking example, the recently launched RI-SCALE project focuses on empowering Research Infrastructures with scalable, AI-driven Data Exploitation Platforms (DEPs). RI-SCALE aims to support RIs in transforming vast and heterogeneous data holdings into actionable scientific knowledge by combining advanced AI frameworks, federated computing, and trusted data lifecycle management. The project places strong emphasis on co-design with RI operators and user communities, ensuring that DEPs respond to concrete operational and scientific requirements. Planned developments include mechanisms for data transfer and caching, AI model hub integration, data spaces integration, and the establishment of a competence center to support adoption, training, and long-term sustainability within the RIs.

The experiences and plans discussed in this contribution highlight key success factors for the digital transformation of RIs.

How to cite: Manzi, A. and Tenhunen, V.: Advanced Platforms for Research Infrastructures: Lessons from interTwin and perspectives from RI-SCALE, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-11024, https://doi.org/10.5194/egusphere-egu26-11024, 2026.

11:30–11:40 | EGU26-13138 | ECS | On-site presentation
Biagio Peccerillo, Alfredo Oliviero, Marco Procaccini, Leonardo Candela, Luca Frosini, Francesco Mangiacrapa, Giancarlo Panichi, Massimiliano Assante, and Pasquale Pagano

D4Science provides web-based Virtual Research Environments (VREs) that support FAIR, open, and reproducible science across multiple research domains, including Earth science. These environments integrate data access, computation, and collaboration services, offering powerful capabilities to researchers and enabling complex, data-intensive scientific activities within a shared digital infrastructure.

This contribution introduces a conversational intelligent assistant integrated into D4Science VREs, designed to support Earth scientists in their research activity. The assistant provides a natural language interface that helps users interact with D4Science VREs' services, locate relevant datasets and research items, obtain guidance on common tasks, and support exploratory and operational activities within the VRE.

The assistant is designed with a modular approach. The user interacts with a coordinator agent that orchestrates a multi-agent system, where specialized AI agents collaborate to perform a variety of tasks. This architecture allows the assistant to handle heterogeneous requests and to support users across different phases of their research activities, while also facilitating maintenance and extensibility.

The conversational agent adopts a Retrieval-Augmented Generation (RAG) approach that leverages the knowledge already captured by the VRE through its regular use by research communities. In fact, as VREs naturally accumulate knowledge created and curated by researchers over time, the assistant's knowledge base evolves to incorporate new information. This way, the assistant can ground its responses in domain-specific, up-to-date information, effectively acting as a domain-aware expert embedded within the research environment.
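The RAG loop described above can be illustrated with a toy retriever: a minimal sketch, assuming a plain-text knowledge base and simple TF-IDF similarity (all function names here are illustrative, not D4Science APIs):

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build toy TF-IDF vectors for a small plain-text corpus."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    df = Counter(t for doc in tokenized for t in set(doc))  # document frequency

    def vec(tokens):
        tf = Counter(tokens)
        return {t: tf[t] * math.log(1 + n / df.get(t, n)) for t in tf}

    return [vec(t) for t in tokenized], vec

def cosine(a, b):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the k corpus entries most similar to the query."""
    vecs, vec = tf_idf_vectors(docs)
    q = vec(query.lower().split())
    ranked = sorted(range(len(docs)), key=lambda i: cosine(q, vecs[i]), reverse=True)
    return [docs[i] for i in ranked[:k]]

def build_prompt(query, docs):
    """Ground the model's answer in retrieved VRE content (the RAG step)."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In a production assistant the TF-IDF store would be replaced by an embedding index that is refreshed as the VRE content evolves, which is what keeps the responses up to date.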

By serving as an accessible entry point to the VRE, the assistant complements existing interfaces without altering established workflows. The presentation discusses the motivation, design choices, and integration strategy. It also presents various concrete use cases relevant to Earth scientists, demonstrating how the conversational assistant can be effectively employed to support their research activity.

How to cite: Peccerillo, B., Oliviero, A., Procaccini, M., Candela, L., Frosini, L., Mangiacrapa, F., Panichi, G., Assante, M., and Pagano, P.: A Conversational Assistant for Geoscientists in Virtual Research Environments, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-13138, https://doi.org/10.5194/egusphere-egu26-13138, 2026.

11:40–11:50 | EGU26-10490 | On-site presentation
Tjerk Krijger, Peter Thijsse, Dick Schaap, Robin Kooyman, and Paul Weerheim

In order to provide users with fast and easy access to multidisciplinary data originating from large collections, MARIS has developed a software system called Beacon that can, on the fly and with high performance, extract specific data based on the user’s request. This software has been customised and deployed in the Blue-Cloud2026 project and several other European projects, and is designed to return a single harmonised file as output, regardless of whether the input contains different data types. Beacon is fully open source (AGPLv3), allowing everyone to set up their own Beacon ‘node’ to enhance access to their data, or to use existing Beacon nodes from well-known data infrastructures such as Euro-Argo, ERA5 or the World Ocean Database for fast and easy access to harmonised data subsets. More technical details, example applications and general information on Beacon can be found at https://beacon.maris.nl/.

Within the context of Blue-Cloud2026, Beacon is deployed to provide access to harmonised subsets from Blue Data Infrastructures for the WorkBenches (WB) that aim to generate harmonised and validated data collections of Essential Ocean Variables (EOVs). To this end, a set of monolithic Beacon nodes has been set up for relevant data collections such as the WOD, CMEMS Cora, Euro-Argo and more. These are made available on the D4Science e-infrastructure as part of the Blue-Cloud VRE, giving access to all registered Blue-Cloud users.

Going one step further, the output from multiple monolithic Beacon instances is combined into one merged Beacon node per WB. This merged node includes a structural mapping from each monolithic Beacon to the target Common Metadata Profile as defined by the WB teams. These mappings are used in Beacon queries to retrieve and load contents ‘as-is’ from the monolithic instances into the merged instances, yielding a common structure for variables, units, values, quality flags, and common metadata profile fields. The structured metadata and data are supplemented by additional metadata available for each of the monolithic Beacon instances.
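The mapping step can be illustrated with a toy sketch; the field names and the two source schemas below are hypothetical stand-ins, not the actual Common Metadata Profile defined by the WB teams:

```python
# Illustrative structural mappings from two source-node schemas onto one
# shared target profile (all names here are invented for the example).
MAPPINGS = {
    "node_a": {"TEMP": "sea_water_temperature", "TEMP_QC": "temperature_quality_flag"},
    "node_b": {"temperature": "sea_water_temperature", "temp_flag": "temperature_quality_flag"},
}

def to_profile(node, record):
    """Rename source fields to the target profile; values pass through 'as-is'."""
    mapping = MAPPINGS[node]
    return {mapping.get(k, k): v for k, v in record.items()}

def merge(records_by_node):
    """Combine records from several monolithic nodes into one harmonised list."""
    return [to_profile(node, rec)
            for node, recs in records_by_node.items()
            for rec in recs]
```

The design choice this sketch captures is that values are never rewritten during the merge: only the structure is aligned, so the merged node stays a faithful view of its sources.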

This presentation will give an overview of the Blue-Cloud2026 project and the development of the merged Beacon nodes, explaining how they can practically serve as data lakes for many VRE applications and how the approach is extendable to other domains. Using examples from the WBs, we highlight the reduction in the time and effort researchers spend collecting data.

How to cite: Krijger, T., Thijsse, P., Schaap, D., Kooyman, R., and Weerheim, P.: Blue-Cloud2026 project - Deploying Beacon data lakes for harmonizing ocean data access for Virtual Research Environments, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-10490, https://doi.org/10.5194/egusphere-egu26-10490, 2026.

11:50–12:00 | EGU26-12357 | ECS | On-site presentation
Tyler Karns, Cedric Hagen, Krutika Deshpande, Michael SanClements, Christine Laney, Benjamin Ruddell, Henry Loescher, and Tyson Swetnam

Data harmonization – the process of unifying disparate datasets into compatible formats and comparable units – is critical for global environmental research but remains prohibitively time-consuming and expensive. While many global environmental datasets could be assembled from existing available data, potentially offering transformative insight into pressing environmental issues, the exhaustive effort required to harmonize data is currently unfeasible within most scientific funding cycles. For example, cross-network studies (such as those between the U.S. National Ecological Observatory Network (NEON), the European Integrated Carbon Observation System (ICOS), and the Australian Terrestrial Ecosystem Research Network (TERN)) require weeks to years of manual schema mapping, unit conversion, alignment, and quality flag standardization for even a small number of data products before any analyses can begin. Here, we present a large language model (LLM)-based agentic system designed to automate many of these data harmonization steps by leveraging semantic understanding of scientific metadata and documentation. This system is designed to ingest raw datasets and metadata, interpret variable semantics within scientific contexts, and generate tailored transformation pipelines. We tune this approach using a subset of previously manually harmonized environmental data from NEON, ICOS, and TERN, as well as the South African Environmental Observation Network (SAEON) and the Integrated European Long-Term Ecosystem, Critical Zone and Socio-Ecological Research Infrastructure (eLTER), as part of an effort by the Global Ecosystem Research Infrastructure (GERI) to build globally harmonized ecological drought datasets. Using these harmonized ecological drought datasets from across the globe, we test the efficacy of this LLM-based agentic system, measuring accuracy, time and labor efficiency, and data integrity preservation compared to manual data harmonization workflows.
Pressing global environmental challenges require rapid synthesis of global environmental data. By reducing data harmonization time from months to hours, these artificial intelligence (AI) tools will enable scientists to focus on analysis and modeling rather than data wrangling, ultimately accelerating research in these critical areas of global environmental science.
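To make the steps being automated concrete, a single manually coded harmonization rule might look like the sketch below; the unit factors and quality-flag vocabularies are invented for the example, not NEON or ICOS definitions:

```python
# Hypothetical harmonization of a carbon-flux variable across two networks.
# Factors convert to a common unit, gC m-2 d-1 (values are illustrative).
UNIT_TO_COMMON = {
    "gC m-2 d-1": 1.0,
    "kgC ha-1 yr-1": 0.1 / 365.0,  # 1 kg/ha = 0.1 g/m2, spread over a year
}

# Each network's quality-flag vocabulary mapped onto a shared one (invented).
FLAG_TO_COMMON = {
    "net_a": {0: "good", 1: "suspect"},
    "net_b": {"U": "good", "K": "suspect"},
}

def harmonize(network, value, unit, flag):
    """Convert one observation to the common unit and flag vocabulary."""
    return {
        "value": value * UNIT_TO_COMMON[unit],
        "unit": "gC m-2 d-1",
        "flag": FLAG_TO_COMMON[network][flag],
    }
```

Writing and validating tables like these for every variable, unit, and flag scheme is exactly the labor the agentic system aims to take over by reading each network's metadata and documentation.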

How to cite: Karns, T., Hagen, C., Deshpande, K., SanClements, M., Laney, C., Ruddell, B., Loescher, H., and Swetnam, T.: Using artificial intelligence to automate and expedite the harmonization of environmental data, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-12357, https://doi.org/10.5194/egusphere-egu26-12357, 2026.

12:00–12:10 | EGU26-14863 | On-site presentation
Douglas Schuster, Harsha Hampapura, Riley Conroy, and Brian Bockelman

Data-intensive research continues to drive innovation and discovery across Earth system science (ESS). ESS datasets maintained in discipline-specific repositories, including climate model projections, historical reanalysis products, and observational datasets, provide rich resources to support these initiatives. While significant progress has been made through the hosting of datasets by commercial cloud providers, many of these data resources – sometimes stored in non-standard formats – are primarily maintained in unconnected, domain-focused data systems designed to support the legacy “download, clean and analyze” model. This is a time-consuming process with bandwidth and storage requirements that may be prohibitive, particularly for institutions with limited resources. Together, the download-clean-analyze model and the use of non-standard formats create a barrier to realizing the full research potential of ESS data assets.


This presentation will highlight the National Science Foundation National Center for Atmospheric Research (NSF NCAR) efforts to develop and deploy its Geoscience Data Exchange research data commons (GDEX, https://gdex.ucar.edu). GDEX is designed to overcome the challenges described above by: 1) curating standards-based (FAIR), analysis- and AI-optimized (AR/AI) versions of global and regional atmospheric reanalysis outputs, Earth system simulation outputs, and observations produced at NSF NCAR and partner organizations; 2) providing direct access to these datasets through integration with on-premise computational resources; and 3) providing performant distributed access through integration with the Open Science Data Federation (OSDF, https://osg-htc.org/services/osdf). The OSDF supports streaming data access and integration with a variety of data and compute services through its system of geographically distributed data caches, including commercial-cloud-hosted open datasets. GDEX’s integration with OSDF supports a wider variety of cross-domain research use cases by enabling efficient access to the spectrum of datasets hosted through OSDF’s origin access points. Finally, GDEX is integrated with colocated data analytics services to support rapid development and iteration of data science (e.g. AI/ML) workflows and to facilitate open sharing of those workflows. To promote user adoption of these services, an example set of reference data analysis workflows has been seeded in public collaboration software repositories and documented in JupyterBook-style web pages. GDEX users are encouraged to submit their own workflow examples through this resource, amplifying the impact of their science by allowing others to more easily build upon their work.

How to cite: Schuster, D., Hampapura, H., Conroy, R., and Bockelman, B.: Breaking Data Siloes: How the NSF NCAR Geoscience Data Exchange Powers Collaboration, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14863, https://doi.org/10.5194/egusphere-egu26-14863, 2026.

12:10–12:20 | EGU26-11246 | ECS | On-site presentation
Lise Eder Murberg, Cathrine Lund Myhre, Markus Fiebig, Nikolaos Evangeliou, Michael Schulz, Camilla Weum Stjern, Claudio Dema, and Simo Tukiainen

The Aerosol, Clouds and Trace Gases Research Infrastructure (ACTRIS) is the European Research Infrastructure Consortium (ERIC) dedicated to short-lived atmospheric constituents and clouds, supporting fundamental research and excellence in Earth system observation. ACTRIS produces high-quality, integrated long-term datasets in the field of atmospheric sciences and provides services tailored for scientific and technological use, including access to instrumented observational platforms. To enhance the availability, usability, and scientific exploitation of these datasets across disciplines and user communities, the ACTRIS Data Centre (DC) develops a range of user-oriented services, among which the ACTRIS Virtual Research Environment (VRE) plays a central role. 

The ACTRIS VRE enables efficient discovery, access, and scientific analysis of long-term observational data from ACTRIS National Facilities, as well as from other ground-based observational networks such as EMEP, EARLINET, Cloudnet and GAW. It facilitates analyses such as the calculation of climatologies, long-term trend assessments, and the combination of datasets within the ACTRIS domain. The VRE is developed in collaboration between the ACTRIS DC and the ACTRIS-Norway community and is designed to serve both data producers and data users, ranging from infrastructure operators to researchers and students, across a wide range of atmospheric research applications.

This presentation demonstrates the use of the ACTRIS VRE through selected notebook-based examples of higher-level data analysis and highlights the collaborative scientific efforts underlying its development. Data access within the VRE is based on the ACTRIS metadata REST API. ACTRIS datasets are provided in CF-compliant NetCDF format and are accessible through both streaming services (OPeNDAP) and direct HTTPS download. This approach enables flexible, reproducible, and programmatic data use, supporting interoperability with commonly used analysis tools and workflows. 
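As a sketch of this programmatic access pattern, the helpers below build a metadata query and an OPeNDAP constraint; the base URL and parameter names are illustrative assumptions for the example, not the documented ACTRIS endpoints:

```python
from urllib.parse import urlencode

# Hypothetical metadata endpoint; the real ACTRIS REST API paths differ.
BASE = "https://example.org/metadata/query"

def dataset_query(variable, station, start, end):
    """Build a REST query that would list matching CF-NetCDF datasets."""
    params = {"variable": variable, "station": station, "start": start, "end": end}
    return f"{BASE}?{urlencode(params)}"

def opendap_subset(dataset_url, variable):
    """Append an OPeNDAP constraint so only one variable is streamed."""
    return f"{dataset_url}.dods?{variable}"
```

A notebook would then open the resulting URL with a NetCDF-aware library (for example xarray's `open_dataset`), streaming only the requested variable rather than downloading whole files.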

In collaboration with the ACTRIS-Norway community, the VRE includes several examples combining datasets for long time series analysis, the exploration of climatologies, and the investigation of trends. Selected examples are presented and discussed, with particular focus on the combination of FLEXPART footprint products and black carbon source apportionment data, developed within the EU project ATMO-ACCESS, together with observed equivalent black carbon measurements at several ACTRIS National Facilities. Additional higher-level analysis examples include single scattering albedo (SSA), ultrafine particle number concentrations (UFPs), and PM₁ source-related metrics from wood burning and traffic. These examples highlight how ACTRIS data can be applied to both climate-relevant and air-quality-focused research questions. 

Beyond scientific analysis, the ACTRIS VRE also serves as a platform for education and capacity building. Introductory notebooks demonstrate programmatic access to data and metadata and illustrate best practices for scientific analysis. The VRE has been used in ACTRIS training courses, ACTRIS Week, ITINERIS training workshops, and dedicated events at NILU, including collaborations with EUMETSAT, highlighting its role as a reusable training and demonstration environment. Community contributions to the example library are encouraged through an open GitHub repository, fostering collaborative development and reuse. The ACTRIS Virtual Research Environment is openly accessible at https://data.actris.eu/vre.

How to cite: Murberg, L. E., Myhre, C. L., Fiebig, M., Evangeliou, N., Schulz, M., Stjern, C. W., Dema, C., and Tukiainen, S.: ACTRIS Virtual Research Environment – Examples of use and collaboration within ACTRIS-Norway and the ACTRIS Data Centre , EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-11246, https://doi.org/10.5194/egusphere-egu26-11246, 2026.

12:20–12:30 | EGU26-3916 | ECS | On-site presentation
Paolo Di Giuseppe, Simona Gennaro, Erico Perrone, Eugenio Trumpy, Samuele Agostini, Marco Procaccini, and Antonello Provenzale

ISOTOPE STUDIO is a web application provided by the Isotope Virtual Research Environment (ISOTOPE VRE) and developed in the frame of the Italian Integrated Environmental Research Infrastructures System (ITINERIS) project. The ISOTOPE VRE, powered by the D4Science Digital Research Infrastructure, adheres to Open Science and FAIR principles, supporting transparency, collaboration, and inclusivity throughout the entire research data lifecycle, and offering analytical tools and harmonized data practices.

A key feature of ISOTOPE STUDIO is its dynamic data harmonization system, designed to address the heterogeneity of geochemical and isotopic datasets. User-submitted data, which may differ in format and structure, are assimilated into ISOTOPE STUDIO in a standardized, searchable, and freely downloadable format, while the original files are stored without alteration. Harmonization is applied during data presentation, ensuring consistent metadata annotation, enhancing the reliability of workflows, and enabling different kinds of modelling. The key tool in this process is the dedicated data submission template, which represents a first attempt at proposing an international standard for isotope data. Although no global standard currently exists, ISOTOPE STUDIO proposes this model as a starting point that can be updated and improved over time. The template is easy to fill in and specifically designed to accommodate a wide variety of large and complex geochemical and isotopic datasets. For heterogeneous datasets, the harmonization system dynamically re-organizes them according to the template structure, ensuring consistency and interoperability. By guiding users through harmonized data entry, the template promotes transparency, reusability, and inclusivity across different research domains.

Built on a three-layer architecture (relational database, REST APIs, and web interface), the ISOTOPE VRE also integrates detailed metadata describing sample characteristics and analytical quality.

Beyond harmonization, ISOTOPE STUDIO provides versatile modelling tools for the analysis of diverse geochemical and isotopic datasets, including binary and ternary plots, normalized spider diagrams, and mixing models – analyses otherwise not possible without the harmonization process. ISOTOPE STUDIO accommodates a wide range of geochemical data, including major and trace elements, intensive parameters (e.g., pressure and temperature), and isotopic compositions of diverse types of matrices.
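For instance, two-endmember mixing models of the kind such tools provide follow the standard concentration-weighted relation; this sketch is a generic textbook formulation, not ISOTOPE STUDIO's own code:

```python
def two_component_mixing(f_a, conc_a, delta_a, conc_b, delta_b):
    """Isotopic composition of a mixture of endmembers A and B.

    f_a is the mass fraction of A; conc_* are element concentrations and
    delta_* the isotopic compositions of each endmember. The mixture's
    delta value is the concentration-weighted average of the endmember deltas.
    """
    f_b = 1.0 - f_a
    numerator = f_a * conc_a * delta_a + f_b * conc_b * delta_b
    denominator = f_a * conc_a + f_b * conc_b
    return numerator / denominator
```

Sweeping f_a from 0 to 1 traces the mixing curve between the two endmembers; the curve is linear only when the endmember concentrations are equal, and hyperbolic otherwise, which is why consistent concentration units from the harmonization step matter.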

How to cite: Di Giuseppe, P., Gennaro, S., Perrone, E., Trumpy, E., Agostini, S., Procaccini, M., and Provenzale, A.: ISOTOPE STUDIO: A Virtual Research Environment for Standardized Isotope Data Management and Modelling, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3916, https://doi.org/10.5194/egusphere-egu26-3916, 2026.

Posters on site: Fri, 8 May, 10:45–12:30 | Hall X4

The posters scheduled for on-site presentation are only visible in the poster hall in Vienna. If authors uploaded their presentation files, these files are linked from the abstracts below.
Display time: Fri, 8 May, 08:30–12:30
Chairpersons: Magdalena Brus, Jacco Konijn, Chris Atherton
X4.57
|
EGU26-3749
|
ECS
Marie Jossé, Pauline Seguineau, Yvan Le Bras, Jérôme Detoc, Erwan Bodéré, Eric Lecaude, Sylvain Grellet, and Karim Ramage

The Earth System is a complex, dynamic system encompassing interactions between the atmosphere, oceans, land, and biosphere; Earth system science therefore relies on heterogeneous data. It addresses key scientific questions such as land–atmosphere interactions and the impacts of climate change. These studies require the integration of multiple datasets and methods, often across disciplinary boundaries. In practice, however, workflows are frequently implemented with locally installed tools, obsolete scripts, and isolated computing environments, making analyses difficult to reproduce, share, and reuse.

Galaxy addresses these needs by providing a Virtual Research Environment: an open, comprehensive, and sustainable web platform for understanding and analyzing data. A version tailored to Earth science studies, Galaxy for Earth System Sciences (GESS, https://earth-system.usegalaxy.eu/ or https://earth-system.usegalaxy.fr/), enables users to access data, tools, and computing resources, allowing them to construct, execute, and share analysis workflows without requiring programming skills. GESS extends the Galaxy framework by integrating tools, data formats, and workflows commonly used in Earth system sciences, covering scientific domains related to the study of climate, atmosphere, oceans, land surfaces, and biosphere processes.

A main advantage of Galaxy lies in its workflow-based approach. Scientific analyses are processed as workflows that capture all steps, parameters, and software versions, ensuring reproducibility and transparency. These workflows can be reused, adapted, and shared, enabling collaboration. GESS supports the execution of workflows analysing large datasets on distributed computing infrastructures, removing technical hurdles for the end user.

To facilitate the use and understanding of Galaxy, a structured collection of training materials has been developed to help users adopt the platform and good practices in Earth system data analysis. These tutorials range from introductions to Galaxy concepts (data management, workflow construction, reproducibility) to domain-specific examples based on Earth science use cases. By combining hands-on tutorials with executable workflows, GESS provides a practical learning environment that supports both individual skill development and community-wide capacity building.

This presentation provides an overview of Galaxy for Earth System Sciences. We will present the platform, its representative tools and workflows, and the associated training ecosystem, and conclude with lessons learned from deploying GESS and perspectives for further development to support Earth system science.

How to cite: Jossé, M., Seguineau, P., Le Bras, Y., Detoc, J., Bodéré, E., Lecaude, E., Grellet, S., and Ramage, K.: Galaxy for Earth System Sciences: An Open Platform for Analysis, Sharing, and Training, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3749, https://doi.org/10.5194/egusphere-egu26-3749, 2026.

X4.58
|
EGU26-18506
|
ECS
Cyrielle Delvenne, João Vitorino, Vânia Lima, Alexander Barth, Abel Dechenne, Steven Pint, Francesco Palermo, and Julien Barde

Blue-Cloud 2026 delivers a transversal suite of Virtual Laboratories (VLabs) implemented on a common Virtual Research Environment (VRE) in D4Science to enable end-to-end, FAIR-aligned marine data science across disciplines: from coastal physics and extremes to biogeochemistry, indicators, and fisheries. Rather than isolated demonstrators, the VLabs share reproducible platform patterns that standardize how users discover data, subset and authenticate to external providers, run analytics, and publish reproducible output. 

Across VLabs, a common interaction model combines (i) gateway-based identity and access, (ii) curated “VRE folders” that distribute ready-to-run notebooks and resources, and (iii) interactive web dashboards for parameter selection (space/time/variables), pre-flight checks (coverage and overlap), and guided execution. A shared technical backbone supports transparent data acquisition and processing: connection to federated repositories and services (e.g., Copernicus Marine, EMODnet, thematic nodes), automated subsetting, and workflow steps that harmonize formats, apply quality control, manage gaps, and generate analysis-ready datasets. Several VLabs implement the same methodological “building blocks” in different contexts: variational mapping/interpolation (DIVAnd) for gridded fields, model–observation fusion, and standardized production of map/time-series outputs (NetCDF plus figures/HTML).
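The "harmonize formats, apply quality control, manage gaps" building blocks mentioned above can be sketched in a few lines of Python. This is an illustration of the general pattern only, not Blue-Cloud code; the flag values follow common oceanographic QC conventions (1 = good, 4 = bad), and the thresholds are invented.

```python
# Illustrative preprocessing sketch: range-based quality flagging and
# linear interpolation of interior gaps in a regularly sampled series.
def qc_flags(values, valid_min, valid_max):
    """Return 1 (good) / 4 (bad) flags per value, flagging gaps and
    out-of-range samples, mimicking common ocean QC flag schemes."""
    return [1 if (v is not None and valid_min <= v <= valid_max) else 4
            for v in values]

def fill_gaps(values):
    """Linearly interpolate interior None gaps; edge gaps are left as-is."""
    vals = list(values)
    for i, v in enumerate(vals):
        if v is None:
            left = next((j for j in range(i - 1, -1, -1) if vals[j] is not None), None)
            right = next((j for j in range(i + 1, len(vals)) if vals[j] is not None), None)
            if left is not None and right is not None:
                frac = (i - left) / (right - left)
                vals[i] = vals[left] + frac * (vals[right] - vals[left])
    return vals

sst = [14.2, None, 14.8, 99.9]          # 99.9 is out of range
print(qc_flags(sst, -2.0, 35.0))        # flags the gap and the spike
print(fill_gaps([14.2, None, 14.8]))    # fills the interior gap
```

Embedding such steps in the VRE, rather than in each user's notebook, is what makes the resulting analysis-ready datasets consistent across VLabs.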

A second cross-cutting layer is scalable computation. While notebooks remain central for transparency and education, compute-heavy workflows increasingly migrate to shared cloud analytics services (e.g., the CCP Analytics Engine), which delegate compute-intensive routines to optimized backend implementations, to (a) reduce local data dependencies through remote subsetting, (b) reuse cached intermediate products, (c) support larger spatio-temporal domains, and (d) generate interactive deliverables (e.g., Plotly dashboards) alongside archival outputs. This pattern is exemplified by indicator services (MHW, OHC, TRIX, SSIv2) but is transferable to other VLabs with large datasets or reproducible executions.

Transversal lessons learned include: (1) interoperability hinges on early harmonization (units, grids, metadata, vocabularies) and “best-practice” preprocessing embedded in the VRE; (2) user trust improves when workflows expose logs, provenance, and configuration exports for audit and reproducibility; (3) robust operations require resilience to upstream outages, authentication variability, and evolving toolchains; and (4) modular design (shared UI patterns, reproducible processing kernels, and standardized outputs) accelerates expansion to new regions, variables, and communities. Collectively, the Blue-Cloud 2026 VLabs demonstrate how a unified VRE can operationalize cross-domain marine analytics, translating distributed infrastructures into consistent user experiences and reproducible digital workflows.

How to cite: Delvenne, C., Vitorino, J., Lima, V., Barth, A., Dechenne, A., Pint, S., Palermo, F., and Barde, J.: Cross-Domain Virtual Laboratories on Blue-Cloud 2026: Shared Technologies and Platform Lessons Learned, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18506, https://doi.org/10.5194/egusphere-egu26-18506, 2026.

X4.59
|
EGU26-18647
Vincent Douet, Marie Jossé, and Cécile Pertuisot

Data Terra, a French national research infrastructure for Earth system observations, brings together five disciplinary poles—FORMATER (solid earth), AERIS (atmosphere), ODATIS (ocean), THEIA (continental surfaces), and PNDB (biodiversity)—to provide access to diverse environmental data and associated services. Several of these hubs support field and observational programs, yet researchers often face uncertainty about the tools, services, and resources available to assist them before, during, and after their field program. This lack of visibility leads to fragmented practices and duplication of effort across domains.

To address this challenge, we initiated an inter-pole collaboration to develop a catalog of services relevant to the needs of scientists for the entire life cycle of their field program. These services include software tools, applications, data management solutions, user support, technical assistance, and training resources. The goal is to provide a coherent and easily navigable overview of the resources offered by Data Terra and its collaborators, free from disciplinary boundaries.

This effort builds on a GeoNetwork-based catalog, expanded to accommodate cross-domain field program needs. A new dedicated thesaurus has been created to classify the diverse resources, ensure consistent tagging, and ease the search for and findability of resources. By structuring needs and services in a shared semantic framework, we aim to enhance discoverability, foster interoperability between poles, and better support the scientific communities conducting environmental field programs.

How to cite: Douet, V., Jossé, M., and Pertuisot, C.: Addressing the needs of Earth system field programs with a unified service catalog, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18647, https://doi.org/10.5194/egusphere-egu26-18647, 2026.

X4.60
|
EGU26-5273
Natalia Donoho

Contemporary Earth and space science relies on complex infrastructures that connect observations, data transmission, processing systems and digital services. A critical but often under-recognized component of this system is access to radio-frequency spectrum, which enables both the acquisition of observations and the real-time exchange of data from meteorological satellites, weather radars, radiosondes, space weather sensors and other Earth observation systems. The radio-frequency spectrum is a physically limited and increasingly contested resource.

The World Meteorological Organization (WMO), a specialized agency of the United Nations with 193 Members, provides the global coordination framework for observing and data systems through the WMO Integrated Global Observing System (WIGOS), the WMO Information System (WIS) and the WMO Integrated Processing and Prediction System (WIPPS). These systems support standardized observations, global data exchange and the delivery of operational services that underpin numerical weather prediction, climate monitoring, hydrology and environmental applications worldwide.

Within this framework, the WMO Space Programme coordinates international activities related to the availability and use of satellite data and products, capacity development, space weather coordination, and cooperation on radio-frequency spectrum use. This includes engagement with scientific and regulatory communities through the Expert Team on Radio-Frequency Coordination, contributions to international technical studies, development of joint guidance (e.g., WMO–International Telecommunication Union (ITU) handbooks), and coordinated preparations for the World Radiocommunication Conference 2027 (WRC-27).

This presentation frames spectrum coordination as a core element of Earth observation data systems and highlights its role in maximizing the economic, social and environmental value of global meteorological infrastructures, including societally critical initiatives such as Early Warnings for All (EW4All).

How to cite: Donoho, N.: Radio-Frequency Spectrum for Earth Observation Data Systems: International Coordination toward WRC-27, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-5273, https://doi.org/10.5194/egusphere-egu26-5273, 2026.

X4.61
|
EGU26-10574
Paul Weerheim, Peter Thijsse, Tjerk Krijger, Robin Kooyman, and Dick Schaap

The Horizon Europe Blue-Cloud 2026 project evolved the pilot Blue-Cloud infrastructure into an ecosystem supporting FAIR and open data and analytical services. This ecosystem is envisioned as a data and analytical component for EDITO and can serve as a blueprint for thematic EOSC instances and Research Infrastructures. Within this context, a concrete plan was developed for high-performance data subsetting capabilities across the Blue-Cloud Virtual Research Environment (VRE), enabling researchers and WorkBench developers to access harmonised and validated Essential Ocean Variables (EOVs) from heterogeneous sources.

To implement this, the project adopted the fully open-source (AGPLv3) Beacon technology developed by MARIS as the core software for deploying data lakes across the VRE. Beacon provides very fast and easy access to data subsets from large multidisciplinary collections, returning a single harmonised output file regardless of the source formats. Eight monolithic Beacon instances were deployed for major Blue Data Infrastructure (BDI) collections including the World Ocean Database, ERA5, Copernicus Marine CORA, Euro-Argo, and SeaDataNet. All instances were integrated with the D4Science federated AAI and complemented by dedicated Jupyter notebooks to support reproducible workflows.

Based on extensive testing with the WorkBench teams, two integrated Beacon instances have been developed, combining data from multiple monolithic nodes through Beacon’s federation capabilities. A common metadata profile was set up in collaboration with the WorkBenches to support semantic harmonisation across different data sources, using the NERC Vocabulary Service, semantic tools, and unit conversions. These merged nodes demonstrate cross-infrastructure data integration, representing a significant step toward a European-scale federated data ecosystem.

This presentation will demonstrate how Beacon enables integrated workflows across infrastructures, significantly reducing effort for both data providers and researchers. While widely used in Blue-Cloud, Beacon’s design is domain-agnostic, with ongoing applications in other European and national initiatives, illustrating its potential as an innovative data lake tool for federating infrastructures.

How to cite: Weerheim, P., Thijsse, P., Krijger, T., Kooyman, R., and Schaap, D.: Beacon data lakes for federated, high-performance access to marine data in the Blue-Cloud2026 ecosystem, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-10574, https://doi.org/10.5194/egusphere-egu26-10574, 2026.

X4.62
|
EGU26-12890
Robert Huber, Kerstin Lehnert, and Jens Klump

The long-term sustainability of data repositories in the Earth and environmental sciences is increasingly influenced by evolving institutional priorities, fluctuating funding, and shifting governance frameworks. Recent events have highlighted the vulnerabilities in the continued accessibility of major US-based climate and environmental datasets, particularly in the context of political shifts, underscoring the fragility of even well-established infrastructures. Against this backdrop, we propose a multi-level, network-oriented model for strengthening the resilience of Earth and environmental data infrastructures. This model, in addition to enhancing the self-healing capabilities of individual repositories, aims to establish a common framework for cooperative stewardship.

Until recently, frameworks such as CoreTrustSeal and the Repository Crisis Scorecards developed by the ESIP Repositories Resilience Project have focused on risk assessment, mitigation, and to some extent resilience, but less on recovery. In our approach, resilience is conceptualised as the coordinated interaction of several layers that collectively enhance the “rescuability” of essential scientific data. These layers include: networks of mutual support, in which repositories proactively coordinate to prepare for and respond to operational crises, sharing responsibilities to reduce the risk of isolated failures; harmonized technologies, standards, common protocols, and training, enabling efficient creation of rescue-ready data packages; a structured validation and stress-testing framework to assess vulnerabilities using transparent, scenario-based criteria; and a contingency layer providing shared resources, such as temporary storage or hosting deliberately reserved to support other repositories, enabling distributed, peer-to-peer replication workflows that keep data accessible even when local systems cannot operate fully. A further component of this approach is the prioritisation of critical or at-risk datasets, ensuring that limited rescue capacity is directed toward collections whose loss would most severely affect research continuity and societal monitoring needs.
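The core of a "rescue-ready data package" can be sketched as a fixity manifest: checksums that let a peer repository verify a replicated copy without the source repository being online. This is an assumed illustration of the general pattern (in the spirit of BagIt-style packaging), not the tooling of any framework named above.

```python
# Minimal fixity-manifest sketch for peer-to-peer replication workflows.
import hashlib
import json

def manifest(files: dict) -> str:
    """files maps relative paths to bytes; returns a JSON fixity manifest
    recording a SHA-256 digest and size for every file in the package."""
    entries = {path: {"sha256": hashlib.sha256(data).hexdigest(),
                      "size": len(data)}
               for path, data in files.items()}
    return json.dumps({"manifest_version": 1, "entries": entries}, indent=2)

def verify(files: dict, manifest_json: str) -> bool:
    """A peer re-computes checksums to confirm its copy is intact."""
    entries = json.loads(manifest_json)["entries"]
    return all(hashlib.sha256(files[p]).hexdigest() == e["sha256"]
               for p, e in entries.items())

pkg = {"data/obs.csv": b"t,value\n0,1.2\n", "metadata.json": b"{}"}
m = manifest(pkg)
print(verify(pkg, m))  # True for an intact copy
```

Because the manifest is self-describing, it can travel with the package to any contingency host, which is what allows verification to proceed even when the originating repository is unavailable.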

We illustrate this approach with examples from existing Earth and environmental science repositories, and argue that even small and mid-sized infrastructures can benefit from strategies that preserve core data and metadata, even if complete restoration of complex interfaces or ingestion pipelines might be impractical. Given the heterogeneity, scale, and long-term relevance of environmental data, developing tiered, distributed resilience strategies is essential for maintaining scientific continuity in an era of increasing systemic uncertainty.

How to cite: Huber, R., Lehnert, K., and Klump, J.: Identifying the Essentials: Distributed Resilience for Safeguarding Scientific Data in Times of Uncertainty, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-12890, https://doi.org/10.5194/egusphere-egu26-12890, 2026.

X4.63
|
EGU26-14822
Christian Pagé, Dick M.A. Schaap, Niku Kivekäs, Tjerk Krijger, Zoé Garcia, Roel Vermeulen, Zimbo Boudewijns, Harry Vereecken, Christian Poppe Terán, Pablo Serret Ituarte, Karel Klem, Lucia Mona, Päivi Haapanala, and Janne Rinne

Effective adaptation to climate change requires a comprehensive understanding of climate-related risks, including their underlying drivers—hazards, exposure, and vulnerability—and their impacts on human, economic, and natural systems. The Integrated Research Infrastructure Services for Climate Change Risks (IRISCC) project brings together a consortium of leading and complementary research infrastructures spanning natural and social sciences and covering a wide range of domains and sectors. IRISCC integrates these capabilities through Service Design Labs, which apply co-design and transdisciplinary approaches, and through Service Demonstrators that benchmark and validate cross-infrastructure services.

The IRISCC Demonstrators are pilot projects designed to showcase the added value of combining data, tools, and expertise from multiple research infrastructures to create new services that are beyond the capacity of a single infrastructure to provide. By connecting existing environmental research infrastructures with the growing demand for actionable climate-risk knowledge, IRISCC aims to accelerate the development of integrated solutions for climate change risk assessment.

This presentation will illustrate how future climate data are being incorporated across all six Demonstrators, and how these datasets are combined with other research infrastructure resources to assess climate-related risks. Finally, we will introduce the Transnational and Virtual Access opportunities offered through IRISCC access calls, highlighting how researchers and stakeholders can access Europe’s climate-risk research facilities and services to engage with the IRISCC community.

This work was supported by the IRISCC project. IRISCC is funded by the European Union (Horizon Europe) under grant agreement No 101131261.

How to cite: Pagé, C., M.A. Schaap, D., Kivekäs, N., Krijger, T., Garcia, Z., Vermeulen, R., Boudewijns, Z., Vereecken, H., Poppe Terán, C., Serret Ituarte, P., Klem, K., Mona, L., Haapanala, P., and Rinne, J.: Supporting the Needs of Climate Change Risks Assessment Using Data from Several Research Infrastructures, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14822, https://doi.org/10.5194/egusphere-egu26-14822, 2026.

X4.64
|
EGU26-19974
Christof Lorenz, Nils Brinckmann, Claas Faber, Marc Hanisch, Roland Koppe, Ralf Kunkel, Ulrich Loup, Mihir Rambihia, Hylke van der Schaaf, and David Schäfer and the STAMPLATE-Team within the DataHub Initiative of the Research Field Earth and Environment

Environmental observations from sensor systems remain one of the most important sources of ground-truth data in Earth System Sciences. In particular, the rapid rise of AI-based methods, high-resolution modelling, and the growing demand for near-real-time reference data to support environmental decision-making have substantially increased the need for reliable, interoperable, and AI-ready observational data.

To enable seamless integration and effective use of such data across diverse application scenarios (especially when combining observations from multiple sources), consistent data structures, well-defined interfaces, and harmonised, machine-readable metadata are essential. These requirements represent both a technical and a community-driven challenge and form a key prerequisite for ensuring the AI-readiness of sensor data.

Within the Helmholtz Research Field Earth & Environment, the DataHub initiative addresses this challenge by developing a uniform and FAIR research data infrastructure for observational time-series data across all seven contributing German Helmholtz Centres. Central to this infrastructure is the STAMPLATE-Schema, a unified metadata schema for sensor-based observational data built around the OGC SensorThings API. The STAMPLATE-Schema serves as the semantic backbone of the DataHub ecosystem, providing a shared, machine-actionable language to describe deployments, sensors, and observations. It is built upon JSON-LD and schema.org, enabling semantic interoperability, extensibility, and direct compatibility with web technologies and AI workflows.
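To give a flavour of what JSON-LD with schema.org terms looks like for sensor metadata, the record below is an illustrative example only; it is not the actual STAMPLATE-Schema, and the station, values, and coordinates are invented.

```python
# Illustrative JSON-LD record using schema.org vocabulary for an
# observational dataset (NOT the actual STAMPLATE-Schema).
import json

record = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "Soil moisture at station X (illustrative)",
    "variableMeasured": {
        "@type": "PropertyValue",
        "name": "volumetric soil water content",
        "unitText": "m3/m3",
    },
    "temporalCoverage": "2024-01-01/2024-12-31",
    "spatialCoverage": {
        "@type": "Place",
        "geo": {"@type": "GeoCoordinates", "latitude": 50.9, "longitude": 6.4},
    },
}

# The record is ordinary JSON, so any web client or AI pipeline can parse it;
# the @context is what makes the keys resolvable to shared semantics.
print(json.dumps(record, indent=2))
```

This combination of plain JSON serialization and a resolvable vocabulary is what the abstract means by "direct compatibility with web technologies and AI workflows".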

The STAMPLATE-Schema connects and aligns the core ecosystem components, including the Sensor Management System (SMS), which provides user-friendly management of sensor and deployment metadata, and the Earth Data Portal (EDP), which supports cataloguing, discovery, and visualisation of SensorThings API–based data. Additional integrations, such as the System for automated Quality Control (SaQC) and time-series handling via time.io, build on this shared metadata foundation and support typical observational data workflows, including data flagging, quality assessment, and downstream processing.

The STAMPLATE-Schema and the associated federated SensorThings API–based data infrastructures are currently being implemented across several major German research centres and large-scale observational projects, including the TERENO network with its multiple observatories. Together, they are expected to provide access to more than 20 billion observations from seven research centres spanning multiple environmental research domains, including terrestrial, atmospheric, and marine systems, by the end of the year.

The DataHub and the STAMPLATE-Schema thus provide a common metadata language and framework for FAIR and AI-ready sensor data across our research field and similar federated research data infrastructures.

How to cite: Lorenz, C., Brinckmann, N., Faber, C., Hanisch, M., Koppe, R., Kunkel, R., Loup, U., Rambihia, M., van der Schaaf, H., and Schäfer, D. and the STAMPLATE-Team within the DataHub Initiative of the Research Field Earth and Environment: The STAMPLATE-Schema as a unifying metadata language for FAIR and AI-ready environmental time-series data in the DataHub ecosystem, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-19974, https://doi.org/10.5194/egusphere-egu26-19974, 2026.

X4.65
|
EGU26-21334
|
ECS
Tom Niers and Daniel Nüst

For tackling the challenge of discovering scientific articles, researchers pursue several options: they conduct general web searches, explore generic academic databases like OpenAlex or Google Scholar, use a discipline-specific portal, task an AI agent, or consider personal recommendations, e.g., via social media. These options typically rely on one of the following approaches: search terms (title, abstracts, keywords, or full text), citation chaining, editorial curation (e.g., topical journals), or known authors and affiliations. However, as the number of publications is continuously rising, there is a need for additional methods that link scientific content in novel ways and help to find relevant works. An underused approach relies on the fact that almost all research has a spatial and temporal component, i.e. the “where” and “when” of a scientific article. How do I find a scientific article by exploiting its geographic context? Currently, if any spatio-temporal metadata can be found at all, it is likely to relate to the author's affiliation or the date of publication rather than the actual content of the research. The latter information is hidden, for example as place names or coordinates, in the full text, supplementary materials, visualisations, or data, but it is not available as human- and machine-readable metadata.

In this work, we present novel tools that integrate spatio-temporal metadata into the scholarly publishing process: the geoMetadata plugin and OPTIMAP. The geoMetadata plugin (https://github.com/TIBHannover/geoMetadata) provides authors and journal managers with straightforward tools, such as an interactive map, to collect valid spatio-temporal article metadata during the submission process in the widely used scholarly publishing platform Open Journal Systems (OJS). The resulting metadata is published in a machine-readable format, and articles are made discoverable on maps after publication. Building on this, OPTIMAP (https://github.com/GeoinformationSystems/optimap) demonstrates how scientific articles from several journals can be found via a single map view and published through one open API.

To fully realise the potential of spatio-temporal metadata, a large amount of existing literature needs to be enriched with trustworthy spatio-temporal metadata. We sketch a new framework to support this enrichment both during the submission process and for already existing literature. First, various technologies will be evaluated: (i) Named Entity Recognition (NER), which leverages controlled gazetteers to extract place names and temporal expressions; (ii) Optical Character Recognition (OCR) to recover spatio-temporal information from maps and figures; and (iii) Large Language Models (LLMs) for full-document reasoning. In a second step, the framework will be applied in both an assistance mode (e.g., during the submission process) and a fully automatic mode (back catalogues of journals, publishers, conference series, etc.) to extract spatio-temporal metadata. The extracted metadata could undergo different curation and validation steps and ultimately become available as part of a discipline-specific knowledge graph or generic academic databases. Once such data exists at scale, one can explore extensions to scientific search portals, or improvements to handling spatio-temporal metadata throughout the whole research data management (RDM) cycle.
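Option (i) can be sketched as a toy gazetteer lookup combined with a simple year pattern. The gazetteer entries and regular expression below are illustrative stand-ins for controlled vocabularies such as GeoNames and for proper temporal taggers.

```python
# Toy gazetteer-based extraction of place names and year mentions from text.
import re

# Illustrative gazetteer: place name -> (latitude, longitude).
GAZETTEER = {"Vienna": (48.21, 16.37), "Gulf of Trieste": (45.6, 13.6)}

def extract_spatiotemporal(text: str) -> dict:
    """Return gazetteer places found in the text and any four-digit
    years between 1800 and 2099, as a crude temporal signal."""
    places = [(name, coords) for name, coords in GAZETTEER.items()
              if name in text]
    years = re.findall(r"\b(1[89]\d{2}|20\d{2})\b", text)
    return {"places": places, "years": sorted(set(years))}

doc = "Sediment cores from the Gulf of Trieste were sampled in 2019 and 2021."
print(extract_spatiotemporal(doc))
```

A production system would add disambiguation (many place names are ambiguous), resolve relative temporal expressions, and attach confidence scores so the assistance mode can ask the author to confirm uncertain matches.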

How to cite: Niers, T. and Nüst, D.: Putting Science on the Map: Spatio-Temporal Metadata for Scientific Article Discovery, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21334, https://doi.org/10.5194/egusphere-egu26-21334, 2026.

X4.66
|
EGU26-9690
Rachele Franceschini, Nydia Catalina Reyes Suarez, Alessandro Altenburger, Giuliana Rossi, and Alessandra Giorgetti

Within the framework of the ITINERIS project, funded by the NextGenerationEU Programme (2022–2025), activities focus on the downstream effects of climate and environmental change. The Downstream Virtual Research Environment (VRE) supports the use of Research Infrastructures by providing tools for data visualization, analysis, and sharing, and is hosted on the D4Science infrastructure, where dedicated marine and land domain toolboxes have been developed (Assante et al., 2019, 2021).

The marine domain toolbox leverages available datasets to create an integrated dataset of temperature, salinity, pH, and CO₂ for the Gulf of Trieste (Italy), with particular emphasis on data from the National Institute of Oceanography and Applied Geophysics (OGS) over the last ten years, used as a representative use case. After data harvesting, validation, quality control, and merging are carried out using ERDDAP Navigator, a web application that enables data visualization, quality flag assignment, analysis, and integration. The resulting integrated dataset is then employed to compute climate change indicators, including ocean acidification and ocean carbon cycle budgets.

The land domain toolbox is designed to analyse areas affected by hydrogeological hazards, with a specific focus on landslide processes. In this context, a GeoServer and a GeoNetwork have been implemented to host regional-scale maps. At the local scale, several monitoring systems—including interferometric radar, GPS, extensometers, inclinometers, a video camera, and a data coordinator—have been installed to identify potential ground instabilities. The monitoring instruments provide geospatial data from which time-series datasets are derived. All data products are documented and downloadable, and a dedicated web application supports time-series visualization and analysis.

Acknowledgements

The work has been funded by EU - Next Generation EU Mission 4 “Education and Research” - Component 2: “From research to business” - Investment 3.1: “Fund for the realisation of an integrated system of research and innovation infrastructures” - Project IR0000032 – ITINERIS - Italian Integrated Environmental Research Infrastructures System - CUP B53C22002150006.

The authors acknowledge the Research Infrastructures participating in the ITINERIS project with their Italian nodes: ACTRIS, ANAEE, ATLaS, CeTRA, DANUBIUS, DISSCO, e-LTER, ECORD, EMPHASIS, EMSO, EUFAR, Euro-Argo, EuroFleets, Geoscience, IBISBA, ICOS, JERICO, LIFEWATCH, LNS, N/R Laura Bassi, SIOS, SMINO.


References

Assante, M., Candela, L., Castelli, D., Cirillo, R., Coro, G., Frosini, L., Lelii, L., Mangiacrapa, F., Pagano, P., Panichi, G., & Sinibaldi, F. (2019). Enacting open science by D4Science. Future Generation Computer Systems, 101, 555–563. https://doi.org/10.1016/j.future.2019.05.063

Assante, M., Candela, L., & Pagano, P. (2021). Blue-Cloud D4.4 Blue Cloud VRE Common Facilities (Release 2). https://doi.org/10.5281/ZENODO.10070443

How to cite: Franceschini, R., Reyes Suarez, N. C., Altenburger, A., Rossi, G., and Giorgetti, A.: Land and Marine earth science applications within a Downstream Virtual Research Environment, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-9690, https://doi.org/10.5194/egusphere-egu26-9690, 2026.

X4.67
|
EGU26-11078
Koen Greuell, Geerten Hengeveld, Spiros Koulouzis, Gabriel Pelouze, Quan Pan, and Zhiming Zhao

Modern research increasingly relies on complex data workflows, digital twins, and AI-driven models. While use-case-specific virtual labs within virtual research environments (VREs) help make these computing-centric techniques FAIR (Findable, Accessible, Interoperable, and Reusable), the transition from a technical demonstrator to a sustainable, community-wide service remains challenging. Development often stalls due to misaligned incentives for tool maintenance and coordination gaps between domain scientists and software engineers.

To overcome these coordination challenges, we propose a Virtual Lab Maturity Model designed to guide the co-development process. This model provides a structured framework to assess and evolve virtual labs through defined technical and functional milestones. By identifying gaps in research asset integration early, the model ensures that scientific workflows remain technically sustainable and reproducible.

We demonstrate the application of this framework within the Notebook-as-a-Virtual-Research-Environment (NaaVRE). The framework is currently deployed across ecology-focused virtual labs, co-developed by domain specialists, scientific software engineers, and the DevOps engineers at LifeWatch ERIC. One application is the LTER-LIFE project, where the maturity model steers the development of digital twins for Dutch aquatic and terrestrial ecosystems. These virtual labs facilitate collaborative research; for example, a dedicated lab integrating the LANDIS-II forest landscape model enables researchers to configure and adapt simulations for site-specific scenarios.

The Virtual Lab Maturity Model facilitates a common language across disciplines and ensures alignment with FAIR principles. This systematic approach allows for the evolution of virtual labs from initial prototypes into collaborative platforms capable of supporting large-scale research. By formalizing the path to maturity, we provide a scalable roadmap for building digital infrastructure in the environmental sciences.

How to cite: Greuell, K., Hengeveld, G., Koulouzis, S., Pelouze, G., Pan, Q., and Zhao, Z.: A Maturity Model for Facilitating Virtual Lab's Co-Development, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-11078, https://doi.org/10.5194/egusphere-egu26-11078, 2026.

X4.68
|
EGU26-11751
Antonie Haas, Birgit Heim, Annett Bartsch, Andreas Walter, Roland Koppe, Peter Konopatzky, Mareike Wieczorek, Guido Grosse, Tazio Strozzi, Sebastian Westermann, Frank Martin Seifert, and Sonja Hänzelmann

We demonstrate the latest version of the visualization of permafrost-related map products in the context of the ESA CCI+ Permafrost initiatives (Phases I and II, 2018–2021 and 2023–2026). Already in the ESA DUE GlobPermafrost project (2016–2018), a comprehensive range of remote sensing products was produced by the project consortium and visualized in a viewer within the AWI O2A (Observation to Analysis & Archive) infrastructure framework: north–south transects in the northern hemisphere with trends in Landsat multispectral indices (e.g. Tasseled Cap Brightness, Greenness and Wetness, and the Normalized Difference Vegetation Index (NDVI)), Arctic land cover (e.g. shrub height and vegetation composition), lake ice grounding, and InSAR-based land surface deformation and rock glacier velocities. The main products were the Global Permafrost Essential Climate Variables (ECVs), derived from a spatially distributed permafrost model driven by Land Surface Temperature and Snow Water Equivalent products. These Permafrost ECVs include mean annual ground temperature (MAGT) and active layer thickness (ALT) at pixel level, as well as permafrost extent and probability (PFR).

Within the ESA CCI+ Permafrost project, time was incorporated into the products as a key climate-related dimension, resulting in a time series spanning more than twenty years. It comprises CCI+ Permafrost circum-Arctic model output for MAGT, from the surface down to a depth of 10 metres, as well as PFR and ALT. All data products are available at yearly resolution, together with averages of MAGT, PFR and ALT calculated over the full time series.

To make the products publicly visible, we created WebGIS projects within the O2A data workflow framework at AWI. This modular, scalable and highly automated spatial data infrastructure (SDI) has been developed and operated at AWI for over a decade; it is continuously improved and provides map services for geographic information system (GIS) clients and portals. The FAIR principles were implemented to address the increasing demand for research data and metadata that are discoverable, accessible and reusable. The ESA Permafrost WebGIS products were designed using GIS software and published as Web Map Services (WMS), an internationally standardised Open Geospatial Consortium (OGC) interface, using GIS server technology. Additionally, project-specific visualisations of raster and vector data products have been developed, adapted to the projects' spatial scales and resolutions.
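As a sketch of the publication pattern described above: a layer published as a WMS can be retrieved by any OGC-compliant client with a standard GetMap request. The endpoint and layer name below are hypothetical placeholders for illustration, not the actual AWI service; the request parameters follow the WMS 1.3.0 standard.

```python
from urllib.parse import urlencode

def build_getmap_url(base_url, layer, bbox, width, height,
                     crs="EPSG:4326", fmt="image/png"):
    """Assemble an OGC WMS 1.3.0 GetMap request URL."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.3.0",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "CRS": crs,
        # BBOX axis order depends on the CRS definition in WMS 1.3.0
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": str(width),
        "HEIGHT": str(height),
        "FORMAT": fmt,
    }
    return f"{base_url}?{urlencode(params)}"

# Hypothetical endpoint and layer name, for illustration only
url = build_getmap_url(
    "https://example.org/geoserver/permafrost/wms",
    layer="permafrost:magt_mean",
    bbox=(50.0, -180.0, 90.0, 180.0),  # EPSG:4326 uses lat,lon order in WMS 1.3.0
    width=1024, height=512,
)
```

Because the interface is standardised, the same request works against any WMS-capable server, which is what makes the published layers directly consumable by GIS clients and portals.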

In addition to the data products derived from remote sensing, the locations of the permafrost community's WMO GCOS ground-monitoring networks, which are managed by the International Permafrost Association (IPA) and form part of the Global Terrestrial Network for Permafrost (GTN-P), were incorporated as a feature layer and are updated on an ongoing basis. All data products were registered with Digital Object Identifiers (DOIs) and published in the PANGAEA or ESA CEDA data archives.

How to cite: Haas, A., Heim, B., Bartsch, A., Walter, A., Koppe, R., Konopatzky, P., Wieczorek, M., Grosse, G., Strozzi, T., Westermann, S., Seifert, F. M., and Hänzelmann, S.: ESA CCI Permafrost time series maps as Essential Climate Variable (ECV) products primarily derived from satellite measurements and visualized within the AWI O2A (Observation to Analysis and Archive) framework, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-11751, https://doi.org/10.5194/egusphere-egu26-11751, 2026.

X4.69
|
EGU26-19457
João Vitorino, Vânia Lima, Juan Gabriel Fernández, Enrique Castrillo, Mélanie Juza, Nikos Zarokanellos, Jay Pearlman, René Garello, and Sigmund Kluckner

Coastal ocean areas are among the most complex and important marine regions of the world. Nowadays, a broad range of observations is collected in coastal ocean regions using many different systems, and a panoply of numerical models is used to hindcast and forecast these areas. Much of these data are available to users through data aggregators and service providers such as EMODnet or the Copernicus Marine Service.

The full potential of free access to this vast data pool is, however, frequently missed due to the difficulties users experience in handling the datasets, extracting the relevant information and combining different datasets in an integrated analysis. The ICOOE (Integration of Coastal Ocean Observations along Europe) VLab was developed and opened to the community in the framework of the Blue Cloud 2026 project (EU Horizon Europe) to support users in coping with these difficulties.

ICOOE proposes three complementary thematic services, providing a number of FAIR-oriented tools and services that take full advantage of the Blue Cloud Virtual Research Environment and of globally accepted Ocean Best Practices and standards to explore key areas of coastal ocean research and operational use. The “Transboundary Transport and Connectivity” Thematic Service focuses on the subinertial dynamics of coastal ocean areas. A dashboard environment allows users to specify the geographical domain, time period and parameters of interest. The service identifies the datasets available for these choices, then downloads and preprocesses the datasets of interest. The user can then select from a number of tools for exploration (e.g. basic statistics) or integration (e.g. transport pathways) of the datasets. The pilot demonstrator of this thematic service accepts user domains within the broader Iberian Margin area and focuses on surface currents provided by HF radars and numerical models.

The “Extreme Events” Thematic Service explores the impacts of extreme storm events on the coastal ocean environment. Based on a user interface similar to the one described above, this Thematic Service supports users in characterizing the conditions associated with three extreme storms that impacted European coastal ocean areas, in particular their effects on the bottom sedimentary cover and on structures installed offshore.

The “Ocean Glider” Thematic Service aims to demonstrate the added-value chain of glider missions, from data acquisition to advanced products and visualizations for improved coastal information, integrating ocean state and variability derived from repeated glider transects. Starting from input data provided by the user (raw Slocum glider data or an OG1.0-standard dataset), the service offers a processing toolbox (based on Python Jupyter notebooks) designed to generate interpolated profiles on a regular grid along the glider monitoring line, based on the vertical and horizontal resolution of the raw data. It includes vertical sections of key parameters such as potential temperature, practical salinity, potential density, and geostrophic velocity. Additionally, an Advanced Data Viewer is provided for enhanced data exploration and visualization.
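The gridding step described above — interpolating scattered glider samples onto a regular (distance-along-track, depth) grid for section plots — can be sketched as follows. This is a minimal illustration assuming scattered samples as flat arrays; the function and variable names are illustrative, not the actual API of the ICOOE toolbox.

```python
import numpy as np
from scipy.interpolate import griddata

def grid_glider_section(along_km, depth_m, values, d_res_km=1.0, z_res_m=5.0):
    """Interpolate scattered glider samples onto a regular
    (distance-along-track, depth) grid for section plotting."""
    d_axis = np.arange(along_km.min(), along_km.max() + d_res_km, d_res_km)
    z_axis = np.arange(depth_m.min(), depth_m.max() + z_res_m, z_res_m)
    dd, zz = np.meshgrid(d_axis, z_axis)  # grid shape: (z_axis.size, d_axis.size)
    section = griddata(
        points=np.column_stack([along_km, depth_m]),
        values=values,
        xi=(dd, zz),
        method="linear",  # linear inside the data hull, NaN outside
    )
    return d_axis, z_axis, section

# Synthetic stand-in for a saw-tooth glider transect:
# temperature decreasing with depth plus noise
rng = np.random.default_rng(0)
along = rng.uniform(0, 50, 2000)    # km along the monitoring line
depth = rng.uniform(0, 200, 2000)   # m
temp = 18.0 - 0.05 * depth + 0.1 * rng.standard_normal(2000)
d, z, sec = grid_glider_section(along, depth, temp)
```

The resulting `sec` array can be passed directly to a filled-contour plot of the vertical section; the same pattern applies to salinity, density, or velocity fields.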

This communication presents the core capacities of the three thematic services implemented, with use cases illustrating how they can support coastal ocean users.

How to cite: Vitorino, J., Lima, V., Fernández, J. G., Castrillo, E., Juza, M., Zarokanellos, N., Pearlman, J., Garello, R., and Kluckner, S.: ICOOE, a Virtual Laboratory boosting the exploration and integration of coastal ocean observations along Europe, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-19457, https://doi.org/10.5194/egusphere-egu26-19457, 2026.

X4.70
|
EGU26-22733
Linshu Hu, Yuanyuan Wang, and Zhenhong Du

Big Data and AI are reshaping Earth science discovery, yet deep-time geoscience still faces persistent barriers: scattered heterogeneous datasets, fragmented analysis tools, and limited end-to-end support for reproducible, collaborative workflows. These gaps make it difficult to harmonize data, knowledge, models, and computing across communities. We present DEEP (DDE Enabling and Empowering Platform), a one-stop online research platform under the Deep-time Digital Earth (DDE) program, providing a unified entry point (https://deep-time.org) to deep-time data, knowledge, models, and scalable computing services. DEEP aims to enable and empower collaborative innovation and discovery by geoscientists worldwide, strengthening reproducibility across the research lifecycle under Open Science practices.

How to cite: Hu, L., Wang, Y., and Du, Z.: DEEP Platform: Empowering Global Geoscientists in Data-Driven Research Era, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-22733, https://doi.org/10.5194/egusphere-egu26-22733, 2026.

Posters virtual: Mon, 4 May, 14:00–18:00 | vPoster spot 1b

The posters scheduled for virtual presentation are given in a hybrid format for on-site presentation, followed by virtual discussions on Zoom. Attendees are asked to meet the authors during the scheduled presentation & discussion time for live video chats; onsite attendees are invited to visit the virtual poster sessions at the vPoster spots (equal to PICO spots).
Discussion time: Mon, 4 May, 16:15–18:00
Display time: Mon, 4 May, 14:00–18:00
Chairperson: Filippo Accomando

EGU26-21965 | Posters virtual | VPS21

An EOSC Node Ireland Pilot Study: Bridging European and National e-Infrastructures for Reproducible Sentinel-2 Data Ingestion in Quarry Applications 

Flaithri Neff, Roberto Sabatino, Alfredo Arreba, and Jerry Sweeney
Mon, 04 May, 14:00–14:03 (CEST)   vPoster spot 1b

The establishment of the European Open Science Cloud (EOSC) places renewed emphasis on the role of national e-infrastructures in enabling standards-based, interoperable, and reusable research workflows in the EU. Within the context of Ireland’s EOSC Node, there is particular interest in demonstrating how European-scale open-data services can be ingested by national research clouds, transformed into analysis-ready assets, and made available for both open research and applied industry use cases. Earth Observation (EO) provides a strong test case, given the volume and complexity of the data involved and its growing role in scalable environments that support operational decision-making.

This pilot project, QuarryLink, presents a Phase-1 study focused on building a reproducible EO data ingestion workflow that connects the Copernicus Data Space Ecosystem with the HEAnet Research Cloud, operating on the SURF Research Cloud platform. Through a real-world quarry case study in the Dublin region (Ireland), the work demonstrates how EOSC-aligned principles, including auditable machine-readable workflows, can be applied from the outset of the EO research process. We will demonstrate how precise spatial boundaries can be defined and validated; how modern OAuth-based authentication mechanisms can be integrated into research cloud workflows; and how Sentinel-2 Level-2A products can be programmatically discovered, retrieved, and prepared for downstream analysis using current Copernicus services.
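The programmatic discovery step described above can be sketched as an OData catalogue query. The filter syntax below follows the Copernicus Data Space Ecosystem OData API as publicly documented, but endpoint details and attribute names should be verified against the current documentation; the area of interest is a hypothetical polygon near Dublin, and the commented-out request shows where the OAuth bearer token obtained from the identity service would be used.

```python
from datetime import date

CATALOGUE = "https://catalogue.dataspace.copernicus.eu/odata/v1/Products"

def build_s2_l2a_filter(wkt_polygon, start, end, max_cloud=20.0):
    """Compose an OData $filter selecting Sentinel-2 Level-2A products
    that intersect an AOI within a date range, below a cloud-cover cap."""
    parts = [
        "Collection/Name eq 'SENTINEL-2'",
        ("Attributes/OData.CSC.StringAttribute/any("
         "att:att/Name eq 'productType' and "
         "att/OData.CSC.StringAttribute/Value eq 'S2MSI2A')"),
        f"OData.CSC.Intersects(area=geography'SRID=4326;{wkt_polygon}')",
        f"ContentDate/Start gt {start.isoformat()}T00:00:00.000Z",
        f"ContentDate/Start lt {end.isoformat()}T00:00:00.000Z",
        ("Attributes/OData.CSC.DoubleAttribute/any("
         "att:att/Name eq 'cloudCover' and "
         f"att/OData.CSC.DoubleAttribute/Value lt {max_cloud})"),
    ]
    return " and ".join(parts)

# Hypothetical AOI near Dublin, for illustration only
aoi = "POLYGON((-6.4 53.2,-6.1 53.2,-6.1 53.4,-6.4 53.4,-6.4 53.2))"
flt = build_s2_l2a_filter(aoi, date(2025, 1, 1), date(2025, 6, 30))

# The query itself would then be issued with an OAuth bearer token, e.g.:
# requests.get(CATALOGUE, params={"$filter": flt, "$top": "50"},
#              headers={"Authorization": f"Bearer {token}"})
```

Keeping the query construction as plain, versionable code is one way the workflow stays auditable and machine-readable end to end.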

By executing the ingestion workflow on the HEAnet Research Cloud using open-source geospatial tooling, the pilot aims to establish an analytics-ready foundation for working with Sentinel-2 data in a reproducible research cloud environment. The resulting data products are structured to support downstream analysis, with compute resources accessed dynamically through the HEAnet Research Cloud workspace as required. Building on this foundation, Phase 2 will focus on developing time-series analyses, EO data cubes, and derived environmental indicators to support both research-driven investigation and applied monitoring scenarios in European quarry environments.

More broadly, the pilot seeks to illustrate how EOSC-aligned integration across data ingestion and compute layers can support open research practices while enabling scalable, real-world EO-enabled industrial applications, providing a practical pathway for national EOSC Nodes to translate open data into shareable analytics and societal impact.

How to cite: Neff, F., Sabatino, R., Arreba, A., and Sweeney, J.: An EOSC Node Ireland Pilot Study: Bridging European and National e-Infrastructures for Reproducible Sentinel-2 Data Ingestion in Quarry Applications, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21965, https://doi.org/10.5194/egusphere-egu26-21965, 2026.
