BG9.9 | Data science, earth observation and AI for sustainable agroecosystem monitoring and management
Convener: Helge Aasen | Co-conveners: Sheng Wang, Stefan Erasmi, Thomas Brunschwiler, Shawn Kefauver
Orals
| Thu, 07 May, 08:30–12:30 (CEST), 14:00–15:45 (CEST)
 
Room 2.23
Posters on site
| Attendance Fri, 08 May, 10:45–12:30 (CEST) | Display Fri, 08 May, 08:30–12:30
 
Hall X1
Posters virtual
| Tue, 05 May, 14:54–15:45 (CEST)
 
vPoster spot 2, Tue, 05 May, 16:15–18:00 (CEST)
 
vPoster Discussion
Agriculture covers nearly one-third of the Earth’s land surface and plays a vital role in sustaining global food and fodder production. Yet, it is increasingly threatened by the impacts of climate change while still contributing to biodiversity loss, soil degradation, and other environmental issues. Policy frameworks and regulations at national and EU levels create incentives and commitment towards more sustainable management. However, meeting these challenges and needs requires innovative approaches that enhance agricultural resilience, efficiency, and sustainability while reducing the environmental footprint and safeguarding ecosystems.
Recent advances in Earth observation, environmental research infrastructures, monitoring networks (e.g., FLUXNET), and the growing availability of in-situ measurements and open data provide unprecedented opportunities to monitor, understand, and manage agroecosystems. Coupled with advancements in data science, machine learning, and process-based modelling, these tools enable the transition from observation to actionable solutions that support climate-resilient agriculture and sustainable land management.
This session welcomes contributions that explore and integrate diverse approaches—including (but not limited to):
• Large-scale mapping strategies of agroecosystem processes and dynamics.
• Integration of multi-modal remote sensing data (spectral, thermal, high-resolution RGB from current and future satellite constellations, UAVs, or airborne campaigns) with in-situ observations and environmental monitoring networks.
• Applications of machine learning, radiative transfer modelling, and hybrid approaches.
• Foundation model development and applications within agroecosystems and agriculture.
• Monitoring or modelling of soil–plant–water interactions and nutrient dynamics.
• Assessment of biotic and abiotic stresses, including water demand and evapotranspiration.
• Quantification of climate change impacts (e.g., biodiversity loss, hydrological extremes, soil degradation, ecosystem shifts) on agricultural systems and their resilience.
Contributions should be presented through a lens that aims not only to advance technical understanding, but also to demonstrate how these efforts translate into practical pathways for improving agroecosystem monitoring and management—across intensively and extensively managed crop and grassland systems—towards a more sustainable and climate-resilient future.

Orals: Thu, 7 May, 08:30–15:45 | Room 2.23

The oral presentations are given in a hybrid format supported by a Zoom meeting featuring on-site and virtual presentations. The button to access the Zoom meeting appears 15 minutes before the time block starts.
Chairpersons: Helge Aasen, Shawn Kefauver
08:30–08:35
UAV
08:35–08:45
|
EGU26-4124
|
ECS
|
Virtual presentation
Ehsan Chatraei Azizabadi and Nasem Badreldin

Understanding how canopy structure and plant nutritional status jointly regulate crop productivity remains a central challenge for precision agriculture, particularly when observations are limited to single growth stages. This study examines whether three-dimensional canopy information derived from unmanned aerial vehicle (UAV) LiDAR can be integrated with multispectral observations to improve spatial characterization of potato yield potential and nitrogen status under irrigated Prairie conditions.

Multispectral imagery and high-density UAV-LiDAR data were acquired at row closure across two growing seasons in southwestern Manitoba, Canada, spanning a controlled gradient of nitrogen availability. Rather than treating yield and nitrogen status as independent targets, we evaluated a joint learning framework in which both variables were estimated simultaneously from the same fused feature space. Multiple neural network architectures were compared under identical data partitions to isolate the effects of shared representation learning. Model interpretation was performed using attribution analysis to distinguish spectral versus structural feature dependence.

Joint learning substantially altered model behaviour. Yield estimation, which proved weak when optimized in isolation, improved markedly when trained alongside nitrogen status, indicating that shared canopy representations capture integrative growth signals not accessible through yield-only optimization. In contrast, nitrogen prediction exhibited limited or inconsistent benefit from joint learning, remaining primarily governed by chlorophyll-sensitive spectral information. Attribution results revealed that yield relied on a broader combination of spectral responses and LiDAR-derived structural descriptors, including canopy height distribution, volumetric development, and spatial heterogeneity, whereas nitrogen status remained physiologically localized within the spectral domain.
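The joint-learning mechanism this abstract describes, two traits regressed from one shared representation, can be illustrated with a deliberately minimal sketch (a generic multi-task setup on synthetic data, not the authors' architecture; all variable names and numbers here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a fused spectral + structural feature space.
n, d, h = 256, 12, 6
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y_yield = X @ w + 0.05 * rng.normal(size=n)             # cumulative trait
y_nitro = X[:, :4] @ w[:4] + 0.05 * rng.normal(size=n)  # spectrally driven trait

# One shared linear layer feeding two task-specific heads, trained
# jointly by gradient descent on the summed mean-squared error.
W = 0.1 * rng.normal(size=(d, h))   # shared representation
a_y = np.zeros(h)                   # "yield" head
a_n = np.zeros(h)                   # "nitrogen" head
lr = 0.01
for _ in range(2000):
    Z = X @ W                       # shared features
    e_y = Z @ a_y - y_yield
    e_n = Z @ a_n - y_nitro
    g_y = Z.T @ e_y / n
    g_n = Z.T @ e_n / n
    # Both tasks' errors backpropagate into the same shared weights W.
    g_W = X.T @ (np.outer(e_y, a_y) + np.outer(e_n, a_n)) / n
    a_y -= lr * g_y
    a_n -= lr * g_n
    W -= lr * g_W

rmse_yield = np.sqrt(np.mean((X @ W @ a_y - y_yield) ** 2))
rmse_nitro = np.sqrt(np.mean((X @ W @ a_n - y_nitro) ** 2))
```

Because both heads update the same W, error signals from one task shape the representation used by the other, which is the coupling the abstract reports as asymmetric: beneficial for yield, largely neutral for nitrogen.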

These results demonstrate that canopy structure provides complementary information for cumulative traits such as yield, even from single-date acquisitions, while offering limited leverage for physiologically proximal indicators like nitrogen concentration. More broadly, the study shows that multi-task learning does not uniformly enhance prediction accuracy but instead exposes how different agronomic traits are encoded across spectral and structural dimensions. This has direct implications for designing UAV-based decision support systems, where aligning sensing modalities, learning strategy, and crop physiology is critical for meaningful inference.

How to cite: Chatraei Azizabadi, E. and Badreldin, N.: Joint estimation of potato yield and nitrogen status using UAV-derived spectral and structural data, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-4124, https://doi.org/10.5194/egusphere-egu26-4124, 2026.

08:45–08:55
|
EGU26-2159
|
ECS
|
Virtual presentation
Florian J. Ellsäßer, Claudia Paris, Sosdito Mananze, Lourenço Manuel, and Andy Nelson

Reliable agricultural statistics support food security monitoring and evidence-based decision making. In Mozambique, official agricultural statistics are primarily derived from the Integrated Agricultural Survey (IAI), an enumerator-based field survey that provides essential contextual information on agricultural production but remains labour-intensive, costly and spatially and temporally constrained, particularly in remote rural areas. While satellite remote sensing offers complementary, wall-to-wall coverage, its spatial resolution is often insufficient to directly capture the fragmented fields, mixed and intercropping patterns, shifting cultivation and strong sub-field variability typical of smallholder farming systems. Consequently, consistent estimation of crop area and crop type derived from enumerator-based crop cover assessments remains challenging in these landscapes.

This study investigates the potential of high-resolution multispectral data acquired with Uncrewed Aerial Vehicles (UAVs) to complement field surveys by providing spatially explicit and internally consistent crop cover and crop fraction estimates at the field and sub-field scale. By resolving individual crops and dominant intercropping systems, UAV-based observations support the interpretation of farmer-reported crop cover proportions, improve consistency across enumerators, and enable post-survey correction of crop area estimates, while providing a basis for future integration with coarser-resolution satellite remote sensing. High-resolution RGB and multispectral imagery (green, red, red edge, and near-infrared; ≤5 cm ground sampling distance) was collected using a DJI Mavic 3M with RTK over 30 sampling areas of 500 × 500 m in Manica Province during the 2025 agricultural season. In parallel, a field survey recorded standardized observations of agricultural activity, including crop type (of most field and tree crops), intercropping combinations and enumerator-based estimates of fractional crop cover. UAV images were processed using a workflow tailored to heterogeneous smallholder landscapes to produce orthomosaics, digital surface models (DSMs), and vegetation indices. These products were linked to field observations through segments representing relatively homogeneous land units, enabling direct comparison between UAV-derived and survey-based crop cover estimates.

For crop classification, training polygons were delineated on RGB orthomosaics for single-crop fields (e.g. maize, beans, sorghum and cassava) and common intercropping combinations (e.g. maize–beans). Annotated mosaics were tiled and augmented and used to train convolutional neural network models (e.g. UNet++), incorporating multispectral vegetation indices and DSM-derived height information as additional input channels. Model performance was evaluated using Intersection over Union, Dice coefficients, and regression metrics for fractional cover accuracy.
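For reference, the two overlap metrics named above are computed from binary masks as follows (a minimal generic sketch; the toy masks are invented for illustration):

```python
import numpy as np

def iou(pred, truth):
    """Intersection over Union for boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

def dice(pred, truth):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2 * inter / total if total else 1.0

# Toy 1-D "masks": prediction covers 2 of the 3 true pixels plus 1 false alarm.
p = np.array([1, 1, 0, 0, 1], dtype=bool)
t = np.array([1, 1, 1, 0, 0], dtype=bool)
# iou(p, t) == 2/4 == 0.5; dice(p, t) == 4/6 ≈ 0.667
```

The two are monotonically related (Dice = 2·IoU / (1 + IoU)), so they rank segmentations identically but Dice is always the more forgiving number.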

A comparison framework was implemented to relate UAV-derived crop type, crop combinations and fractional cover to field survey observations while explicitly accounting for measurement uncertainty. Model II regression quantified systematic bias and proportional differences between the two methods. Initial results indicate that UAV-derived estimates provide spatially consistent crop cover information in fields with complex intercropping structures. Ongoing work focuses on refining segmentation accuracy, analysing residual discrepancies and assessing how UAV-derived crop cover information can be integrated to expand the spatial coverage and reliability of agricultural statistics in smallholder landscapes.
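Model II regression, unlike ordinary least squares, treats both variables as error-prone. One common variant is reduced major axis (RMA), whose slope is the ratio of standard deviations signed by the correlation; a minimal sketch under that assumption (the data here are synthetic, not the study's measurements):

```python
import numpy as np

def rma_regression(x, y):
    """Reduced major axis (a common Model II) fit:
    slope = sign(r) * sd(y) / sd(x), intercept through the means."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Hypothetical survey-based (x) vs UAV-derived (y) cover fractions,
# with a proportional bias of 0.9 and a constant offset of 0.05.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 100)
y = 0.9 * x + 0.05 + 0.02 * rng.normal(size=100)
slope, intercept = rma_regression(x, y)
```

A slope away from 1 indicates proportional disagreement between the two methods; an intercept away from 0 indicates systematic offset, which is exactly the decomposition the abstract uses.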

How to cite: Ellsäßer, F. J., Paris, C., Mananze, S., Manuel, L., and Nelson, A.: Supporting agricultural statistics through multispectral UAV-based crop cover mapping in complex smallholder farming systems in Mozambique, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-2159, https://doi.org/10.5194/egusphere-egu26-2159, 2026.

08:55–09:05
|
EGU26-20230
|
On-site presentation
Assaf Chen, Yotam Nagar, and Mery Dafny-Yelin

Mango inflorescence malformation, caused by Fusarium mangiferae, represents a major constraint to sustainable mango production worldwide, leading to severe yield losses, reduced fruit set, and long-term reinfection of orchards. Current mitigation strategies rely on labor-intensive sanitation and fungicide applications, which are costly, environmentally burdensome, and often only partially effective and insufficiently timed. There is therefore a critical need for scalable, data-driven tools that enable early, accurate, and spatially explicit detection of disease hotspots within orchards.

In this study, we develop and evaluate an automated detection framework that integrates high-resolution Earth observation data with deep learning to identify malformed mango inflorescences at the canopy and tree level. RGB imagery was collected across multiple seasons (2022–2025) using complementary sensing platforms, including UAVs and ground-based imaging, covering three commercially important cultivars (‘Keitt’, ‘Lilly’, and ‘Kent’) and multiple phenological stages. Semantic segmentation models based on an enhanced U-Net architecture with a ResNet decoder were trained to discriminate healthy and malformed inflorescences at the pixel level, enabling fine-scale disease mapping under heterogeneous field conditions.

Results from the 2025 season demonstrate that millimetric ground-based imagery (0.19–0.68 mm pixel size) enables highly accurate detection of malformation at peak flowering, with average precision exceeding 90% and F1-scores above 0.85 for the disease-sensitive ‘Keitt’ and ‘Lilly’ cultivars. Importantly, incorporating multi-year data and balancing validation datasets significantly improved model robustness and generalization. For the first time, meaningful detection performance from UAV imagery was achieved (up to 71% and 87% precision for malformed and healthy inflorescences, respectively), indicating strong potential for operational orchard-scale monitoring. Cross-cultivar evaluation further revealed partial generalization to ‘Kent’, a cultivar unseen during training, highlighting both the promise and current limits of model transferability.
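The precision and F1 values reported above combine in the usual way from confusion counts; for reference (generic formulas with invented example counts, not the authors' evaluation code):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from true/false positive and false
    negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# e.g. 90 correctly flagged malformed inflorescences, 10 false alarms,
# 15 misses -> precision 0.9, recall ~0.857, F1 ~0.878
p, r, f = precision_recall_f1(90, 10, 15)
```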

Beyond detection accuracy, this work delivers key operational insights: disease recognition is highly sensitive to spatial resolution and phenological timing, and segmentation-based approaches provide a strong foundation for precision sanitation, infestation quantification, and decision support. Future work will focus on instance segmentation for whole-inflorescence detection, early-stage disease identification prior to peak bloom, improved cross-cultivar generalization, and integration with UAV- and robot-assisted sanitation workflows. Overall, the study demonstrates how AI-driven Earth observation can support sustainable agroecosystem management by directing sanitation efforts to affected orchard zones, verifying their effectiveness, and enabling disease monitoring during periods of limited field activity, ultimately reducing chemical inputs, labor demands, and pathogen spread.

How to cite: Chen, A., Nagar, Y., and Dafny-Yelin, M.: Data-Driven Detection of Mango Inflorescence Malformation Using Remote Sensing and Deep Learning for Precision Agroecosystem Management, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-20230, https://doi.org/10.5194/egusphere-egu26-20230, 2026.

09:05–09:15
|
EGU26-21351
|
ECS
|
On-site presentation
Tjark Schütte, Sascha Kontetzki, and Thomas Hänel

Vision–language models (VLMs) are increasingly used for the semantic interpretation of visual data, enabling flexible, open-vocabulary analysis of images based on natural language descriptions. These capabilities offer new opportunities for large-scale semantic mapping, particularly in domains where comprehensive labeled training data are scarce or difficult to obtain, such as agricultural and horticultural environments.

Recent research has explored the transfer of semantic information from 2D imagery into three-dimensional representations, a process often called semantic lifting. This approach is attractive for outdoor scene understanding, as training native 3D vision–language models that generalize across landscapes and management regimes remains challenging, and tooling for 3D data therefore lags behind the 2D domain. However, most existing studies on semantic lifting focus on indoor environments or urban outdoor scenes, while agricultural landscapes—with their distinct structural characteristics, vegetation dynamics, and management patterns—remain underexplored.

In this contribution, we investigate the applicability of open-vocabulary, VLM-based semantic lifting for large-scale 3D semantic mapping in agricultural settings. Building on insights from urban-scale benchmarks, we analyze how vision–language-driven semantic segmentation transfers to outdoor agricultural and horticultural scenes reconstructed from multi-view UAV imagery. Our results highlight both the potential of these models to generate spatially consistent semantic representations and their limitations, which are strongly dependent on land cover type and semantic classes.

We discuss how such preliminary semantic 3D representations can support large-scale agroecosystem mapping and serve as an initial layer for downstream applications, including spatial analysis and the deployment of agricultural robotic systems. The findings provide guidance on the opportunities and current constraints of foundation-model-based semantic mapping for sustainable agricultural monitoring.

How to cite: Schütte, T., Kontetzki, S., and Hänel, T.: Potentials and Limitations of Vision-Language Models for Large-Scale 3D Semantic Mapping in Agricultural Environments, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21351, https://doi.org/10.5194/egusphere-egu26-21351, 2026.

09:15–09:25
|
EGU26-17039
|
ECS
|
On-site presentation
Mingxia Dong, Frédéric Baret, Marie Weiss, Yanfeng Ding, Linyuan Li, and Shouyang Liu

Unmanned aerial vehicle (UAV) remote sensing plays an increasingly important role in crop phenotyping and precision agriculture. As the Green Area Index (GAI) is one of the main crop characteristics desired for crop management or plant selection, several retrieval algorithms have been proposed from multispectral observations. The inputs of these retrieval algorithms can be the spectral radiance, or the spectral reflectance derived either from calibration over a reference panel (PanelCal) or from a Downwelling Light Sensor (DLS) aboard the UAV. However, variability in illumination conditions during UAV flights introduces pronounced artifacts, leading to unreliable inputs to the retrieval algorithms that degrade the accuracy of GAI estimates.

In this study, we propose a Spectral Normalization for Illumination Invariant Calibration (SNIC) method that aims at eliminating the artefacts introduced in the retrieval algorithms when the illumination conditions are changing during the flight of a multispectral camera aboard a UAV.

A Digital Plant Phenotyping Platform (D3P) coupled with a three-dimensional radiative transfer model was employed to simulate wheat canopy reflectance and GAI across a wide range of illumination scenarios. The simulated datasets provide a physically consistent benchmark for evaluating the robustness of different radiometric calibration strategies under varying illumination conditions during the UAV flight. Our model-driven GAI retrieval approach is based on XGBoost (eXtreme Gradient Boosting) regression. Four calibration strategies—Radiance, PanelCal, DLS, and SNIC—were then systematically assessed in terms of GAI retrieval performance.

This in-silico experiment demonstrates that SNIC substantially reduces the sensitivity of GAI retrieval to illumination variability, whereas PanelCal exhibits pronounced degradation under fluctuating illumination conditions. Validation against 4,000 in situ measurements collected under diverse weather conditions further confirms that SNIC is resistant to changes in illumination conditions. The radiance-based method also performs well. Conversely, the reflectance-based methods suffer from severe limitations under such conditions (PanelCal) or from artefacts introduced by the DLS sensor.

How to cite: Dong, M., Baret, F., Weiss, M., Ding, Y., Li, L., and Liu, S.: SNIC: A spectral normalization resistant to illumination conditions for robust estimates of GAI from UAV multispectral measurements, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-17039, https://doi.org/10.5194/egusphere-egu26-17039, 2026.

09:25–09:35
|
EGU26-10454
|
ECS
|
On-site presentation
Marcel El Hajj, Kasper Johansen, Fabio Camargo, Oliver Lopez Valencia, Yu-Hsuan Tu, Victor Angulo Morales, Omar A. López Camargo, Samer K. Al Mashaharawi, Dominique Courault, and Matthew F. McCabe

Monitoring crop conditions is crucial for effective crop management and provides valuable insights into soil-plant-atmosphere interactions. While some studies have used unmanned aerial vehicle (UAV)-based light detection and ranging (LiDAR) data for mapping plant area index (PAI) in orchards, LiDAR-based time-series analysis to assess PAI variations with phenology throughout the growing season represents a significant gap in knowledge. Tracking PAI dynamics across phenological stages reflects canopy development and leaf expansion, which are directly linked to yield formation. Furthermore, the optimal spatial resolution for mapping biophysical variables of tree crops from LiDAR point clouds is yet to be determined. This study aimed to demonstrate the potential of UAV-derived LiDAR time series to monitor the PAI and tree vertical profiles at high spatial resolution throughout the growing season of a cherry orchard located in southeastern France. A time series of 14 point cloud acquisitions with a density of 3300 points/m² was collected between February and December 2022, with at least one acquisition per month, covering all phenological stages of the cherry orchard. Field measurements were collected on May 30 and October 6 to measure the PAI at twilight using an LAI-2200C Plant Canopy Analyzer (LI-COR Biosciences, Lincoln, NE, USA), with 248 trees sampled. A voxel-based method was applied to the LiDAR point cloud data to create a three-dimensional grid within which PAI was estimated for each voxel. The results showed that a voxel size of at least 70 cm is required to retrieve reliable PAI estimates, while a voxel size of 100 cm produced the most accurate PAI estimates (RMSE = 0.5 m²/m², bias = 0.07, R² = 0.59) when assessed against in-situ PAI measurements.
The temporal variation of canopy PAI illustrated the progression of the phenological stages, including flowering, leaf development, ripening and senescence, and the response of the canopy to drought stress (a reduction in PAI due to leaf rolling) during the summer. The maps of PAI successfully described the variations in leaf canopy density for different cherry varieties and allowed assessment of the vertical PAI profile at the individual tree level. The LiDAR-derived PAI maps and vertical profiles were able to detect trees exhibiting poor leaf development, an important health indicator for effective crop management in orchard settings. Future work should focus on applying UAV-derived observations to optimize crop models and enhance decision-making tools for effective orchard management.
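For readers unfamiliar with the voxel approach, the gridding step can be sketched as follows (a generic illustration with made-up points; the PAI retrieval itself, e.g. a per-voxel contact-frequency model, is not shown and would build on these counts):

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map 3-D points of shape (n, 3) to integer voxel indices and
    count the LiDAR returns falling in each occupied voxel."""
    idx = np.floor(points / voxel_size).astype(int)
    voxels, counts = np.unique(idx, axis=0, return_counts=True)
    return voxels, counts

# Toy point cloud: 4 returns, 1 m voxels -> two occupied voxels.
pts = np.array([[0.2, 0.3, 1.1],
                [0.8, 0.1, 1.9],
                [0.5, 0.5, 1.5],
                [2.4, 0.2, 0.7]])
vox, cnt = voxelize(pts, 1.0)
```

Shrinking `voxel_size` increases spatial detail but leaves fewer returns per voxel, which is the resolution/reliability trade-off behind the 70–100 cm result reported above.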

How to cite: El Hajj, M., Johansen, K., Camargo, F., Lopez Valencia, O., Tu, Y.-H., Angulo Morales, V., López Camargo, O. A., Al Mashaharawi, S. K., Courault, D., and McCabe, M. F.: High-Resolution Plant Area Index Estimation in Cherry Orchards Using UAV LiDAR for Agroecosystem Monitoring, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-10454, https://doi.org/10.5194/egusphere-egu26-10454, 2026.

09:35–09:45
|
EGU26-10845
|
ECS
|
Virtual presentation
Tianyi Jia, Magdalena Smigaj, Gert Kootstra, and Lammert Kooistra

Pest and pathogen pressure in potato cultivation is increasingly affecting potato quality and yield. The Netherlands, the world’s largest seed potato producer, is particularly threatened by blackleg disease and potato virus Y (PVY). Uncrewed aerial vehicle (UAV)-based imaging combined with machine- and deep-learning methods has shown clear potential for potato disease identification, offering advantages over conventional human inspections, which are labor-intensive, expertise-demanding, and often subjective. Most existing studies have focused on RGB data and pixel-level classification, producing maps that have limited practical value for targeted removal of infected plants. Earlier work demonstrated the potential of plant-level disease detection approaches. For example, Jia et al. [1] employed hyperspectral data (specifically the first three principal component analysis (PCA) bands) with a YOLOv5s model to distinguish blackleg- and PVY-infected plants from healthy ones, yielding average mAP@.50 scores of 0.85 for blackleg detection and 0.82 for PVY detection. Gibson-Poole et al. [2] applied object-based image analysis (OBIA) to detect blackleg disease with RGB imagery, achieving a total accuracy of 87%. These findings suggest that multi-modal data (combining hyperspectral and RGB imagery) hold strong potential for plant-level disease detection. We aim to identify the most informative features derived from hyperspectral data and to investigate their integration with RGB data to enhance potato disease detection performance.

We proposed early fusion (E), where data were concatenated channel-wise before network input, and middle fusion (M) architectures, where features were extracted separately within a two-branch network and then merged at an intermediate stage, to integrate hyperspectral features and RGB imagery for potato disease detection. To reduce hyperspectral dimensionality, two feature sets were extracted: (i) the first three PCA bands, and (ii) 10 vegetation indices (VIs) selected from 64 candidates using variance inflation factor analysis to mitigate multicollinearity. Consequently, four models were developed and evaluated: E-PCA-RGB, E-VI-RGB, M-PCA-RGB, and M-VI-RGB. Unlike previous studies that focused on a single disease, our models detected blackleg-infected, PVY-infected, and healthy plants simultaneously. E-VI-RGB achieved the highest mAP@.50 value of 86.65±1.53, followed by M-VI-RGB (85.74±1.75). E-PCA-RGB and M-PCA-RGB yielded mAP@.50 scores of 83.00±2.52 and 83.11±2.20, respectively. These results demonstrate that combining hyperspectral features with RGB imagery improves detection performance compared with single-modality approaches (RGB 83.21±1.31, PCA 79.71±1.30, VIs 85.31±2.11). Our findings highlight the potential of multimodal fusion for potato disease detection in practice. The methods could enable automated systems not only to identify infected plants but also to support timely removal with machinery, mitigating the spread of disease in potato fields. The generalizability of our approach will be further tested and analyzed in future work.
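The variance-inflation-factor screening used above to prune the candidate vegetation indices can be sketched generically as follows (synthetic data; the common exclusion convention of VIF > 10 is an assumption here, not necessarily the authors' threshold):

```python
import numpy as np

def vif(X):
    """Variance inflation factor per column: 1 / (1 - R²), where R² comes
    from regressing (least squares, with intercept) each feature on all
    the others. Large values flag near-collinear features."""
    X = np.asarray(X, float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        A = np.delete(X, j, axis=1)
        A = np.column_stack([A, np.ones(len(A))])   # intercept term
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / max(1.0 - r2, 1e-12))
    return np.array(out)

# Three mock "indices": column 2 nearly duplicates column 0.
rng = np.random.default_rng(2)
a = rng.normal(size=300)
b = rng.normal(size=300)
X = np.column_stack([a, b, a + 0.01 * rng.normal(size=300)])
v = vif(X)
# v[0] and v[2] blow up (collinear pair); v[1] stays near 1.
```

Dropping whichever member of each high-VIF pair carries less standalone information yields a compact, decorrelated index set like the 10-of-64 selection described above.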

References

[1] Jia, T., Smigaj, M., Kootstra, G. and Kooistra, L., 2024. Detection of Diseased Potato Plants with UAV Hyperspectral Imagery. In 2024 14th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS) (pp. 1-5). IEEE.

[2] Gibson-Poole, S., Humphris, S., Toth, I. and Hamilton, A., 2017. Identification of the onset of disease within a potato crop using a UAV equipped with un-modified and modified commercial off-the-shelf digital cameras. Advances in Animal Biosciences, 8(2), pp. 812–816.

How to cite: Jia, T., Smigaj, M., Kootstra, G., and Kooistra, L.: Fusing UAV-based hyperspectral and RGB imagery for potato plant disease detection, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-10845, https://doi.org/10.5194/egusphere-egu26-10845, 2026.

09:45–09:55
|
EGU26-22080
|
ECS
|
Virtual presentation
Sushma Katari and Sami Khanal

Monitoring soybean growth provides critical insights for farmers, enabling them to closely track crop development and implement proactive management practices that ultimately enhance yields. Inefficient management and excessive chemical use not only reduce efficiency but also result in significant environmental consequences, including water contamination and increased greenhouse gas (GHG) emissions. These environmental impacts degrade soil health, disrupt weather patterns, and contribute to issues such as soil nutrient depletion and irregular precipitation, all of which have direct, adverse effects on agricultural productivity. Integrating various sensor data, such as satellite and small Unmanned Aerial System (sUAS) data, with machine learning (ML) offers a pathway to precise soybean growth monitoring. This pathway enables farmers to make data-driven decisions that reduce the need for field scouting while improving resource efficiency. Though recent studies have begun to explore field-level, precise growth monitoring using sUAS and satellite imagery, in-depth research on their integration strategies is necessary to develop practical, cost-effective methods for accurately estimating soybean phenological stages. In this study, a comprehensive analysis of soybean growth was conducted across early vegetative to reproductive stages using ML and multi-sensor methods. The selected soybean fields are located at three Ohio State Agricultural Research Stations, which are geographically dispersed across Ohio, USA. Using a fixed-wing Wingtra sUAS, high-resolution optical images of soybean fields were collected from 2023 to 2025. To determine whether simple machine learning or complex deep learning methods perform better, multiple combinations of these models with sUAS and satellite imagery were trained, and their performance was evaluated.
Best model performance was observed with the Vision Transformer (ViT) model on sUAS images, which detected soybean growth stages with an average Root Mean Squared Error (RMSE) of 0.7. The poorest performance was observed with the Random Forest model on open-source Sentinel-2 images, with an RMSE of 3.1. Closer investigation of well- and poorly performing growth stages showed that early growth stages were captured well only with sUAS data (RMSE < 1), while for later reproductive stages (beyond R2) satellite imagery performed relatively well (RMSE < 1). This indicates that using sUAS during the early growth phase and satellites during the late growth phase can be a promising approach for the future. This strategy enables farmers to make data-driven decisions that optimize growth monitoring and resource use, reduce waste, and minimize environmental impacts.

How to cite: Katari, S. and Khanal, S.: AI-Driven Insights from Multimodal Data for Optimized Soybean Growth Monitoring, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-22080, https://doi.org/10.5194/egusphere-egu26-22080, 2026.

09:55–10:05
|
EGU26-2432
|
solicited
|
On-site presentation
Tarin Paz-Kagan, Lior Fine, Adi Edri, Avraham Atanelov, Nechama Z. Brickner, and Offer Rozenstein

Accurate, timely, and spatially consistent crop maps are a cornerstone of sustainable agricultural management, climate adaptation, and evidence-based policy. Yet national-scale crop mapping remains challenging in heterogeneous agroecosystems due to fragmented field structures, dynamic land use, and the contrasting spatiotemporal characteristics of annual crops and perennial orchards. Addressing these pressures requires scalable, data-driven approaches that translate advances in Earth observation (EO), data science, and modelling into actionable tools for climate-resilient agroecosystem management. Here we present an integrated, end-to-end crop-mapping framework that synthesizes complementary methodological advances to enable robust, operational monitoring of agricultural systems across space, time, and crop types. Using Israel as a national-scale case study representative of heterogeneous, intensively and extensively managed agroecosystems, the framework links fine-scale field structure, crop phenology, and multi-year dynamics to support decision-making under climatic variability. First, national cadastral parcel layers are refined into agronomically homogeneous field units using deep learning-based semantic segmentation (U-Net, DeepLabV3, and SegFormer) and foundation models (SAM), addressing a critical limitation of registry-based agricultural databases. A U-Net architecture outperformed SegFormer and DeepLabV3, achieving a mean Intersection-over-Union (IoU) of 0.76 with balanced precision-recall. At the national scale, polygon correctness improved from 75.16% to 86.37%, resulting in tens of thousands of fields segmented into homogeneous management units. This step substantially improves geometric consistency and the reliability of downstream crop classification and agroecosystem analysis. 
Second, a hierarchical, multi-sensor classification strategy integrates Sentinel-1 SAR and Sentinel-2 multispectral time series with phenological metrics and expert-driven feature selection to map agricultural land use and dynamically classify annual field crops across multiple growing seasons. XGBoost achieved the highest land-cover accuracy (OA = 0.909), driven primarily by vegetation, moisture, and chlorophyll-sensitive indices (NDVI, MCARI, NDMI, PGHI). For detailed row-crop mapping, deep learning models outperformed traditional machine learning (TabM OA = 0.861). Multi-satellite fusion ensured robustness and transferability, yielding an average leave-one-year-out accuracy of 0.833. This integration captures crop rotations, seasonal shifts, and climate-driven phenological gradients, enabling consistent multi-year monitoring in dryland and Mediterranean environments. Third, perennial orchard systems, often underrepresented in national crop statistics, are mapped using a multimodal fusion approach that combines very-high-resolution (VHR) aerial imagery with multi-temporal Sentinel-1/2 data. Deep learning architectures jointly exploit fine-scale spatial structure and phenological dynamics, achieving the highest performance across all evaluation settings (same-year OA = 0.890 ± 0.009; cross-year OA = 0.881 ± 0.014), with particularly strong gains for early-stage and sparsely vegetated orchards. Overall, the framework is designed for scalability, interpretability, and operational deployment, demonstrating how multi-modal remote sensing, deep learning, and hierarchical modelling can bridge scientific innovation and real-world agricultural decision-making under climate change.

How to cite: Paz-Kagan, T., Fine, L., Edri, A., Atanelov, A., Brickner, N. Z., and Rozenstein, O.: An Integrated Multi-Sensor Framework for National-Scale Crop Mapping and Climate-Resilient Agricultural Monitoring, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-2432, https://doi.org/10.5194/egusphere-egu26-2432, 2026.

10:05–10:15
|
EGU26-5216
|
ECS
|
On-site presentation
Mengshuai Wang, Linjia Yao, Bin Chen, Zhiming Xia, Bo Pang, Zhijian He, Yingnan Wei, Genghong Wu, Qiang Yu, and Gang Zhao

Plant density at the wheat emergence stage is a fundamental structural attribute of agroecosystems, exerting strong control on early competition, resource use efficiency, and yield formation. While UAV-based counting approaches have been widely explored for visually distinct crops such as maize and cotton, accurate and scalable estimation of wheat seedlings remains challenging due to their small size, high spatial density, and spectral similarity to soil and residue backgrounds. Moreover, existing RGB-based UAV and ground imaging approaches face an inherent trade-off between spatial resolution, spectral sensitivity, and operational efficiency.

Here, we propose MS²‑Net (Multi-altitude, Multispectral Seedling Network), a high-throughput Earth-observation framework that integrates multi-altitude multispectral UAV observations with deep learning to enable robust estimation of wheat plant density at the emergence stage. Field experiments were conducted across three major wheat-growing regions in China (Henan, Hebei, and Shaanxi), covering approximately 1,500 plots spanning large variability in sowing density, genotype, and early growth conditions. Multispectral UAV imagery (blue, green, red, red-edge, and near-infrared) was acquired at four flight altitudes (12, 15, 20, and 40 m), enabling systematic evaluation of the trade-off between spatial detail and mapping efficiency. High-resolution smartphone images collected synchronously at plot level provided accurate reference plant counts for model training and validation.

All UAV data were radiometrically calibrated to surface reflectance and used to derive conventional vegetation indices (NDVI, GNDVI, NDRE, OSAVI, and a red-edge chlorophyll index) for spectral interpretability. Wheat plant density was estimated using a deep regression framework built on an EfficientNet-B6 backbone and enhanced with spectral-aware adaptation, spatial attention, and scale-consistent feature learning, allowing MS²-Net to exploit both multispectral information and multi-scale spatial patterns. Across five-fold cross-validation over regions and flight altitudes, MS²-Net achieved robust density estimation (R² = 0.86, RMSE = 37.20 plants m⁻², averaged across sites and flight altitudes), with red-edge and near-infrared bands contributing substantially to model stability across observation scales.
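The vegetation indices listed above (NDVI, GNDVI, NDRE, OSAVI variants) are built on the same normalized-difference form; a minimal sketch with illustrative reflectance values:

```python
def normalized_difference(a, b):
    """Generic normalized-difference index, e.g. NDVI = ND(NIR, red)."""
    return (a - b) / (a + b)

# Toy surface reflectances for one well-vegetated pixel (illustrative values)
nir, red, red_edge = 0.45, 0.08, 0.30

ndvi = normalized_difference(nir, red)
ndre = normalized_difference(nir, red_edge)  # red-edge variant
print(round(ndvi, 3), round(ndre, 3))  # 0.698 0.2
```

OSAVI and chlorophyll indices add soil-adjustment or scaling terms, but follow the same band-ratio logic.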

Results demonstrate that multi-altitude multispectral UAV observations provide a practical balance between spatial resolution, spectral sensitivity, and survey efficiency, outperforming both ground-based imaging and RGB-only UAV approaches for early wheat stand assessment. By enabling rapid, field-scale and spectrally informed plant density mapping, MS²-Net provides a scalable pathway for operational agroecosystem monitoring, high-throughput phenotyping, and precision crop management under real field conditions.

How to cite: Wang, M., Yao, L., Chen, B., Xia, Z., Pang, B., He, Z., Wei, Y., Wu, G., Yu, Q., and Zhao, G.: MS²-Net: A deep learning framework for high-throughput assessment of wheat emergence-stage plant density using multi-altitude multispectral UAV imagery, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-5216, https://doi.org/10.5194/egusphere-egu26-5216, 2026.

Coffee break
Chairpersons: Stefan Erasmi, Helge Aasen
Monitoring
10:45–10:55
|
EGU26-10857
|
ECS
|
solicited
|
On-site presentation
Johannes Löw, Christopher Conrad, Steven Hill, Tobias Ullmann, and Insa Otte

This study presents a novel framework for monitoring crop phenology using Sentinel-1 (S1) time series data. The proposed approach establishes explicit links between landscape-scale vegetation patterns and field-level phenological developments to address three key objectives: evaluating the agreement between field and landscape phenological signals, identifying dominant phenological tendencies at the field scale, and detecting phenological outliers. Two core indicators were developed—Average Agreement (AVA), which quantifies the correspondence between individual field dynamics and overall landscape development, and Dominance of Tendency (DoT), which characterizes whether fields are phenologically ahead or behind the broader landscape trend, while assessing the consistency of these tendencies across multiple S1 features and orbits.
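The Average Agreement idea can be sketched as the correlation of each field's time series with the landscape mean (toy numbers; not the authors' implementation):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equally sampled time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy Sentinel-1 backscatter series (dB) for three fields over five dates
fields = {
    "A": [-15.0, -13.5, -11.0, -9.5, -10.5],
    "B": [-14.8, -13.0, -10.5, -9.0, -10.0],
    "C": [-10.0, -11.5, -13.0, -14.5, -15.0],  # out of step with the rest
}
landscape = [sum(v) / len(v) for v in zip(*fields.values())]

# Agreement of each field with the landscape-scale development
agreement = {name: pearson(series, landscape) for name, series in fields.items()}
print({k: round(v, 2) for k, v in agreement.items()})
```

Field C would stand out as a phenological outlier; the DoT indicator additionally characterizes whether such a field runs ahead of or behind the landscape trend.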

Environmental descriptors, including soil organic carbon, topographic wetness index, and elevation, were found to shape the spatial and temporal variability of both indicators. Although no single dominant driver was identified, random forest analyses achieved an R² of 0.8, highlighting the complex, multifactorial nature of phenological processes. By integrating growing degree day (GDD) information and S1 time series metrics, the framework reduces reliance on extensive in situ measurements while enabling robust field-scale characterization of phenological progression.

Results show that combining outlier detection with cross-scale comparisons provides valuable insights into typical and atypical crop behavior, supporting assessments of climate vulnerability, resilience, and adaptive management strategies. The flexibility of the method allows seamless application across various S1 features, acquisition geometries, and crop types, demonstrating strong potential for upscaling to regional or national monitoring as well as for broader studies of phenological dynamics.

This work establishes a data-driven pathway toward advanced agricultural management by linking temporal S1 observations with crop performance indicators, thereby enhancing informed decision-making in a sector increasingly challenged by climate change.



How to cite: Löw, J., Conrad, C., Hill, S., Ullmann, T., and Otte, I.: Phenological Alignment and Divergence in Agricultural Systems Derived from Sentinel-1, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-10857, https://doi.org/10.5194/egusphere-egu26-10857, 2026.

10:55–11:05
|
EGU26-18032
|
ECS
|
On-site presentation
Thomas Lauber, Mehmet Ozgur Turkoglu, Dominik Senti, and Helge Aasen

Accurate and spatially explicit information on the condition of agricultural landscapes is essential for monitoring developments and advancing management practices in agricultural systems. A real-world example is the Swiss national greenhouse gas inventory, which requires reliable, high-resolution crop maps. The inventory currently relies on crop distribution data aggregated at the municipality level, limiting the ability to capture spatial differences. Future inventories aim to transition toward fully spatially explicit representations, requiring robust, high-resolution crop type maps.

 In this work, we generate national-scale distribution maps for 36 crop types and 6 grassland classes across Switzerland using satellite image time series. We employ an attention-based deep learning model trained on the “Swiss Crops” dataset, which is annotated from farmer declarations and contains 9.3M polygons (8.7M ha) covering the years 2019-2024. To ensure robustness under real-world conditions, we train models on temperature-informed samples in a cross-year setting and evaluate their ability to generalize to unseen years. This explicitly addresses inter-annual variability in crop development driven by climatic fluctuations and management practices. Preliminary results show F1-scores above 0.85 for most majority crops and above 0.7 for most minority crops. Meadow intensity classes (intensive vs. extensive) can be reliably distinguished (F1 ≈ 0.80 and 0.65), while performance in distinguishing pasture intensity remains limited. 
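The per-class F1-scores quoted above combine precision and recall; a minimal sketch on toy held-out-year labels (illustrative only):

```python
def f1_score(y_true, y_pred, positive):
    """Per-class F1: harmonic mean of precision and recall."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy predictions for an unseen year, two crop classes
truth = ["wheat", "wheat", "maize", "maize", "maize"]
preds = ["wheat", "maize", "maize", "maize", "maize"]
print(round(f1_score(truth, preds, "wheat"), 2))  # 0.67
print(round(f1_score(truth, preds, "maize"), 2))  # 0.86
```

In the cross-year setting, such scores are computed per class on a year withheld from training.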

 Our results demonstrate that the proposed approach generalizes well throughout Switzerland and remains stable under substantial year-to-year variation, making it suitable for operational applications. All maps and labels will be made freely available, forming one of the largest national-scale, multi-year satellite benchmark datasets for crop classification and segmentation. The produced crop and grassland maps provide a key building block for spatially explicit greenhouse gas accounting and other agro-environmental assessments. 

How to cite: Lauber, T., Turkoglu, M. O., Senti, D., and Aasen, H.: National-Scale, Multi-Year Crop and Grassland Mapping from Satellite Image Time Series in Switzerland , EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18032, https://doi.org/10.5194/egusphere-egu26-18032, 2026.

11:05–11:15
|
EGU26-17433
|
On-site presentation
Selene Ledain, Anina Gilgen, and Helge Aasen

Soil erosion by water is a widespread environmental problem with significant impacts on soil fertility, crop productivity, and ecosystem sustainability. In Switzerland, up to 10% of arable land is at a higher erosion risk [1], primarily due to unadapted farming methods, and could benefit from control measures. The combination of susceptible terrains with disturbances from reworking the soil or low soil coverage can exacerbate erosion risk. Reliable, spatially explicit information on soil cover dynamics is therefore essential for identifying erosion-prone areas and supporting sustainable land management.

A commonly used framework to assess erosion risk in agricultural systems is the Revised Universal Soil Loss Equation (RUSLE), in which the crop cover and management factor (C-factor) represents the protective effect of vegetation and farming practices against soil loss. The C-factor varies over time as a function of crop growth, harvest, residue management, and bare-soil periods [2], making its accurate estimation challenging at large spatial scales. For arable land in Switzerland, the annual average erosion indicator computed within the national agri-environmental monitoring programme [3] is based on generic crop calendars and assumed field management practices, leading to inaccuracies in the representation of crop cover and on-field management.

The advent of satellite data provides large-scale access to frequent and high-resolution observations (e.g., a 5-day revisit and up to 10 m resolution for Sentinel-2) that enable continuous monitoring of land surface conditions. Fractional cover can be retrieved at pixel level using spectral mixture analysis (SMA), which decomposes the mixed satellite signal into proportions of soil, photosynthetic vegetation, and non-photosynthetic vegetation [4].

In this research, we present an automated framework for producing high-resolution, temporally consistent fractional cover maps over Switzerland. We first establish SMA-based regression models by constructing a representative dataset of pure photosynthetic vegetation, non-photosynthetic vegetation, and soil spectra from Sentinel-2 imagery, capturing the diversity of crop types, management practices, and soil conditions across the country. Synthetic spectral mixtures with known proportions of each cover type are created and used as a training dataset for neural network models. The trained models are then applied to Sentinel-2 data to generate nationwide fractional cover time series. We further post-process the outputs to reduce cloud contamination, enforce temporal consistency, and aggregate predictions to regular timestamps and administrative units.
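The synthetic-mixture idea rests on a linear mixing model; a sketch with toy endmember spectra (the actual workflow trains neural networks on such mixtures, and operational SMA typically adds sum-to-one and non-negativity constraints):

```python
import numpy as np

# Toy endmember spectra (rows: PV, NPV, soil; columns: four spectral bands)
E = np.array([
    [0.05, 0.08, 0.45, 0.50],   # photosynthetic vegetation (low VIS, high NIR)
    [0.20, 0.28, 0.33, 0.40],   # non-photosynthetic vegetation
    [0.28, 0.33, 0.36, 0.38],   # bare soil
])

# Synthetic mixture with known fractions, as used for training samples
true_fractions = np.array([0.6, 0.1, 0.3])
pixel = true_fractions @ E

# Recover the fractions by (unconstrained) least squares
estimated, *_ = np.linalg.lstsq(E.T, pixel, rcond=None)
print(np.round(estimated, 3))  # [0.6 0.1 0.3]
```

Here the mixture is noise-free, so the inversion is exact; real Sentinel-2 pixels require the regression models described above to cope with spectral variability.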

The resulting fractional cover product provides updated, spatially explicit inputs for C-factor estimation within the RUSLE framework, enabling up-to-date assessment of erosion risk at national scale. Beyond soil erosion modelling, the proposed approach offers a product for large-scale monitoring of vegetation and soil dynamics in agricultural landscapes.

[1] V. Prasuhn et al., “Der Agrarumweltindikator Erosionsrisiko, kulturspezifische C-Faktoren sowie eine Karte des aktuellen Erosionsrisikos der Schweiz,” Tech. rep., Agroscope, 2023.

[2] P. I. A. Kinnell, “Event soil loss, runoff and the Universal Soil Loss Equation family of models: A review,” Journal of Hydrology, 2010.

[3] A. Gilgen et al., “New approach to calculate agri-environmental indicators using greenhouse gas emissions in Switzerland as an example,” Pre-print, 10.2139/ssrn.5640831, 2025.

[4] F. Lobert et al., “Unveiling year-round cropland cover by soil-specific spectral unmixing of Landsat and Sentinel-2 time series,” Remote Sensing of Environment, 2025.

How to cite: Ledain, S., Gilgen, A., and Aasen, H.: Large-Scale, High-Resolution Fractional Cover Mapping from Sentinel-2 for Agri-Environmental Monitoring, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-17433, https://doi.org/10.5194/egusphere-egu26-17433, 2026.

11:15–11:25
|
EGU26-12956
|
ECS
|
On-site presentation
Arina Machine, Moya Burns, and Heiko Balzter

Agroforestry, the integration of trees on productive agricultural land (Mosquera-Losada et al., 2018), can be identified through remote sensing methods by the combination of land cover and tree cover maps. Previous work has classified agroforestry as agricultural land with greater than 5% tree cover (Lawson et al., 2025; Zomer et al., 2016).
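Under the >5% definition above, combining the two map types reduces to a simple per-field rule; a minimal sketch with hypothetical labels:

```python
def classify_field(land_cover, tree_cover_fraction, threshold=0.05):
    """Label agricultural land as agroforestry when per-field tree cover
    exceeds the threshold (5%, following Lawson et al., 2025; Zomer et al., 2016)."""
    if land_cover != "agriculture":
        return land_cover
    return "agroforestry" if tree_cover_fraction > threshold else "agriculture"

print(classify_field("agriculture", 0.12))  # agroforestry
print(classify_field("agriculture", 0.02))  # agriculture
print(classify_field("woodland", 0.90))     # woodland
```

The study's key question is how reliably the inputs to this rule (land cover class and tree cover fraction) can be derived from the available products.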

However, there exist several regional, European, and global land cover and tree cover products that could be suitable for agroforestry identification, and these products vary in resolution, data inputs, and production methodology. Our work benchmarked the performance of four land cover maps (Büttner et al., 2021; Karvatte et al., 2021; Schultz et al., 2025; UKCEH, 2022) and nine tree cover maps (Brandt et al., 2024; Copernicus, 2023, 2025; Hunter et al., 2025; Lang et al., 2023; Tolan et al., 2024; Weinstein et al., 2020) that were capable of mapping trees outside of woodlands. We evaluated the datasets’ ability to identify agroforestry on 25 agroforestry sites across the United Kingdom, spanning a mix of silvoarable and silvopastoral systems and a range of planting ages, densities, and species, as well as nearby agricultural (no trees) and woodland (no agriculture) control fields.

We found that a number of datasets used in previous studies underperformed when distinguishing agroforestry from control fields, and that the previously utilised pixel-based approaches were unsuitable for identifying agroforestry fields as a whole. Datasets with coarse resolutions (>10 m) often confused proximal small woodlands for trees within agricultural fields. Many datasets struggled to map trees in silvoarable systems, likely due to their linear arrangement differing from that of other trees outside of woodlands.

In addition, the majority of datasets were unable to identify agroforestry sites planted since 2000, suggesting a 20-year lag in identification. The only tree identification method capable of identifying young sites was the fine-tuning of the DeepForest tree detection model (Weinstein et al., 2020) with local data, suggesting a need for tree cover datasets that are capable of identifying seedlings and saplings.

We used the best-performing datasets (balanced accuracy 83–87%) to create a map of agroforestry, with quality flags to signify agreement between datasets. We conclude that 517,300 ha (3% of the UAA) are under agroforestry in the United Kingdom. Our map could have further use cases for calculating the uptake of agroforestry, as well as its benefits to people and nature, such as carbon storage, biodiversity impacts, farm income, and health of crops and livestock. We also identify a need for model training on agroforestry trees, to identify both young trees and those with complex planting arrangements.

How to cite: Machine, A., Burns, M., and Balzter, H.: Evaluating land and tree cover datasets for the identification of agroforestry in temperate Europe, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-12956, https://doi.org/10.5194/egusphere-egu26-12956, 2026.

11:25–11:35
|
EGU26-13294
|
ECS
|
On-site presentation
Lu Xu, Hong Zhang, Huadong Guo, Mingyang Song, Lijun Zuo, and Yazhe Xie

Global food security is facing increasing pressure from population growth and climate change. Rice, the staple food for over half of the world’s population, is essential to nutritional supply and social stability, especially in developing regions. Heavily dependent on water resources, rice production is highly sensitive to climate change and extreme events, and changes in rice cultivation in turn affect global carbon emissions. Therefore, timely, accurate, and high-resolution global rice distribution information is indispensable for agricultural management and hunger elimination to achieve Sustainable Development Goal 2 (Zero Hunger) of the United Nations.

With all-weather, day-and-night imaging and stable revisits, synthetic aperture radar (SAR) overcomes the acquisition uncertainty of optical remote sensing and provides a highly promising solution for the timely mapping of global rice cultivation. Deep learning models provide strong interpretability of rice scattering patterns and superior generalization capabilities across different agricultural scenarios. Combining advanced computing technology with big remote sensing data, we developed a Time-Series-to-Vision Rice Classification Model (T2VRCM). Instead of learning from remote sensing image stacks, T2VRCM learns the intrinsic feature variations during rice growth from standardized 2D visual representations, so that problems such as irregular sampling and insufficient modalities can be avoided. A novel global rice dataset, GlobalRice20, was produced, providing comprehensive and consistent global rice cultivation data in 2015 and 2024 at 20 m resolution for the first time. An overall accuracy of 92.33% was achieved through rigorous validation against over 160,000 reference samples, enabling promising spatiotemporal analysis over the first decade of the SDGs.
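One elementary building block of such standardized representations (illustrative only; not the authors' T2VRCM pipeline) is resampling each irregularly acquired series onto a regular temporal grid before rendering it as a 2D input:

```python
def resample_to_grid(times, values, grid):
    """Linearly interpolate an irregularly sampled series onto a regular
    temporal grid, clamping to the first/last observation outside the range."""
    out = []
    for t in grid:
        if t <= times[0]:
            out.append(values[0])
        elif t >= times[-1]:
            out.append(values[-1])
        else:
            i = next(k for k in range(len(times) - 1) if times[k] <= t <= times[k + 1])
            w = (t - times[i]) / (times[i + 1] - times[i])
            out.append(values[i] + w * (values[i + 1] - values[i]))
    return out

# Irregularly acquired SAR backscatter samples (day of year, dB)
days, vals = [5, 17, 40, 52], [-16.0, -14.0, -9.0, -11.0]
print([round(v, 2) for v in resample_to_grid(days, vals, [10, 20, 30, 40, 50])])
```

A fixed-length, regularly spaced series can then be stacked across features into the standardized 2D visual representation the model learns from.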

Our team has been dedicated to large-scale rice mapping using intelligent computation methods, advancing from national and regional to global scales. Starting with the classic U-Net model, we produced the first 20 m interannual rice maps for Southeast Asia (2019–2021) using time-series Sentinel-1 data. We then proposed an optical–SAR fusion strategy using stacked random forests to generate EARice10, a 10 m rice distribution product in 2023 with comprehensive coverage of four East Asian countries. To overcome global spatial heterogeneity, we further upgraded the framework to the Explainable Mamba U-Net (XM-UNet). The model provides a physically explainable interpretation of multi-temporal Sentinel-1 SAR data and generalizes robustly across countries with diverse cultivation patterns. In addition, we constructed the world's first plot-level rice dataset, Plot-Rice v1.0, with the SAM-2 model and Sentinel-1/2 features. Covering various climatic zones, the dataset supports multiple mainstream deep learning models and demonstrates strong transferability in cross-regional and cross-annual scenarios.

As a result of the achievements outlined above, we provided the 20 m global rice product in 2023 to support the assessments of the UN’s SDG 2 indicators, as detailed in the Reports on Big Earth Data in Support of the Sustainable Development Goals in 2024 and 2025. This study presents the latest progress of our research on advanced intelligent models for global rice mapping. Meanwhile, we are developing an in-season rice mapping methodology to improve the timeliness of rice distribution information compared with current mainstream post-season rice mapping methods.

How to cite: Xu, L., Zhang, H., Guo, H., Song, M., Zuo, L., and Xie, Y.: Global Rice Mapping Driven by Intelligent Models and Big Earth Data Supporting Progress Assessment of SDG 2, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-13294, https://doi.org/10.5194/egusphere-egu26-13294, 2026.

11:35–11:45
|
EGU26-15427
|
ECS
|
On-site presentation
Ratneel Deo, Patrick Filippi, and Thomas Bishop

Accurate mapping of weed infestations in fallow farmlands is critical for supporting sustainable weed management and reducing unnecessary chemical inputs. Previous work has demonstrated the effectiveness of convolutional encoder–decoder architectures, such as U-Net, for weed segmentation from satellite imagery; however, these approaches are typically constrained by sensor-specific training, limited cross-site generalisation, and sensitivity to variations in spectral and spatial resolution. In this study, we investigate the application of the ANYSAT foundation model [1] for sensor-agnostic weed segmentation on heterogeneous fallow farms across Australia. Building on an established U-Net-based workflow, we evaluate whether a foundation model pretrained on diverse Earth observation data can improve robustness and transferability across multiple satellite sensors without explicit sensor-dependent retraining. Multi-spectral satellite imagery from different platforms is used to fine-tune ANYSAT for semantic segmentation of weed presence in fallow paddocks, with human-curated and U-Net-refined weed masks serving as supervisory labels. We design a systematic evaluation strategy based on leave-one-farm and leave-one-region validation to test model robustness under spatial and spectral variability. Rather than focusing on achieved performance, this work emphasises assessing feasibility, identifying the strengths and limitations of foundation-model-based segmentation for this task, and outlining key considerations for operational deployment in data-sparse agricultural settings. By framing weed detection as a sensor-agnostic problem, this study provides a structured pathway for testing foundation models in agroecosystem monitoring. It contributes to understanding how emerging Earth observation foundation models can be adapted for practical agricultural applications.
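The leave-one-farm and leave-one-region validation amounts to a grouped split; a minimal sketch with toy sample records (the "farm"/"region" keys are illustrative):

```python
def leave_one_group_out(samples, key):
    """Yield (held_out, train, test) splits, holding out one group at a time."""
    groups = sorted({s[key] for s in samples})
    for held_out in groups:
        train = [s for s in samples if s[key] != held_out]
        test = [s for s in samples if s[key] == held_out]
        yield held_out, train, test

samples = [
    {"farm": "F1", "region": "NSW", "weed": 1},
    {"farm": "F1", "region": "NSW", "weed": 0},
    {"farm": "F2", "region": "NSW", "weed": 1},
    {"farm": "F3", "region": "QLD", "weed": 0},
]
for farm, train, test in leave_one_group_out(samples, "farm"):
    print(farm, len(train), len(test))
```

Switching `key` to "region" gives the coarser leave-one-region splits used to probe spatial transferability.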

[1] Astruc, Guillaume, et al. "AnySat: One Earth Observation Model for Many Resolutions, Scales, and Modalities." Proceedings of the Computer Vision and Pattern Recognition Conference. 2025.

How to cite: Deo, R., Filippi, P., and Bishop, T.: Weed Segmentation in Fallow Farmlands Using the ANYSAT Foundation Model , EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-15427, https://doi.org/10.5194/egusphere-egu26-15427, 2026.

11:45–11:55
|
EGU26-16567
|
On-site presentation
Claudia Paris, Mehmet Furkan Celik, Stefano Maurogiovanni, Rocco Sedona, Gabriele Cavallaro, Ruben Cartuyvels, and Valerio Marsocci

Recent advances in Earth Observation (EO) data and multimodal Geo-Foundation Models have sharply improved the ability to generate accurate crop-type maps by leveraging rich spatio-temporal representations. These models are inherently scalable across diverse and heterogeneous agricultural landscapes, thus exhibiting strong generalisation. However, timely and high-quality reference data remain a major bottleneck for reliable agricultural mapping and monitoring. Agricultural landscapes are highly dynamic, with frequent crop rotations that require seasonal or annual updates. In addition, European agriculture is increasingly affected by weather extremes (e.g., droughts, hail, and storms), which are expected to intensify in both magnitude and frequency.

Traditional approaches rely on time-consuming and costly manual annotations or field surveys, which are difficult to sustain on a continuous basis and at large spatial scales (e.g., continental monitoring). In this context, geo- and time-tagged field photos represent a promising complementary data source. Each field photo can be linked to satellite image time series acquired over the same location up to the acquisition date. Compared to conventional in-situ surveys based on manual annotations, the combined use of satellite image time series and field photos provides a richer semantic representation of agricultural areas. While satellite image time series capture the temporal dynamics of crop development, field photos offer ground-level information at high resolution on crop condition, phenological stage, and management practices.

Despite their potential, the operational use of field photos in agricultural monitoring remains limited, in part due to challenges in translating heterogeneous images into structured information. Recent advances in Vision–Language Models (VLMs) have unlocked substantial progress in the automatic interpretation and semantic extraction of information from raw field photos. By aligning visual features with semantic concepts expressed in natural language, VLMs provide a powerful mechanism for mapping unstructured field photos to standardised crop-type labels.

This study investigates the potential of combining satellite image time series and geo-tagged field photos to expand, update, and complement existing reference datasets to support continuous large-scale agricultural monitoring. Preliminary results of mapping seven crop types (i.e., maize, wheat, rape, sugarbeet, oat, barley, and sunflower) in Europe indicate that, even in a zero-shot setting and when using simple prompts, the CLIP VLM can correctly identify crop types from field photos when a distinct phenological stage is visible. Incorporating phenological information derived from the temporal patterns of satellite image time series is therefore crucial, as it allows for the filtering of irrelevant images (e.g., post-harvest fields) and the selection of samples for which reliable classification is feasible. Furthermore, when consistency of label predictions obtained independently from field photos (using CLIP) and from Sentinel-1 and Sentinel-2 time series (using a simple Random Forest classifier) is used as a data reliability strategy, highly accurate classification performance across all considered crop types can be obtained. Overall, these findings highlight the strong potential of jointly exploiting satellite image time series and geo-tagged field photos for the efficient and reliable preparation of crop-type reference datasets.
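The label-consistency strategy described above amounts to keeping only samples on which the two independent predictions agree; a minimal sketch (hypothetical sample ids):

```python
def consistent_samples(photo_labels, sat_labels):
    """Keep samples whose photo-based (e.g. CLIP) and satellite-based
    (e.g. Random Forest) crop-type labels agree."""
    return {
        sid: lab
        for sid, lab in photo_labels.items()
        if sat_labels.get(sid) == lab
    }

photo = {"p1": "maize", "p2": "wheat", "p3": "barley"}
sat   = {"p1": "maize", "p2": "rape",  "p3": "barley"}
print(consistent_samples(photo, sat))  # {'p1': 'maize', 'p3': 'barley'}
```

Disagreeing samples (here "p2") are discarded rather than propagated into the reference dataset.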

How to cite: Paris, C., Celik, M. F., Maurogiovanni, S., Sedona, R., Cavallaro, G., Cartuyvels, R., and Marsocci, V.: From Satellite Data and Geo-tagged Field Photos to Reliable  Agricultural Reference Data, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-16567, https://doi.org/10.5194/egusphere-egu26-16567, 2026.

11:55–12:05
|
EGU26-14192
|
ECS
|
On-site presentation
Yunze Zang, Xuehong Chen, Miaogen Shen, Wei Yang, Anton Vrieling, Claudia Paris, Bingwen Qiu, Lang Xia, Shangrong Wu, and Jin Chen

As a globally vital oilseed, rapeseed necessitates precise in-season mapping to support field management. Furthermore, the accurate retrieval of peak flowering dates is critical for yield estimation, as this phenological stage directly correlates with crop productivity. While state-of-the-art methods have advanced both crop mapping and phenology retrieval, existing approaches predominantly address these tasks in isolation, thereby neglecting their inherent phenological interdependence. Specifically, in-season mapping is often confounded by early-season phenological heterogeneity across fields and regions, whereas flowering retrieval typically relies on the prerequisite of an accurate a priori crop map. To address these limitations, this study introduces a multi-task Transformer-based framework that simultaneously maps rapeseed and retrieves peak flowering dates using Sentinel-1 and Sentinel-2 time series. Reliable training samples were automatically generated via phenology-based rules applied to cloud-free time series. To enhance robustness against cloud contamination, a data augmentation strategy was introduced that masks Sentinel-2 observations using real-cloud temporal masks to simulate realistic data unavailability. The proposed architecture integrates a dual-task framework with adaptive loss weighting to dynamically balance learning gradients between tasks. Extensive validation across 13 European countries, covering a flowering gradient of up to two months, demonstrates that the proposed method achieves an F1-score of 0.89 for rapeseed mapping four months prior to harvest, and a Mean Absolute Error (MAE) of 6 days for peak flowering retrieval. These results substantially outperform both conventional sequential single-task baselines and specialized state-of-the-art methods. Independent validation against phenological records from the German Weather Service (DWD) further confirms the robustness of the proposed method in flowering retrieval.
To provide interpretable insights into the model's effectiveness, we analyzed Transformer attention maps and band importance. These visualizations substantiate that the multi-task model effectively extracts task-shared spectral-temporal features, offering a clear and interpretable basis for its enhanced generalization. Overall, this study presents a practical, scalable solution for integrated, large-scale rapeseed monitoring, demonstrating a robust framework that is adaptable to the integrated monitoring of other crops.
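The abstract does not specify the adaptive loss-weighting scheme; one widely used option is homoscedastic-uncertainty weighting in the style of Kendall et al., sketched here as an assumption rather than the authors' exact loss:

```python
from math import exp

def multitask_loss(loss_map, loss_flower, s_map, s_flower):
    """Each task loss is scaled by exp(-s) with a +s regulariser;
    the learnable s values balance gradients between the two tasks."""
    return (exp(-s_map) * loss_map + s_map
            + exp(-s_flower) * loss_flower + s_flower)

# With s = 0 for both tasks this reduces to the plain sum of the losses
print(round(multitask_loss(0.4, 1.2, 0.0, 0.0), 2))  # 1.6
# A larger s down-weights a noisy task but pays the +s penalty
print(round(multitask_loss(0.4, 1.2, 0.0, 1.0), 2))  # 1.84
```

In training, the s values are optimized jointly with the network weights so that neither the mapping nor the flowering-retrieval task dominates.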

How to cite: Zang, Y., Chen, X., Shen, M., Yang, W., Vrieling, A., Paris, C., Qiu, B., Xia, L., Wu, S., and Chen, J.: Integrated In-season Rapeseed Mapping and Flowering Retrieval: A Multi-task Transformer Framework Using Sentinel-1 and Sentinel-2, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-14192, https://doi.org/10.5194/egusphere-egu26-14192, 2026.

12:05–12:15
|
EGU26-15565
|
ECS
|
On-site presentation
Tin Satriawan and Xiangzhong Luo

Indonesia is currently the world's third-largest coffee producer, with approximately 20% of its exports destined for European Union (EU) markets. Recent policy developments, such as the EU Deforestation Regulation (EU-DR), impose stringent traceability requirements on coffee imports to the EU, requiring a spatially explicit linkage between coffee products and their production areas. Consequently, there is an urgent need for accurate coffee mapping to support compliance, monitoring, and benchmarking. In this study, we map coffee distribution across Indonesia using multi-temporal imagery, integrating optical imagery from the Harmonized Landsat and Sentinel-2 (HLS) dataset, radar imagery from Sentinel-1, and auxiliary environmental data (i.e., topography and distance to human settlements) using Random Forest classification in Google Earth Engine. Specifically, we aim to (1) produce annual maps of monoculture and agroforestry coffee distribution from 2016 to 2024 and (2) assess coffee-related land use changes over this period. The resulting maps will provide critical information on regional coffee distribution to support sustainable land management and future carbon modelling.

How to cite: Satriawan, T. and Luo, X.: Mapping the distribution of coffee agroecosystems in Indonesia from 2016 to 2024, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-15565, https://doi.org/10.5194/egusphere-egu26-15565, 2026.

12:15–12:25
|
EGU26-6052
|
ECS
|
On-site presentation
Abhasha Joshi, Patrick Filippi, and Thomas Bishop

Detection of biotic and abiotic stress at the field level is an important crop monitoring task with varied applications, including the delineation of management zones and targeted management interventions. Satellite remote sensing provides extensive spatial and temporal coverage for this purpose; however, automated stress detection is constrained by a lack of field-level ground-truth data required to train supervised models. This study develops and evaluates an unsupervised anomaly-detection workflow for identifying biotic and abiotic crop stress using openly available Sentinel-2 satellite imagery, without relying on ground-truth labels. The workflow is based on an Isolation Forest incorporating within-season time-series data that include spectral bands and vegetation indices. Unlike traditional statistical anomaly-detection methods, this model-based technique accommodates multivariate inputs and does not require the assumption of a normal data distribution. Multiple feature configurations were assessed, including visible, red-edge, near-infrared, and shortwave infrared bands, their combinations, and selected vegetation indices. Anomaly scores were computed across multiple image acquisition dates, and only regions consistently identified as anomalous over time were retained as persistent stress signals. The framework was evaluated across three different stress scenarios: frost damage, Septoria disease incidence, and nitrogen deficiency. Results show that the proposed approach successfully detected stress patterns across all sites, achieving accuracies of up to 83%. In addition, the experiments identified key spectral features that were particularly informative for detecting each specific type of stress. This workflow offers a scalable and operationally feasible option for crop stress detection in agricultural systems where ground-truth data are limited.
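The temporal-persistence step described above can be sketched as follows (toy anomaly scores; the sign convention follows scikit-learn's Isolation Forest decision function, where negative values indicate anomalies):

```python
def persistent_anomalies(scores_by_date, threshold=-0.1, min_dates=3):
    """Keep pixels whose anomaly score falls below `threshold`
    on at least `min_dates` acquisition dates."""
    counts = {}
    for date_scores in scores_by_date:
        for pixel, score in date_scores.items():
            if score < threshold:
                counts[pixel] = counts.get(pixel, 0) + 1
    return {p for p, c in counts.items() if c >= min_dates}

# Toy per-pixel anomaly scores on three acquisition dates
dates = [
    {"a": -0.30, "b": 0.05, "c": -0.20},
    {"a": -0.25, "b": -0.15, "c": 0.10},
    {"a": -0.40, "b": 0.02, "c": -0.12},
]
print(persistent_anomalies(dates, min_dates=3))  # {'a'}
```

Requiring persistence across dates filters out one-off artefacts (clouds, shadows) so that only stable stress signals remain.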

How to cite: Joshi, A., Filippi, P., and Bishop, T.: Unsupervised detection of biotic and abiotic crop stress using Sentinel-2 time series and Isolation Forest, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6052, https://doi.org/10.5194/egusphere-egu26-6052, 2026.

12:25–12:30
Lunch break
Chairpersons: Sheng Wang, Stefan Erasmi
Modeling
14:00–14:10
|
EGU26-13185
|
ECS
|
On-site presentation
Aolin Jia, Helge Aasen, and Nina Buchmann

Reliable quantification of agricultural water use and greenhouse gas (GHG) emissions is essential for understanding and mitigating the environmental footprint of food production. However, it remains challenging due to the limited spatial representativeness of in-situ measurements and the strong influence of vegetation dynamics, management practices, and weather variability. Eddy-covariance (EC) observations provide direct and high-frequency measurements of evapotranspiration (ET) and GHG fluxes, but their footprint is inherently local, constraining their applicability for regional and national assessments. Satellite remote sensing (RS) offers spatially continuous information on vegetation status and land cover, yet its effective integration with flux observations for process-relevant upscaling remains limited.

In this contribution, we provide first insights from a synthesis of recent field-scale literature, comprising over 300 ET studies and more than 400 GHG-focused studies, to assess how remote sensing information has been incorporated into ET and GHG flux modelling. Our review indicates a clear divergence in modelling development trajectories across flux types. Earlier ET studies were largely dominated by physically based formulations, such as Penman–Monteith and surface energy balance models. Over the past five years, ET modelling has shifted toward data-driven and machine-learning approaches, enabling the integration of a broader range of satellite-derived predictors, including vegetation indices and shortwave infrared (SWIR)-based indicators related to soil moisture conditions. Net ecosystem exchange (NEE) exhibits a similar transition from process-based to data-driven modelling frameworks, reflecting improved data availability and methodological flexibility.

In contrast, modelling of other GHG fluxes, particularly CH4 and N2O, remains largely confined to process-based approaches, with DNDC and DayCent being the most widely applied models. This persistence primarily reflects the limited availability of long-term, high-quality ground-based GHG flux measurements. Moreover, RS-based information on soil moisture and temperature, vegetation status, and land-use or management practices offers potential to better inform and constrain GHG flux estimates in agricultural systems. These findings highlight a persistent gap between the availability of spatially explicit satellite information and its current use in GHG flux modelling, pointing to substantial opportunities for improved integration of remote sensing and in-situ flux observations in future upscaling efforts.

How to cite: Jia, A., Aasen, H., and Buchmann, N.: Integrating Eddy-Covariance and Satellite Data to Upscale ET and GHG Fluxes across Swiss Agricultural Systems, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-13185, https://doi.org/10.5194/egusphere-egu26-13185, 2026.

14:10–14:20
|
EGU26-2168
|
On-site presentation
Darren Drewry, James Cross, Sana Shirazi, Srishti Gaur, Kanishka Mallick, Guler Aslan-Sungur, and Andy Vanloocke

Machine learning methods provide a powerful basis for developing flexible, non-parametric models of complex phenomena and have demonstrated strong predictive capabilities across many areas of the physical sciences generally and the earth sciences specifically. While machine learning methods have been demonstrated to be flexible predictive tools capable of integrating diverse data streams, they present significant challenges in terms of interpretability and generalizability. This is especially true in the context of ecohydrological or biophysical systems, where the objective is often to develop a better understanding of the underlying system rather than exclusively improve predictive performance. There is a growing recognition that interpretability, physical consistency, and data complexity are key challenges in the successful adoption of machine learning methodologies. Here we evaluate the application of machine learning methods to produce models for land-atmosphere water vapor exchange across a set of diverse agricultural systems. Specific focus is placed on the use of environmental and proximal sensing information to develop simple and effective models of evapotranspiration using both machine learning and hybrid modeling approaches that leverage the advantages of machine learning and biophysical simulation. Emphasis is placed on parsimonious model development and interpretability of model performance.

How to cite: Drewry, D., Cross, J., Shirazi, S., Gaur, S., Mallick, K., Aslan-Sungur, G., and Vanloocke, A.: Machine Learning and Proximal Sensing for Predicting Evapotranspiration of Agricultural Systems, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-2168, https://doi.org/10.5194/egusphere-egu26-2168, 2026.

14:20–14:30
|
EGU26-6102
|
ECS
|
On-site presentation
Qu Zhou, Zhixian Lin, Kaiyu Guan, Sheng Wang, and Xiangzhong Luo

The Mekong Delta contributes approximately 7–10% of the global rice trade, produced by about 1.5 million small-scale farmers. Understanding how field size affects rice yield is critical for advancing the sustainability and resilience of smallholder farming systems in the Mekong Delta. However, rice yield variability across field sizes remains poorly understood, due to the complexity of rice cropping systems and the lack of accurate field boundaries for smallholder farms in this region. In this study, we delineated field boundaries across the Mekong Delta using 3-m PlanetScope imagery and analyzed rice yield patterns across field sizes using near-infrared reflectance of vegetation (NIRv) as a yield proxy, derived from 10-m Sentinel-2 observations spanning 2019–2025. To delineate smallholder farms, we fine-tuned the Segment Anything Model (SAM), which generated field boundaries with an accuracy of 78%, an F1-score of 0.54, and a Matthews correlation coefficient (MCC) of 0.40. Using these boundaries, we assessed rice yield variability across field sizes and found that yield increased with field size (r = 0.30, p < 0.001). This relationship remained stable across years, indicating that smaller farms consistently experienced lower yields. This study contributes to understanding rice yield patterns within smallholder farming systems for better management practices in the Mekong Delta.
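The NIRv yield proxy used here has a simple closed form, NIRv = NDVI × NIR. A minimal sketch, with toy reflectance values for three hypothetical paddy fields (the band values are illustrative, not Sentinel-2 data):

```python
import numpy as np

def nirv(nir, red):
    """Near-infrared reflectance of vegetation: NIRv = NDVI * NIR.

    NIRv is commonly computed from Sentinel-2 band 8 (NIR) and
    band 4 (red) surface reflectance and used as a productivity/yield proxy."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    ndvi = (nir - red) / (nir + red)
    return ndvi * nir

# Toy reflectances for three hypothetical paddy fields of increasing vigor.
nir = np.array([0.30, 0.40, 0.50])
red = np.array([0.10, 0.08, 0.05])
print(nirv(nir, red))  # NIRv rises with canopy greenness
```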

How to cite: Zhou, Q., Lin, Z., Guan, K., Wang, S., and Luo, X.: Assessing the dependence of paddy rice yield on field size across the Mekong Delta, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6102, https://doi.org/10.5194/egusphere-egu26-6102, 2026.

14:30–14:40
|
EGU26-12127
|
solicited
|
Highlight
|
On-site presentation
Ji Chen, Shuo Liu, and Siyi Sun

The stabilization of soil organic carbon (SOC) in forest ecosystems is crucial for mitigating climate change. However, the interaction of mycorrhizal associations with environmental factors to influence SOC fractions globally remains poorly understood. Here, we synthesize 2,784 observations from 234 peer-reviewed studies to examine global patterns of particulate (POC) and mineral-associated organic carbon (MAOC) in forests dominated by arbuscular mycorrhizal (AM) versus ectomycorrhizal (ECM) associations. Our results reveal that ECM forests possess 24% higher POC content and exhibit greater sensitivity to climate warming than AM forests. In contrast, AM forests sustain higher MAOC content, which shows less variability across climate gradients. Linear mixed-effects models indicate distinct responses of POC and MAOC to the interactive effects of mycorrhizal type and environmental drivers. Notably, POC content in ECM forests increases with stand age. While young AM forests contain higher levels of both POC and MAOC, middle-aged and mature ECM forests surpass AM forests in POC, with no significant difference in MAOC. Using existing data, we project global changes in these SOC fractions and propose a mycorrhiza-informed framework for forest carbon sequestration. Our findings underscore the pivotal role of mycorrhiza-environment interactions in SOC partitioning and stabilization, offering critical insights for refining global carbon models and guiding climate-smart forest management.

How to cite: Chen, J., Liu, S., and Sun, S.: Mycorrhiza-mediated distribution of particulate and mineral-associated organic carbon across global forests, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-12127, https://doi.org/10.5194/egusphere-egu26-12127, 2026.

14:40–14:50
|
EGU26-11089
|
ECS
|
On-site presentation
Zizhang Zhao, Jinwei Dong, Jilin Yang, Luo Liu, Nanshan You, Xiangming Xiao, and Geli Zhang

Information on the rice agricultural system, including not only planting area but also phenology and cropping intensity, is critical for advancing our understanding of food and water security, methane emissions, carbon and nitrogen cycles, and avian influenza transmission. However, existing efforts primarily focus on mapping planting area and lack a comprehensive picture of the rice agricultural system. To address this gap, we propose a remote sensing-based comprehensive framework for mapping the rice agricultural system in China: First, we identified valid crop growth cycles using fused 30-m Sentinel-2 and Landsat data; Second, we applied a well-established phenology-based algorithm to map rice planting areas, by extracting the flooding and rapid growth signals in the transplanting and rapid growth temporal windows; Third, the rice-specific phenology phases (i.e., transplanting, tillering, heading, and maturity) were identified using a phenology extraction method tailored to the physiological characteristics of rice; Finally, rice cropping intensity was determined based on detailed phenological phases and planting area data. Due to the accurate identification of crop cycles and pixel-level temporal windows at the national scale, the generated rice planting area maps exhibit a high accuracy across China, with both overall accuracy and F1 scores exceeding 0.8, based on validation with over 13,000 field samples. Improvements in the extraction method have addressed the lag in phenology detection caused by rice's flooded environment, leading to more accurate phenological information. As a result, the phenological data shows reliable accuracy (R2 of 0.6–0.8 and RMSE of 8–15 days), facilitated by the mutual enhancement of rice planting area and phenology mapping. The resultant rice phenology and cropping intensity maps are the first of their kind with 30 m resolution.
Together, the resultant rice planting area, rice phenology, and cropping intensity maps provide, for the first time, a comprehensive picture of China's rice agricultural system, better supporting multiple targets related to Sustainable Development Goals.
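The flooding and rapid-growth signals at the core of phenology-based paddy mapping can be sketched on a toy time series. The LSWI-versus-EVI flooding rule, the 0.05 threshold, and the index values below are illustrative assumptions, not the framework's actual parameters:

```python
import numpy as np

# Toy within-season time series (10 acquisitions) for one paddy pixel:
# EVI rises after transplanting; LSWI is high while the field is flooded.
doy  = np.array([100, 110, 120, 130, 140, 150, 160, 170, 180, 190])
evi  = np.array([0.10, 0.12, 0.15, 0.25, 0.40, 0.55, 0.65, 0.70, 0.65, 0.55])
lswi = np.array([0.25, 0.30, 0.28, 0.20, 0.15, 0.10, 0.08, 0.05, 0.05, 0.04])

# Flooding/transplanting signal (after the classic LSWI + T > EVI rule used
# in phenology-based paddy mapping; the 0.05 offset is illustrative).
flooded = lswi + 0.05 > evi
transplant_doy = doy[np.flatnonzero(flooded)[-1]]  # last flooded date

# Rapid-growth signal: steepest EVI increase marks the rapid-growth window.
growth_rate = np.gradient(evi, doy)
rapid_doy = doy[np.argmax(growth_rate)]

# Heading approximated by the EVI maximum.
heading_doy = doy[np.argmax(evi)]
print(transplant_doy, rapid_doy, heading_doy)
```

A national-scale implementation would apply this logic per pixel with adaptive, pixel-level temporal windows, as the abstract describes.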

How to cite: Zhao, Z., Dong, J., Yang, J., Liu, L., You, N., Xiao, X., and Zhang, G.: From rice planting area mapping to rice agricultural system mapping: A holistic remote sensing framework for understanding China's complex rice systems, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-11089, https://doi.org/10.5194/egusphere-egu26-11089, 2026.

14:50–15:00
|
EGU26-19714
|
ECS
|
On-site presentation
Rodolfo Ceriani, Monica Pepe, Mirco Boschetti, and Francesco Fava

Over the last decade, innovations in satellite remote sensing (RS) and data science have widened the scope and relevance of agricultural monitoring and management applications at farm and territorial scales. Recently launched and upcoming hyperspectral satellite missions (e.g., ASI-PRISMA, DLR-ENMAP, Planet-Tanager, ESA-CHIME, Kuva-Hyperfield) provide high spectral resolution (< 10 nm) across the 400-2500 nm range, opening new frontiers for assessing biophysical and biochemical functional traits of agroecosystems, while advancements in machine learning (ML) and artificial intelligence (AI) allow the efficient exploitation of the information content of high-dimensionality spectral datasets.   
Here we summarize the results and lessons learned from three experiments in different European agricultural systems (croplands and grasslands), analysing how the synergy between hyperspectral imaging spectroscopy (field and satellite), ML, and foundation models could support adaptive agroecosystem management through the retrieval of vegetation and soil properties related to the nutrient cycle. The three case studies are: (1) Assessment of biomass and nutritional quality of Alpine pastures: Gaussian Process Regression (GPR) models were calibrated on 250 vegetation samples and field spectra collected in 2024 and 2025 from semi-natural pastures in Valtournanche (Aosta, Italy) and Val Camonica (Brescia, Italy). PRISMA-derived maps of biomass, leaf-level protein content, and fiber content showed good accuracy against in-situ data (LAI [-] R2 = 0.71, RMSE = 0.89; Biomass [g · m-2] R2 = 0.67, RMSE = 178.71; DM [%] R2 = 0.65, RMSE = 2.70; CP [%] R2 = 0.58, RMSE = 0.52; ADF [%] R2 = 0.45, RMSE = 2.42; NDF [%] R2 = 0.50, RMSE = 0.61), allowing mapping of pasture metabolizable energy in support of grazing management.

(2) Monitoring of nutritional status of paddy fields: GPR models were developed on 200 vegetation samples and field spectra collected in 2024 and 2025 in several fields located in the Ferrara region (Italy). These models, demonstrated on PRISMA and EnMAP time-series, effectively monitored crop development across a temporally and spatially independent test set (LAI [-] R2 = 0.83, RMSE = 0.30; Fresh Biomass [g · m-2] R2 = 0.72, RMSE = 627.67; LCC [μg · cm-2] R2 = 0.58, RMSE = 3.40; LNC [μg · cm-2] R2 = 0.34, RMSE = 22.51; CNC [g · m-2] R2 = 0.56, RMSE = 0.77).

(3) Retrieval of Soil Organic Carbon (SOC) and soil Nitrogen (N) on arable lands: A transformer-based, sensor-agnostic deep learning architecture was fine-tuned on open global spectral libraries. When applied to EMIT and Tanager-1 imagery over the Po Plain (Italy) and Northern Netherlands regions, the model yielded high accuracy (SOC [%] R2 = 0.61, MAE = 0.37; N [%] R2 = 0.68, MAE = 0.12) against 289 independent field observations.

Our findings demonstrate that satellite hyperspectral spectroscopy can complement operational multi-spectral missions, adding key information about agroecosystem nutritional status. Furthermore, we show that the use of ML and AI models pre-trained on global spectral libraries and field reflectance data allows accurate retrieval even in the absence of ground-truth acquisitions synchronous with the satellite overpass, offering a potentially scalable solution for agroecosystem management and monitoring at the landscape scale.
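The GPR retrieval step used in case studies (1) and (2) can be sketched with scikit-learn. The synthetic spectra, the LAI-dependent toy forward relation, and the train/test split below are illustrative assumptions standing in for the paired field spectra and destructive samples:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Synthetic stand-in for field data: 120 canopy spectra (50 bands) whose
# shape depends on a latent trait (here LAI); a real workflow would pair
# PRISMA/EnMAP reflectance with vegetation samples.
n, bands = 120, 50
lai = rng.uniform(0.5, 6.0, n)
wl = np.linspace(0, 1, bands)
spectra = (0.1 + 0.08 * lai[:, None] * np.exp(-((wl - 0.7) ** 2) / 0.02)
           + rng.normal(0, 0.005, (n, bands)))

# GPR with an RBF kernel plus a noise term, a common choice for
# hyperspectral trait retrieval; hyperparameters are optimised in fit().
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                               normalize_y=True, random_state=0)
train, test = np.arange(90), np.arange(90, n)
gpr.fit(spectra[train], lai[train])
pred, std = gpr.predict(spectra[test], return_std=True)  # mean + uncertainty

r2 = 1 - np.sum((pred - lai[test]) ** 2) / np.sum((lai[test] - lai[test].mean()) ** 2)
print(f"hold-out R2 = {r2:.2f}")
```

A practical advantage of GPR here is the per-pixel predictive standard deviation, which can flag spectra poorly represented by the calibration set.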

How to cite: Ceriani, R., Pepe, M., Boschetti, M., and Fava, F.: Advancing spaceborne remote sensing for agroecosystem adaptive management, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-19714, https://doi.org/10.5194/egusphere-egu26-19714, 2026.

15:00–15:10
|
EGU26-19004
|
ECS
|
On-site presentation
Coline Girod, Pierre Barré, Rémy Fieuzal, Eric Ceschia, Nicolas Chemidlin Prevost-Bouré, Pierre-Alain Marron, Lionel Ranjard, and Anne Hermand

Soil organic carbon (SOC) plays a key role in climate regulation and soil functioning. Although SOC stock and stability result from complex interactions between environmental, biological, and anthropogenic factors, the hierarchy of these drivers is strongly scale-dependent. At large spatial scales, climatic forcing often dominates SOC patterns, potentially masking the effects of land management. At local scales, where climatic variability is reduced, the relative influence of agricultural practices compared to landscape heterogeneity (e.g., topography and soil properties) remains poorly quantified, notably due to the scarcity of datasets combining soil properties and high-resolution management data. Clarifying this balance is essential for designing effective climate mitigation strategies in agricultural systems.

We investigated the drivers of SOC stock and stability within a 3,000 km² heterogeneous agricultural territory in Burgundy (France). The territory spans a diverse landscape, transitioning from western limestone plateaus to agricultural plains in the east. SOC measurements (0–20 cm) from 147 cropland sites were combined with 18 explanatory variables derived from in situ measurements, field surveys, or satellite data and describing topography, climate, soil physico-chemical properties, vegetation dynamics, and contrasting agricultural management (diverse crop rotations, residue management, the use of cover crops, and organic amendments). SOC stable and active fractions were quantified using Rock-Eval® thermal analysis coupled with the PARTYsoc learning model. The SOC stocks averaged 41.7 ± 13.9 tC.ha-1, with the active (Ca) and stable (Cs) stocks representing 19.9 ± 8.3 tC.ha-1 and 21.8 ± 6.1 tC.ha-1, respectively.

Random Forest models were used to capture non-linear relationships between SOC variables and their drivers, and SHAP (SHapley Additive exPlanations) values were applied to quantify the relative importance and direction of individual drivers. Model performance reached a coefficient of determination (R2) of 0.41 for SOC stocks, and 0.50 and 0.26 for the Ca and Cs stocks, respectively. The lower R2 for Cs likely reflects missing explanatory variables related to historical land use or specific soil mineralogy.

SHAP analysis revealed that even at local scales (a few km), soil properties and climate remain the dominant drivers of SOC stock and stability in this study. Nevertheless, management-related factors, such as crop residue management and number of vegetation days during the intercrop periods, exert a stronger influence on SOC stock than topographic variables. Patterns differ among pools: active carbon is mainly influenced by CaCO₃, temperature, and precipitation, whereas clay content dominates the stable carbon fraction.

Our results demonstrate that while soil and climate largely control SOC stocks at local scales in the context of a highly heterogeneous terrain, agricultural management can meaningfully influence SOC dynamics and stability, highlighting opportunities for targeted strategies to enhance soil carbon sequestration.
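The Random Forest driver-ranking step can be sketched as follows. The site table, driver names, and synthetic SOC response are hypothetical, and permutation importance is used here as a lighter model-agnostic stand-in for SHAP (which the study used to recover the direction of each driver as well):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(2)

# Hypothetical site table: 147 cropland sites x 4 of the 18 drivers.
n = 147
clay   = rng.uniform(5, 45, n)       # clay content, %
temp   = rng.uniform(8, 13, n)       # mean annual temperature, degC
precip = rng.uniform(600, 1000, n)   # annual precipitation, mm
resid  = rng.integers(0, 2, n)       # residue management (0/1)
X = np.column_stack([clay, temp, precip, resid])

# Synthetic SOC response: soil and climate dominate, management matters less,
# mirroring the hierarchy reported in the abstract.
soc = 20 + 0.5 * clay - 1.5 * temp + 0.01 * precip + 2.0 * resid + rng.normal(0, 2, n)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, soc)

# Permutation importance as a stand-in for SHAP-based driver ranking.
imp = permutation_importance(rf, X, soc, n_repeats=10, random_state=0)
for name, score in zip(["clay", "temp", "precip", "residue"], imp.importances_mean):
    print(f"{name:8s} {score:.3f}")
```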

How to cite: Girod, C., Barré, P., Fieuzal, R., Ceschia, E., Chemidlin Prevost-Bouré, N., Marron, P.-A., Ranjard, L., and Hermand, A.: Understanding Local Drivers of Soil Organic Carbon Stocks and Stability Using SHAP Analysis in an Agricultural Territory of Eastern France, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-19004, https://doi.org/10.5194/egusphere-egu26-19004, 2026.

15:10–15:20
|
EGU26-16104
|
ECS
|
On-site presentation
Fatima Benitez, Carlos Mena, and Anne Gobin

In the Galapagos Archipelago, agricultural abandonment and biological invasions act as synergistic forces, creating "novel ecosystems" that threaten both endemic biodiversity and local food security. While historical land cover changes are well-documented, the mechanisms determining when and where productive land is lost to invasion remain obscured by complex interactions between climatic legacies and anthropogenic pressure. This study presents a unified spatiotemporal framework to assess the susceptibility of island agroecosystems to three critical transitions: agricultural abandonment, invasive species expansion, and invasive conversion (die-back).

We integrated dense Sentinel-1 SAR time-series (2018–2024) with high-resolution climatic variables (CHIRPS/TerraClimate) across the agricultural highlands (≈25,000 ha). Our hybrid workflow fuses satellite event-dating (Vertex AI + PELT) with epidemiological Case-Crossover designs to pinpoint specific climatic triggers, followed by Bayesian Spatial Modeling (R-INLA) and Random Forest classifiers to map landscape susceptibility.

Our results reveal distinct spatiotemporal fingerprints with direct implications for farm management. Temporally, agricultural abandonment is triggered by persistent drought stress (longer dry spells); spatially, risk is critically clustered in Silvopasture and Mixed Forest zones, identifying these productive assets as "stepping stones" to total land abandonment. Conversely, invasive expansion exhibits a "Rainfall Paradox": it is primed by short-term wetting pulses, while spatially, the models detect a process of "densification" within existing patches rather than purely frontier expansion. Finally, invasive retreat (die-back) is linked to extreme wet spikes and heat interaction, and is spatially confined to high-elevation climatic niches, supporting the "Environmental Filtering" hypothesis where native resilience limits invasive establishment.

By coupling AI-driven event detection with physics-aware spatial statistics, we demonstrate that invasive dynamics are pulsed by climate "windows of opportunity." The resulting risk maps provide a dual-purpose baseline for the Galapagos National Park and the Ministry of Agriculture, facilitating targeted interventions to protect native ecosystems while reinforcing the resilience of farming systems against climatic shocks.
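The satellite event-dating idea can be illustrated with a minimal change-point detector. The two-segment squared-error search below is a simplified stand-in for the PELT detector named in the abstract (which handles multiple change points efficiently), and the backscatter series is synthetic:

```python
import numpy as np

def date_event(series):
    """Date a single abrupt transition in a time series by minimising the
    two-segment squared error (a minimal stand-in for PELT, which solves
    the multi-change-point version of this problem)."""
    series = np.asarray(series, dtype=float)
    best_k, best_cost = None, np.inf
    for k in range(2, len(series) - 2):
        left, right = series[:k], series[k:]
        cost = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

rng = np.random.default_rng(3)
# Toy Sentinel-1 VH backscatter (dB): a transition at step 40, e.g. invasive
# regrowth raising volume scattering after abandonment.
ts = np.concatenate([rng.normal(-18, 0.5, 40), rng.normal(-13, 0.5, 40)])
print(date_event(ts))
```

Dating events this way per pixel is what allows the case-crossover design to look up the climate conditions immediately preceding each transition.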

How to cite: Benitez, F., Mena, C., and Gobin, A.: AI-assisted event-dating of invasive transitions in Galápagos agroecosystems: Disentangling climate triggers and landscape susceptibility using Satellite Imagery and Bayesian–ML, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-16104, https://doi.org/10.5194/egusphere-egu26-16104, 2026.

15:20–15:30
|
EGU26-21855
|
ECS
|
Virtual presentation
Anasuya Barik, Paresh Shirsath, and Pramod Aggarwal

Climate stressors pose increasing risks to major oilseed cropping systems of groundnut, mustard, and soybean across South Asia, a region where these crops are critical for food security, livelihoods, and edible oil supply. Existing assessments often rely on aggregated climate indicators or generalized crop responses. This limits their usefulness for identifying suitable crop-specific adaptation options. This study advances current understanding of climate–oilseed interactions by adopting a physiology-based, adaptation-oriented framework that explicitly links biologically relevant climate stressors to the suitability of adaptation interventions under current and future climates.

We quantify multiple heat and rainfall-related stressors using crop-specific physiological thresholds and analyse their intensity and frequency under historical conditions and CMIP6-based future scenarios for the 2050s and 2080s. The analysis distinguishes between stress exposure over the full crop (cardinal) cycle and stress occurring during sensitive phenological windows, particularly the reproductive and pollination phases. Stressor projections are then linked to adaptation options using a logical, expert-reviewed heuristic framework that evaluates the feasibility and expected effectiveness of genetic, management, structural, irrigation, and financial interventions under increasing climate stress.

Our results show that the intensity of all heat-related stressors and the crop water deficit index are projected to increase substantially across oilseed-growing regions in South Asia. Rainfall-related stressors display mixed and spatially heterogeneous responses, reflecting uncertainty and regional differences in future precipitation patterns. Importantly, heat stress during the full crop cycle and during critical reproductive phases exhibits contrasting behaviour. Critical-phase heat stress is projected to increase mainly in frequency, implying more frequent exposure to damaging conditions during short, sensitive windows, whereas full-cycle heat stress is projected mainly to intensify.
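The distinction between full-cycle and critical-window stress metrics can be made concrete with a toy calculation; the 35 °C threshold, the reproductive window, and the temperature series are hypothetical illustrations, not the study's crop-specific values:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy daily maximum temperature for a 120-day crop season (degC).
tmax = rng.normal(31, 3, 120)
tmax[70:85] += 4  # a heat spell overlapping the reproductive window

T_CRIT = 35.0        # illustrative physiological threshold
repro = slice(65, 90)  # hypothetical reproductive/pollination window

def stress(t, thr):
    """Frequency (days above threshold) and intensity (mean exceedance, degC)."""
    exceed = np.clip(t - thr, 0, None)
    days = int((exceed > 0).sum())
    mean_int = float(exceed[exceed > 0].mean()) if days else 0.0
    return days, mean_int

full_days, full_int = stress(tmax, T_CRIT)
crit_days, crit_int = stress(tmax[repro], T_CRIT)
print(f"full cycle: {full_days} d above {T_CRIT}C, mean exceedance {full_int:.1f}C")
print(f"reproductive window: {crit_days} d, mean exceedance {crit_int:.1f}C")
```

Tracking frequency and intensity separately is what lets the analysis conclude that critical-phase stress grows mainly in frequency while full-cycle stress grows mainly in intensity.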

These changes have direct implications for adaptation planning. Genetic interventions and financial risk-transfer mechanisms emerge as the most consistently robust options across crops, regions, and emission pathways. In contrast, structural measures, nutrient management, and irrigation-based interventions progressively lose effectiveness as future heat and moisture stresses exceed the thresholds these measures can realistically buffer, with outcomes strongly dependent on emission trajectories.

By mapping transitions in stressor regimes and adaptation suitability, this study provides a first-order, spatially explicit basis for climate-smart adaptation planning in South Asian oilseed systems. The findings highlight the need for innovation focused on protecting critical phenological processes under future climate change.

How to cite: Barik, A., Shirsath, P., and Aggarwal, P.: Climate Stresses and Adaptation Pathways under Changing Climate for South Asian Oilseed Systems, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-21855, https://doi.org/10.5194/egusphere-egu26-21855, 2026.

15:30–15:40
|
EGU26-8451
|
On-site presentation
Weiguo Yu, Youngryel Ryu, Helin Zhang, Shaoyu Wang, and Huaize Feng

Near-real-time daily high-resolution estimates of crop gross primary productivity (GPP) are crucial for accurate biomass and yield estimation. The multiplicative combination of near-infrared reflectance of vegetation and photosynthetically active radiation (NIRvP) serves as a biophysically grounded proxy that enhances the responsiveness of GPP estimation. However, a mechanistic model for accurately estimating GPP using NIRvP remains lacking, which limits the potential for enhanced crop productivity assessments. In this study, we developed a model based on NIRvP and eco-evolutionary optimality (EEO) theory (NIRvP-EEO) with Sentinel-2 imagery and meteorological data on Google Earth Engine for crop GPP estimation without the need for calibration. Specifically, we integrated the Ball-Berry stomatal conductance model into NIRvP-EEO to balance carbon and water vapor fluxes. To enable near-real-time daily monitoring of crop GPP, we employed temporal-weighted interpolation and Whittaker-smoothing filtering methods to fill data gaps. Compared to benchmark models such as enhanced SatelLite Only Photosynthesis Estimation (ESLOPE), crop SLOPE (CSLOPE), GPP network (GPP-net) and P-model, the NIRvP-EEO model demonstrated improved daily GPP estimation for four major crops: corn, soybean, wheat, and rice. We found that NIRvP-EEO could reliably estimate GPP not only in drought and heatwave years but also in flood years. Additionally, the model effectively captures the fine spatial details and interannual variations in GPP for these crops. By leveraging the Google Earth Engine platform, our model enables near-real-time, daily, continuous monitoring of crop GPP at high spatial resolution anywhere in the world.
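The Whittaker-smoothing gap-filling step mentioned here has a compact sparse-matrix form: the smoothed series z minimises ||y − z||² + λ||Dᵈz||². A minimal sketch with a synthetic seasonal curve (the λ value and signal are illustrative, not the study's settings):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def whittaker(y, lam=100.0, d=2):
    """Whittaker smoother: z minimises ||y - z||^2 + lam * ||D^d z||^2,
    solved via the sparse normal equations (I + lam * D'D) z = y."""
    n = len(y)
    E = sparse.eye(n, format="csc")
    D = E
    for _ in range(d):
        D = D[1:] - D[:-1]          # d-th order difference operator
    return spsolve(E + lam * (D.T @ D), y)

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 100)
signal = np.sin(2 * np.pi * t)            # stand-in for a seasonal NIRvP curve
noisy = signal + rng.normal(0, 0.2, 100)  # noisy, as cloud-affected retrievals are
smooth = whittaker(noisy, lam=50.0)
print(round(float(np.abs(smooth - signal).mean()), 3))
```

In practice λ trades fidelity against smoothness, and missing dates can be handled by zero-weighting the corresponding observations.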

How to cite: Yu, W., Ryu, Y., Zhang, H., Wang, S., and Feng, H.: NIRvP-EEO: An NIRvP-based eco-evolutionary optimality model for near-real-time daily crop gross primary productivity estimation at field scale, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-8451, https://doi.org/10.5194/egusphere-egu26-8451, 2026.

15:40–15:45

Posters on site: Fri, 8 May, 10:45–12:30 | Hall X1

The posters scheduled for on-site presentation are only visible in the poster hall in Vienna. If authors uploaded their presentation files, these files are linked from the abstracts below.
Display time: Fri, 8 May, 08:30–12:30
Chairpersons: Helge Aasen, Stefan Erasmi, Sheng Wang
X1.62
|
EGU26-16890
Sheng Wang and the PANGEOS Aarhus workshop working group

Hyperspectral remote sensing provides opportunities for accurate and non-destructive retrieval of crop biophysical and biochemical traits based on physical radiative principles; yet robust and transferable retrieval approaches remain challenging. In this study, we systematically compared physically based, data-driven, and hybrid retrieval strategies for estimating leaf chlorophyll content (Cab) and leaf area index (LAI) from 400–2400 nm canopy hyperspectral reflectance measured with a field spectrometer. Using multi-temporal field observations of potato as a model crop collected across two experimental sites in the Netherlands under contrasting nitrogen and irrigation regimes, we evaluated (i) radiative transfer model inversion using the Soil Canopy Observation, Photochemistry, and Energy fluxes (SCOPE) model, (ii) pure data-driven approaches including bidirectional long short-term memory networks (Bi-LSTM) and Gaussian Process Regression (GPR), and (iii) two hybrid methods integrating radiative transfer simulations with machine learning, including GPR hybrid learning and a radiative transfer process-guided machine learning (PGML) framework. Results show that among the data-driven methods, GPR performed better than Bi-LSTM for Cab retrieval and slightly worse for LAI retrieval. PGML outperformed purely physical and data-driven methods, achieving the highest accuracy for Cab (R² = 0.81, RMSE = 5.41 μg cm⁻²) and LAI (R² = 0.53, RMSE = 0.64 m² m⁻²) in 10-fold cross-validation while requiring limited field measurements. Feature importance analysis revealed that PGML emphasized spectrally and biophysically meaningful regions, including the near-infrared plateau for LAI and the red-edge for Cab. Furthermore, hybrid-derived traits exhibited strong correlations with end-of-season potato yield across key growth stages, comparable to or exceeding those obtained from field measurements.
These findings demonstrate the value of hybrid learning for improving the robustness and interpretability of hyperspectral trait retrieval, supporting scalable crop monitoring and precision agriculture applications.
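The hybrid idea, training a regressor on radiative-transfer simulations and then inverting observed spectra, can be sketched with a toy forward model. The `toy_rtm` function below is an invented stand-in for SCOPE/PROSAIL, and the random forest is a generic learner, not the GPR-hybrid or PGML method of the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
wl = np.linspace(400, 2400, 60)

def toy_rtm(cab, lai):
    """Toy forward model standing in for a radiative transfer model:
    a chlorophyll absorption dip near 670 nm and an LAI-driven NIR plateau."""
    nir = 0.45 * (1 - np.exp(-0.6 * lai))
    dip = 0.06 * (cab / 80.0) * np.exp(-((wl - 670) ** 2) / 2000.0)
    return 0.08 + nir * (wl > 750) - dip

# Hybrid step 1: build a simulated training set from the physical model.
cab = rng.uniform(10, 80, 500)
lai = rng.uniform(0.5, 6.0, 500)
sims = np.array([toy_rtm(c, l) for c, l in zip(cab, lai)])
sims += rng.normal(0, 0.003, sims.shape)          # add sensor-like noise

# Hybrid step 2: train a regressor on simulations, then invert an "observed"
# spectrum without synchronous ground truth.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(sims, np.column_stack([cab, lai]))
obs = toy_rtm(45.0, 3.0) + rng.normal(0, 0.003, wl.size)
cab_hat, lai_hat = model.predict(obs.reshape(1, -1))[0]
print(f"retrieved Cab ~ {cab_hat:.0f} (true 45), LAI ~ {lai_hat:.1f} (true 3.0)")
```

The appeal of the hybrid route is exactly what the abstract notes: the labels come from the simulator, so field campaigns are needed only for validation.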

How to cite: Wang, S. and the PANGEOS Aarhus workshop working group: Retrieving Crop Traits from Canopy Hyperspectral Reflectance: A Comparative Assessment of Physical, Data-Driven, and Hybrid Models, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-16890, https://doi.org/10.5194/egusphere-egu26-16890, 2026.

X1.63
|
EGU26-18077
|
ECS
Yi-Chun Chen and Kuan-Hui Lin

In recent years, the growing global emphasis on biodiversity conservation within Social–Ecological Systems (SES) has catalyzed the development of Long-Term Socio-Ecological Research (LTSER). However, effectively integrating social and ecological data remains a significant challenge. Agroecosystems represent a classic example of human-nature coupled systems, where human agricultural management serves as the core driver. Despite this, most existing research focuses on the broad social or environmental impacts of agriculture, with relatively little attention paid to how specific management practices disturb the activity of local species.

This study focuses on the disturbances caused by agricultural management practices, including pruning, fertilization, pesticide application, and weeding, on avian activities within tea plantations. To achieve high temporal resolution, we utilize Passive Acoustic Monitoring (PAM) to collect soundscape data. These recordings are processed using SILIC, an AI-based biological sound identification and labeling system, to extract precise species and activity information.

To evaluate the short-term impacts of these practices, the research employs Bayesian proportion tests to compare changes in avian habitat occupancy before and after specific management interventions. Furthermore, this study aims to identify bird species that are particularly sensitive to certain agricultural activities and analyze their activity patterns. The findings will serve as a practical reference for conservation and agricultural authorities, enabling the optimization of management schedules to avoid peak avian activity periods and minimize ecological disturbance.
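A Bayesian proportion comparison of this kind has a simple conjugate form. The detection counts below are invented placeholders for the PAM-derived occupancy data, and the Beta(1, 1) prior is an assumption:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical acoustic-detection counts: the focal species was detected in
# 62 of 90 recording periods before pruning and 35 of 90 after.
before_k, before_n = 62, 90
after_k, after_n = 35, 90

# Conjugate Beta(1, 1) prior on the per-period occupancy probability.
post_before = stats.beta(1 + before_k, 1 + before_n - before_k)
post_after = stats.beta(1 + after_k, 1 + after_n - after_k)

# Bayesian proportion test: posterior probability that occupancy dropped
# after the intervention, estimated by Monte Carlo over both posteriors.
draws = 50_000
p_drop = np.mean(post_before.rvs(draws, random_state=rng)
                 > post_after.rvs(draws, random_state=rng))
print(f"P(activity decreased after pruning) = {p_drop:.3f}")
```

Repeating this per species and per practice is what would single out the management-sensitive indicator species the abstract aims to identify.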

 

Keywords: passive acoustic monitoring, automatic species identification, agricultural management practices, indicator species, tea garden, socio-ecological systems

How to cite: Chen, Y.-C. and Lin, K.-H.: Using Passive Acoustic Monitoring to Identify Avian Indicators for Reflecting Agricultural Management Practices: A Case Study in Tea Garden of Pinglin, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18077, https://doi.org/10.5194/egusphere-egu26-18077, 2026.

X1.64
|
EGU26-6972
|
ECS
Diego Quintero, Vasileios Sitokonstantinou, Jens A. Andersson, and Ioannis N. Athanasiadis

Smallholder farmers are responsible for 69% of the food produced in Tanzania, yet their productivity remains constrained by low soil fertility and limited economic access to inputs. While fertilizers are essential for achieving higher yields, suboptimal management can lead to environmental degradation and economic losses for the farmer. Therefore, optimizing the agronomic efficiency of fertilizers, specifically the question of the ideal dose and timing, is critical for the sustainable intensification of smallholder agriculture.  While on-farm field experiments are the gold standard to address this question, they are often prohibitively expensive, labor-intensive, geographically limited, and unable to account for farmer management differences. Causal Machine Learning offers a robust alternative that uses observational data by integrating the rigor of causal inference with the flexibility of Machine Learning. This approach is designed to overcome the selection bias present in observational data and some of the restrictive assumptions of standard statistical approaches. 
In this study, we analyze observational survey data from smallholder maize farmers in Tanzania (2023-24 season) using a Double Machine Learning approach to estimate conditional average treatment effects, identifying how Nitrogen and Phosphorus fertilizer response varies across different dose and timing regimes. Our findings show an average agronomic efficiency of 18 kg grain/kg of applied Nitrogen and 60 kg grain/kg of applied Phosphorus, results that closely align with established benchmarks from regional field trials. More importantly, our model captures management-driven heterogeneity. The results demonstrate that split applications of Nitrogen (at planting/emergence and twice before silking) are more likely to provide higher efficiencies, while Phosphorus reaches peak efficiency when applied during the earliest development stage. Furthermore, the estimated dose-response curves exhibit characteristic diminishing returns, showcasing the framework's ability to recover complex non-linear biophysical patterns. The successful recovery of these well-known agronomic insights from noisy observational data validates the Causal Machine Learning framework for this specific context and demonstrates its potential to address increasingly complex agronomic challenges, utilizing existing datasets to identify site-specific patterns that provide a robust foundation for personalized optimal management.
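The authors estimate conditional effects; as an illustrative sketch of the core cross-fitting idea behind Double Machine Learning (partialling-out for a single average effect, with synthetic data and random-forest nuisance models as assumptions, not the authors' exact setup):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def dml_ate(X, T, Y, n_splits=5, seed=0):
    """Partially linear Double ML: cross-fitted residual-on-residual regression."""
    rT = np.zeros_like(T, dtype=float)
    rY = np.zeros_like(Y, dtype=float)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        mT = RandomForestRegressor(n_estimators=100, random_state=seed).fit(X[train], T[train])
        mY = RandomForestRegressor(n_estimators=100, random_state=seed).fit(X[train], Y[train])
        rT[test] = T[test] - mT.predict(X[test])   # residualize treatment on confounders
        rY[test] = Y[test] - mY.predict(X[test])   # residualize outcome on confounders
    return LinearRegression(fit_intercept=False).fit(rT[:, None], rY).coef_[0]

# synthetic check: true efficiency of 18 kg grain per kg N, with confounded dosing
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))                    # confounders (e.g. soil fertility)
T = 2.0 * X[:, 0] + rng.normal(size=2000)         # dose depends on confounders
Y = 18.0 * T + 5.0 * X[:, 0] + X[:, 1] + rng.normal(size=2000)
ate = dml_ate(X, T, Y)                            # should recover ~18
```

Because dosing depends on soil fertility, a naive regression of Y on T would be biased; the cross-fitted residualization removes that selection bias.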

How to cite: Quintero, D., Sitokonstantinou, V., Andersson, J. A., and Athanasiadis, I. N.: A Causal Machine Learning approach for estimating the heterogeneous effects of fertilizer dose and timing on smallholder maize yields in Tanzania, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6972, https://doi.org/10.5194/egusphere-egu26-6972, 2026.

X1.65
|
EGU26-8816
|
ECS
Elakkiyaa Thiyagarajan Logambal and Debsunder Dutta

Soil properties such as soil organic carbon (SOC) and clay content are key indicators of ecosystem functioning, land degradation, and environmental change. Advances in spaceborne hyperspectral remote sensing enable new possibilities for large-scale soil property monitoring. However, differences in sensor characteristics, acquisition conditions, and surface heterogeneity continue to limit the transferability of retrieval models across regions and observation systems. This study investigates the role of spectral preprocessing in improving the transferability of soil property estimation using multi-source spectral data. We evaluate continuum removal (CR) and first-derivative (FD) transformations to improve the interpretability and alignment of diagnostic soil absorption features in laboratory and satellite reflectance spectra. Using different spectral datasets, we assess the impact of preprocessing on feature comparability, predictive performance, and robustness under varying data distributions. We further examine how spectral heterogeneity and distribution shifts influence model generalization. Our results demonstrate that robust preprocessing improves the comparability of spectral features and strengthens model transferability. These findings highlight the importance of sensor-independent preprocessing strategies for reliable and scalable soil property mapping using multi-source remote sensing data in environmental monitoring applications.
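As a hypothetical sketch of the two preprocessing transforms on a toy spectrum (wavelengths and reflectances are made up; the upper-convex-hull continuum and the numerical first derivative are standard formulations, not necessarily the authors' exact implementations):

```python
import numpy as np

def first_derivative(wl, refl):
    # spectral first derivative (per-band slope w.r.t. wavelength)
    return np.gradient(refl, wl)

def continuum_removal(wl, refl):
    # build the upper convex hull of the spectrum, then divide reflectance by it
    hull = []
    for p in zip(wl, refl):
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop the last hull point if it falls on or below the chord to p
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    continuum = np.interp(wl, hx, hy)
    return refl / continuum

wl = np.array([2100.0, 2150.0, 2200.0, 2250.0, 2300.0])   # nm, around a clay feature
refl = np.array([0.50, 0.45, 0.30, 0.48, 0.55])
cr = continuum_removal(wl, refl)   # endpoints normalize to 1, absorption dip preserved
fd = first_derivative(wl, refl)
```

Normalizing each spectrum against its own continuum is what makes absorption-feature depths comparable across sensors with different overall brightness levels.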

How to cite: Thiyagarajan Logambal, E. and Dutta, D.: Towards Transferable Soil Property Estimation from Multi-Source Spectral Data, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-8816, https://doi.org/10.5194/egusphere-egu26-8816, 2026.

X1.66
|
EGU26-15448
Lian Song, Chuang Cai, Zhengjun Wang, Hao Chen, Jiahui Yuan, Rui Wang, Yang Liu, Qian Zhang, and Yu Wang

Biochar application and straw return are widely promoted as sustainable fertilization practices to enhance crop production, yet their impacts on growth processes, structural traits and physiological functioning remain insufficiently quantified, particularly from a canopy-scale perspective. During the 2025 rice-growing season, we conducted a field experiment in Yixing, located in the lower Yangtze River region, to investigate rice functional responses to biochar and straw return and to evaluate the capability of sun-induced chlorophyll fluorescence (SIF) to detect these responses.

The experiment included conventional fertilization as a control, three biochar application rates (0.10%, 0.50%, and 1.00%), and partial straw return. Biochar and straw return substantially enhanced root biomass (by 25–37%) and leaf area index (by 10–31%) across key growth stages, indicating improved resource acquisition capacity and canopy development. These belowground-driven changes translated into increased aboveground biomass accumulation, particularly before heading, and higher panicle density, contributing to yield formation. At the same time, biochar application increased canopy temperature and reduced leaf chlorophyll content, suggesting altered nitrogen distribution and canopy energy balance under intensified growth conditions. High biochar application reduced grain filling percentage, indicating that productivity gains are constrained by physiological regulation during reproductive stages.

To characterize canopy-scale functional dynamics, unmanned aerial vehicle (UAV) campaigns were conducted at jointing, heading, and grain-filling stages to acquire SIF observations. SIF showed strong sensitivity to management-induced differences in canopy structure, biomass accumulation, and phenological progression, consistently reflecting treatment effects across growth stages. Importantly, SIF captured both enhanced canopy function under moderate biochar and straw return and constrained physiological performance under excessive application, demonstrating its ability to integrate multiple plant functional responses.

Our results show that biochar and straw return regulate rice productivity through coordinated changes in root development, canopy structure, and physiological functioning. UAV-based SIF provides an effective, non-destructive approach to monitor these management-driven functional responses, offering new opportunities to link field experiments with larger-scale assessments of sustainable agricultural practices.

How to cite: Song, L., Cai, C., Wang, Z., Chen, H., Yuan, J., Wang, R., Liu, Y., Zhang, Q., and Wang, Y.: Rice Functional Responses to Biochar and Straw Return in the Lower Yangtze Region Revealed by UAV-based Sun-Induced Chlorophyll Fluorescence, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-15448, https://doi.org/10.5194/egusphere-egu26-15448, 2026.

X1.67
|
EGU26-18055
|
ECS
Jonathan Renkel, Johannes Löw, Mike Teucher, and Christopher Conrad

Trees on agricultural land are key structural components of agroecosystems, contributing to essential ecosystem services like microclimate regulation, erosion control, biodiversity conservation, and the mitigation of climate-induced abiotic stresses, thereby enhancing the resilience of agricultural landscapes. However, existing inventories are often outdated, incomplete, and lack the spatial resolution necessary for in-depth analysis and effective decision-making. 
Therefore, we apply a semantic segmentation approach based on the U-Net architecture to quantify the current spatial distribution of trees on agricultural lands across southern Saxony-Anhalt (approximately 4,000 km²). The model is based on official digital orthophotos (DOP) with 20 cm spatial resolution and a spectral resolution of four channels (RGBI).
Given the large study area and the long revisit interval of aerial imagery, we further evaluate model performance across different acquisition dates, ranging from the beginning of the 2023 vegetation period (30 April, spring) to the peak of the 2024 vegetation period (29 August, late summer).
Training data generation uses a semi-automatic workflow: a normalized surface model is clipped into 512×512-pixel tiles, filtered to retain objects >4 m in height, and masked to exclude impervious surfaces. This produces 7,894 tiles containing 14,360 annotated features, which are manually verified against true-color imagery. An independent test set is created through manual digitization of agricultural trees, stratified by image acquisition date. Model performance is evaluated using Precision, Recall, F1-score, and Intersection over Union (IoU).
The dataset is split 70/30 for training/validation. Input data includes four channels (RGBI) and the Normalized Difference Vegetation Index (NDVI) as a fifth channel. Data augmentation applies random horizontal/vertical flips and rotations (±15°). The U-Net model is trained using focal Tversky loss (weighted to penalize both false positives and negatives) and the Adam optimizer with default learning rate.
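The abstract states the loss is weighted to penalize both false positives and negatives but does not give the weights; a minimal NumPy sketch of the focal Tversky loss with assumed alpha/beta/gamma values:

```python
import numpy as np

def focal_tversky_loss(y_true, y_prob, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for a binary mask (alpha weights FN, beta weights FP).

    The alpha/beta/gamma defaults here are common choices, not the study's values.
    """
    tp = np.sum(y_true * y_prob)
    fn = np.sum(y_true * (1 - y_prob))
    fp = np.sum((1 - y_true) * y_prob)
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma

mask = np.array([[1, 1], [0, 0]], dtype=float)
perfect = focal_tversky_loss(mask, mask)        # -> 0.0
worst = focal_tversky_loss(mask, 1 - mask)      # approaches 1.0
```

Setting alpha > beta pushes the model to recover small or partly occluded tree crowns (fewer false negatives) at the cost of some over-segmentation; gamma < 1 further focuses training on hard tiles.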
Lowest model errors were reached after 48 epochs. The best-performing model is selected and subsequently applied to each DOP tile intersecting the study area, resulting in predictions for 1,182 DOP tiles. First validation results on approximately 8,000 reference polygons show an average F1-score of 0.5, which is comparable to recent studies.
A total area of 195 km² of trees on agricultural land is mapped. Despite the heterogeneity of acquisition dates, the model produces accurate segmentations and successfully identifies trees on agricultural land in different compositions. The results indicate that semi-automatic training data generation can compensate for seasonal variability in aerial images, which often hinders the application of deep learning models at larger spatial scales.

How to cite: Renkel, J., Löw, J., Teucher, M., and Conrad, C.: Mapping Trees on Agricultural Land Using U-Net Semantic Segmentation from Multitemporal RGBI Orthophotos in Southern Saxony-Anhalt, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-18055, https://doi.org/10.5194/egusphere-egu26-18055, 2026.

X1.68
|
EGU26-22005
|
ECS
Ajay Gautam, Bernardo Candido, Ushasree Mindala, Vandana Darapaneni, Kayan Baptista, Ellen Herring, Dan Evans, and Robert Kallenbach

Accurate pasture biomass prediction is central to precision grazing and sustainable land management. This study presents a multi-source biomass prediction model for a Mid-Missouri test-site pasture, integrating field-based proximal height sensing, multispectral satellite-derived vegetation indices, and weather variables from 2024–2025. A ridge regression framework with L2 regularization addressed predictor multicollinearity, with cross-validated tuning yielding an R² of 0.92 and a mean absolute error of 388 kg/ha, representing an approximately 50 percent improvement over height-only models. These results confirm the effectiveness of fusing proximal, spectral, and meteorological data for paddock-scale biomass estimation. Further gains in prediction accuracy can be achieved through systematic expansion of the predictor space within the existing multi-source framework. Incorporation of synthetic aperture radar (SAR) metrics from Sentinel-1, including backscatter coefficients and spatial texture measures derived from gray-level co-occurrence matrices, is expected to improve sensitivity to canopy structure, surface roughness, and moisture dynamics while maintaining robustness under cloud cover. In addition, terrain-based variables, including elevation and slope, will further explain spatial variability in pasture growth. This integrated framework is expected to reduce residual uncertainty, improve model stability across seasons, and enhance species-specific calibration, providing a scalable foundation for highly accurate pasture biomass prediction and advancing sustainable pasture management practices.
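A hedged sketch of a cross-validated ridge pipeline on synthetic stand-ins for the three predictor sources (all variable names, values, and coefficients below are invented for illustration, not the study's data):

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 300
height = rng.uniform(5, 30, n)                         # proximal sward height, cm
ndvi = 0.3 + 0.02 * height + rng.normal(0, 0.03, n)    # deliberately collinear with height
rain = rng.uniform(0, 60, n)                           # weekly rainfall, mm
X = np.column_stack([height, ndvi, rain])
biomass = 120 * height + 2000 * ndvi + 8 * rain + rng.normal(0, 150, n)  # kg/ha

# standardize predictors, then tune the L2 penalty by cross-validation
model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-3, 3, 25)))
model.fit(X, biomass)
r2 = model.score(X, biomass)
```

The L2 penalty shrinks the correlated height and NDVI coefficients jointly instead of letting them trade off against each other, which is the multicollinearity problem the abstract describes.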

How to cite: Gautam, A., Candido, B., Mindala, U., Darapaneni, V., Baptista, K., Herring, E., Evans, D., and Kallenbach, R.: Advancing pasture biomass prediction with integrated proximal, multispectral, topographic and SAR data fusion, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-22005, https://doi.org/10.5194/egusphere-egu26-22005, 2026.

X1.69
|
EGU26-12595
|
ECS
Haoran Meng, Joel Segarra, Shawn Carlisle Kefauver, and José Luis Araus Ortega

Large-scale and highly accurate wheat yield prediction is of great importance for ensuring food security, supporting agricultural policymaking, and guiding grain allocation. In recent years, the rapid development of remote sensing technologies and deep learning algorithms has provided powerful tools for large-scale crop yield prediction. However, crop yield is jointly influenced by multiple environmental factors, such as climate, soil, and topography. Existing studies often adopt simple feature concatenation or fixed-weight fusion strategies, lacking adaptive modeling of relative modality importance, which limits further improvement in prediction accuracy. To address this issue, this study proposes a Transformer-based multimodal adaptive Gated Fusion model (TMMGF). The model employs Transformers to model the dynamic time series of remote sensing spectral data and climate variables, and applies multilayer perceptrons (MLPs) to static environmental factors, including soil and topography. The multiple modalities are then integrated through a gated fusion mechanism to achieve adaptive weighted fusion. This study was conducted across the conterminous United States, based on county-level winter wheat yield records from 2008 to 2023. The TMMGF was systematically compared with an LSTM-based multimodal adaptive Gated Fusion model (MMGF), a Transformer single-modal remote sensing model, a Transformer single-modal climate model, an MLP single-modal soil model, and an MLP single-modal topography model. The results show that TMMGF achieves the best performance, with an average R² of 0.813, RMSE of 0.571 t/ha, and MAPE of 14.49% in 10-fold cross-validation, significantly outperforming the baseline models. In particular, compared with the LSTM-based multimodal model MMGF (R² = 0.796, RMSE = 0.598 t/ha, MAPE = 15.11%), TMMGF shows clear advantages in both accuracy and stability.
This study demonstrates that a Transformer-based adaptive multimodal fusion framework can effectively integrate heterogeneous data sources and provides a promising technical pathway for high-accuracy large-scale wheat yield prediction.
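A minimal NumPy sketch of one possible gated-fusion formulation (a softmax gate over per-modality embeddings; the gate parameterization below is an assumption for illustration, not the TMMGF architecture):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gated_fusion(embeddings, W_g, b_g):
    """Adaptively weight per-modality embeddings with a learned softmax gate.

    embeddings: list of k same-length vectors; W_g: (k, k*d); b_g: (k,).
    """
    h = np.concatenate(embeddings)          # joint representation of all modalities
    gates = softmax(W_g @ h + b_g)          # one adaptive scalar weight per modality
    fused = sum(g * e for g, e in zip(gates, embeddings))
    return fused, gates

d, k = 4, 3                                  # embedding dim, number of modalities
rng = np.random.default_rng(0)
emb = [rng.normal(size=d) for _ in range(k)] # e.g. remote sensing, climate, soil
W_g, b_g = rng.normal(size=(k, k * d)), np.zeros(k)
fused, gates = gated_fusion(emb, W_g, b_g)
```

Because the gate is computed from the inputs themselves, the relative weight of each modality can vary sample by sample, unlike fixed-weight or concatenation fusion.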

How to cite: Meng, H., Segarra, J., Kefauver, S. C., and Araus Ortega, J. L.: Transformer-Based Adaptive Multimodal Fusion Model for Remote Sensing Winter Wheat Yield Prediction, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-12595, https://doi.org/10.5194/egusphere-egu26-12595, 2026.

X1.70
|
EGU26-8435
|
ECS
Pengfei Tang, Youngryel Ryu, Shaoyu Wang, Ryoungseob Kwon, and Kyungdo Lee

Achieving reliable in-season soybean maps is challenging in heterogeneous and data-poor agricultural landscapes because of domain discrepancies and limited reference data. Traditional vegetation index methods and supervised machine-learning approaches often lack robustness for early-season prediction, while conventional unsupervised domain adaptation (UDA) typically requires access to source-domain data, increasing computational and data-sharing burdens. In this study, we introduce a source-free UDA framework, Contrastive Representation Optimized Prototype Segmentation (CROPS), for large-scale, early-season soybean mapping without relying on source data. CROPS utilizes NDVI-Max composites from the QualityMosaic method to emphasize peak vegetation signals, reduce noise and redundancy, and simplify preprocessing. A pixel-wise entropy partitioning strategy identifies high- and low-confidence regions, enabling curriculum-based optimization within a teacher-student architecture enhanced by Exponential Moving Average (EMA). Extensive experiments across the USA, China, Brazil, and Argentina demonstrate that CROPS consistently surpasses traditional indices, supervised classifiers, and established UDA methods. At the end of the season, CROPS achieved average macro F1 scores exceeding 92%, closely matching official agricultural statistics. Importantly, in South America, CROPS enables reliable early-season mapping, with macro F1 above 90% in Brazil by mid-January and over 80% in Argentina by late January. In the US Midwest, where the spectral similarity between soybean and maize makes accurate classification particularly challenging during the growing season, CROPS achieves robust in-season mapping for both crops. Ablation experiments reveal that this strong performance is primarily attributed to the NDVI-Max composites' ability to capture key phenological features and to the progressive self-adaptive learning process, in which high-confidence target-domain samples iteratively guide the low-confidence ones. This strategy avoids negative transfer from source data and enhances adaptation to local characteristics. These findings underscore the potential of CROPS as a timely, accurate, and scalable solution for crop mapping in complex, data-limited environments.
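Two ingredients named above, pixel-wise entropy partitioning and the EMA teacher update, can be sketched as follows (the threshold, momentum value, and toy probability map are assumptions for illustration):

```python
import numpy as np

def pixel_entropy(probs):
    """Per-pixel entropy of softmax probability maps, probs shape (C, H, W)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=0)

def partition(probs, tau):
    # low-entropy (high-confidence) pixels seed the early self-training rounds
    ent = pixel_entropy(probs)
    return ent < tau, ent

def ema_update(teacher, student, momentum=0.999):
    """EMA teacher weights: theta_t <- m * theta_t + (1 - m) * theta_s."""
    return {k: momentum * teacher[k] + (1 - momentum) * student[k] for k in teacher}

# toy 2-class (soybean / other) probability map over a 2x2 patch
probs = np.stack([np.full((2, 2), 0.9), np.full((2, 2), 0.1)])
mask, ent = partition(probs, tau=0.5)        # confident pixels -> pseudo-labels

teacher = {"w": np.array([1.0])}
student = {"w": np.array([0.0])}
teacher = ema_update(teacher, student)       # teacher drifts slowly toward student
```

Because the teacher is a slow moving average of the student, its pseudo-labels stay stable while the student adapts, which is what makes the curriculum robust without source data.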

How to cite: Tang, P., Ryu, Y., Wang, S., Kwon, R., and Lee, K.: A Source-Free Unsupervised Domain Adaptation Framework for Large-scale, in-season Soybean Mapping, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-8435, https://doi.org/10.5194/egusphere-egu26-8435, 2026.

X1.71
|
EGU26-20696
|
ECS
Chaoqun Zheng, Baojian Wu, Weihua Feng, Jianwei Wang, Yongsheng Wang, Hanghang Liu, and Leyi Zhang

Accurate mapping of high-value economic crops, such as tobacco, in complex mountainous regions is essential for sustainable precision agriculture and regional land-use management. However, identifying tobacco plots remains challenging due to spectral confusion among objects and insufficient segmentation accuracy in complex terrains encountered in traditional tobacco remote sensing image semantic segmentation. This study presents a deep learning framework designed to overcome these limitations by synergizing Unmanned Aerial Vehicle (UAV) imagery with multi-temporal satellite data.

We propose a novel semantic segmentation model. Specifically, by introducing a channel-spatial attention module, we enhance the feature discrimination between tobacco plants and background crops/bare land; by incorporating an adaptive convolution module, we improve the model's adaptability to complex terrains. To validate the model's performance, a dedicated semantic segmentation dataset for tobacco remote sensing imagery was constructed. Results on this dataset demonstrate that the proposed model outperforms mainstream segmentation models such as U-Net and DeepLabv3+, achieving an improvement of 5% in mean Intersection over Union (mIoU).

The framework offers a scalable, automated solution for monitoring economic crops in heterogeneous environments, providing critical spatial intelligence for crop yield estimation and agricultural policy-making in challenging mountainous terrains.

How to cite: Zheng, C., Wu, B., Feng, W., Wang, J., Wang, Y., Liu, H., and Zhang, L.: Mapping the Spatial Distribution of Tobacco using Multi-modal Satellite Imagery and Deep Learning, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-20696, https://doi.org/10.5194/egusphere-egu26-20696, 2026.

X1.72
|
EGU26-6378
|
ECS
Zhican Li

Sugarcane is a vital sugar crop globally, yet its large-scale remote sensing monitoring is often hindered by the high costs of field sampling and the difficulty of reusing historical data across regions. Due to variations in climatic conditions and planting practices, sugarcane exhibits significant spatiotemporal phenological shifts across regions, causing a sharp decline in the accuracy of traditional supervised classification models when applied cross-regionally. To address this challenge, this study proposes a Phenology-Constrained Joint Distribution Adaptation (PC-JDA) method that integrates biological mechanisms with transfer learning. Building upon the standard JDA algorithm, we introduce prior phenological knowledge as a constraint mechanism. Specifically, we utilize Dynamic Time Warping (DTW) to quantify phenological similarities between the source and target domains. Furthermore, during the iterative optimization process of JDA, standard NDVI time-series curves of sugarcane are employed to screen and correct the pseudo-labels generated for the target domain, thereby mitigating negative transfer effects. Experiments transferring from Fusui (source) to Xuwen (target) demonstrate that this method effectively aligns the feature distributions of sugarcane between regions. It significantly improves identification accuracy in the target domain without labeled samples, providing a feasible and cost-effective solution for cross-regional crop mapping.
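A standard DTW distance is one plausible way to quantify phenological similarity between regional NDVI curves; the Gaussian toy curves below stand in for real NDVI time series and are not the study's data:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D time series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 1, 20)
source = np.exp(-((t - 0.5) ** 2) / 0.02)        # source-region NDVI curve
target = np.exp(-((t - 0.6) ** 2) / 0.02)        # same crop, later phenology
d_dtw = dtw_distance(source, target)
d_euc = float(np.sum(np.abs(source, target := np.abs(source - target))[1].ravel())) if False else float(np.sum(np.abs(source - np.exp(-((t - 0.6) ** 2) / 0.02))))
```

Because DTW warps the time axis, a pure phenological shift between regions yields a small distance, whereas a point-by-point comparison penalizes the shift heavily.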

How to cite: Li, Z.: Cross-Regional Sugarcane Identification via Phenology-Constrained Joint Distribution Adaptation Method, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-6378, https://doi.org/10.5194/egusphere-egu26-6378, 2026.

X1.73
|
EGU26-2180
Kwang-Hyung Kim, Noh-Hyun Lee, and Wonjae Jeong

Brassica rapa, known as Kimchi cabbage, is an important cash crop in South Korea. However, climate change has inflicted major abiotic stresses on cabbage production, resulting in physiological effects that often decrease yield and quality. To overcome these challenges, the effects of individual stresses on cabbage production must be investigated through simulation modeling and other approaches. In this study, we aim to clarify the historical and future patterns of abiotic stress to assess its effects on cabbage production in Korea. To this end, different stress index models were adopted and compared to estimate the occurrence patterns of each abiotic stress and assess their impacts on cabbage production. Our machine-learning modeling analyses revealed that approximately 62% of the variation in historical cabbage productivity can be attributed to individual abiotic stresses. The relative impact of each stress on productivity has not changed significantly over the past 40 years (1981–2020), with slight increasing or decreasing trends in major stresses. Among the abiotic stresses, low-temperature injury and wetness stress have affected cabbage productivity most strongly through 2020, followed by drought, high-temperature injury (HTI), and frost stresses. Projections based on future climate change scenarios suggest a substantial increase in HTI stress, surpassing the levels observed over the past 40 years, while other stressors are expected to persist at similar levels or change only slightly. This study underscores the increasing need to effectively manage these stressors, particularly those that have a greater impact on productivity and are projected to exceed their historical ranges, in order to ensure the successful future production of cabbage in Korea.

How to cite: Kim, K.-H., Lee, N.-H., and Jeong, W.: Effect of abiotic stresses on Brassica rapa production in Korea: Learning from history to better prepare for the future impacts of climate change, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-2180, https://doi.org/10.5194/egusphere-egu26-2180, 2026.

X1.74
|
EGU26-4894
|
ECS
A spatially explicit negative‑prognosis framework for Cercospora leaf spot using remotely sensed leaf area index
(withdrawn)
Rene Heim, Paul Melloy, Facundo Ramón Ispizua Yamati, Nathan Okole, Alexey Mikaberidze, and Anne-Katrin Mahlein

Posters virtual: Tue, 5 May, 14:00–18:00 | vPoster spot 2

The posters scheduled for virtual presentation are given in a hybrid format for on-site presentation, followed by virtual discussion on Zoom. Attendees are asked to meet the authors during the scheduled presentation & discussion time for live video chats; onsite attendees are invited to visit the virtual poster sessions at the vPoster spots (equal to PICO spots). If authors uploaded their presentation files, these files are also linked from the abstracts below. The button to access the Zoom meeting appears 15 minutes before the time block starts.
Discussion time: Tue, 5 May, 16:15–18:00
Display time: Tue, 5 May, 14:00–18:00

EGU26-11249 | ECS | Posters virtual | VPS5

How Soil Quality Affects Long-Term Rice Productivity 

Saheed Garnaik, Prasanna Kumar Samant, Mitali Mandal, Ravi H Wanjari, Nishant Kumar Sinha, Monoranjan Mohanty, and Narendra Kumar Lenka
Tue, 05 May, 14:54–14:57 (CEST)   vPoster spot 2

Sustaining rice productivity in intensive rice-rice systems requires comprehensive soil management, with diagnosis of key soil physical, chemical, and biological indicators that need attention. In a 16-year long-term experiment (established in 2005-06 and ongoing) of the irrigated double rice system of Eastern India, we investigated the effect of key soil drivers on rice productivity.

The experiment assessed the effects of a control (no N fertilizer application), imbalanced fertilization (N/NP/PK), balanced and recommended NPK and 150% NPK, NPK with lime, micronutrient additions (Zn with/without S or B), and integrated nutrient management with FYM (with/without lime). Composite surface soil samples (0–15 cm) were collected after harvest of the 32nd rice season for evaluation of soil physical, chemical, and biological properties. Rice grain yield after the 32nd season was recorded at 14% grain moisture.

To identify key soil drivers, an interpretable machine learning framework was used, specifically a conditional random forest-based yield model, permutation-based variable importance, and accumulated local effects (ALE) plots. The model explained yield variability well (mean RMSE 305 kg ha-1, R² 0.88, MAE 254 kg ha-1). Variable importance screening highlighted total K, protease and urease activities, and permanganate-oxidizable carbon (POC) as dominant predictors. ALE-based effect sizes suggested these properties accounted for roughly 400 (total K), 250 (protease), 200 (urease), and 140 (POC) kg of yield variability.
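As an illustration of permutation-based variable importance (using sklearn's standard random forest in place of the conditional random forest, on invented data whose effect sizes loosely echo those reported):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)
n = 400
total_k = rng.normal(size=n)              # hypothetical soil total K (standardized)
protease = rng.normal(size=n)             # hypothetical protease activity
inert = rng.normal(size=n)                # soil property with no yield effect
X = np.column_stack([total_k, protease, inert])
yield_kg = 400 * total_k + 250 * protease + rng.normal(0, 50, n)

rf = RandomForestRegressor(n_estimators=200, random_state=7).fit(X, yield_kg)
# importance = drop in score when one predictor is randomly shuffled
imp = permutation_importance(rf, X, yield_kg, n_repeats=10, random_state=7)
```

Shuffling a predictor destroys only its own association with yield, so the score drop ranks drivers without refitting the model, which is how total K, protease, urease, and POC emerge as dominant in the study.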

Overall, the results indicate that potassium dynamics are a primary constraint in intensive rice-rice systems, with risks associated with continuous K mining, and emphasize the importance of routine monitoring of biological activity indicators for long-term sustainability.

Keywords: Conditional random forest; Soil quality index (SQI); Long-term fertilizer application; K-dynamics; Soil enzymes; Cattle manure

How to cite: Garnaik, S., Samant, P. K., Mandal, M., Wanjari, R. H., Sinha, N. K., Mohanty, M., and Lenka, N. K.: How Soil Quality Affects Long-Term Rice Productivity, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-11249, https://doi.org/10.5194/egusphere-egu26-11249, 2026.

EGU26-1693 | Posters virtual | VPS5

Evaluating different methodological approaches for very high spatial resolution mapping of agricultural areas exploiting UAV data: a case study from Greek agricultural site 

Pileas Charisoulis, George P. Petropoulos, Spyridon E. Detsikas, Eleftheria Volianaki, and Antonis Litke
Tue, 05 May, 14:57–15:00 (CEST)   vPoster spot 2

The rapid technological developments of recent years have enabled new methods for acquiring aerial photographs and high-spectral-resolution imagery. In this context, unmanned aerial vehicles (UAVs) offer significant potential for high-resolution Land Use/Land Cover (LULC) mapping, allowing clear distinction between natural and human-made features. UAV-based approaches provide high accuracy, faster data acquisition, and cost-effective solutions for detailed LULC analyses. However, there is fertile ground for evaluating different methodological approaches and testing different algorithms to obtain robust and transferable results. To this end, the present study compares two classification techniques for mapping agricultural areas using multispectral UAV data over a typical agricultural site. The study area consists of crops and agricultural land located near the town of Amygdales, in the regional unit of Grevena. The two techniques are Support Vector Machines (SVM) and Maximum Likelihood Classification (MLC). Overall, results showed that SVM proved more accurate, with an overall accuracy of 79.45% compared to 78.91% for MLC, while both methods achieved a Kappa coefficient of 0.72. The statistical significance of the findings was further confirmed with the McNemar test. The results evidence the capability of both methods to obtain LULC maps at very high spatial resolution. The methodological approach presented herein provides a potentially low-cost solution for mapping agricultural areas at very high spatial resolution that is transferable and reproducible to other locations, offering important pathways for precision agriculture applications. Such information can be of practical value to both farmers and decision-makers in reaching the most appropriate decisions for field management.
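For reference, the reported Kappa coefficient can be computed from a classification confusion matrix as follows (the matrix below is hypothetical, not the study's):

```python
import numpy as np

def cohens_kappa(cm):
    """Kappa from a square confusion matrix (rows: reference, cols: predicted)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed (overall) accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
    return (po - pe) / (1 - pe)

# hypothetical 2-class confusion matrix for illustration
cm = np.array([[45, 5],
               [5, 45]])
kappa = cohens_kappa(cm)   # po = 0.9, pe = 0.5 -> kappa = 0.8
```

Kappa discounts the agreement expected by chance, which is why two classifiers can differ in overall accuracy (79.45% vs 78.91%) yet share the same Kappa of 0.72.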

Keywords: Precision Agriculture, Mapping, UAVs, Classification, Machine Learning, Support Vector Machine, Maximum Likelihood


Acknowledgement

The participation of George P. Petropoulos in this study is financially supported by the ACCELERATE MSCA SE program of the European Union's Horizon research and innovation program under grant agreement No. 101182930.

How to cite: Charisoulis, P., Petropoulos, G. P., Detsikas, S. E., Volianaki, E., and Litke, A.: Evaluating different methodological approaches for very high spatial resolution mapping of agricultural areas exploiting UAV data: a case study from Greek agricultural site, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-1693, https://doi.org/10.5194/egusphere-egu26-1693, 2026.

EGU26-3144 | Posters virtual | VPS5

Assessing the effect of different ground sampling distances for drone-based mapping of fractional cover: a case study from a vineyard field in Northern Greece  

Georgios-Nektarios Tselos, Spyridon E. Detsikas, George P. Petropoulos, Konstantinos Grigoriadis, Vassilios Polychronos, Elisavet-Maria Mamagiannou, Panagiota Balomenou, Dimitrios Ramnalis, and Petros Masouridis
Tue, 05 May, 15:00–15:03 (CEST)   vPoster spot 2

Monitoring the fractional cover of vegetation and bare soil is essential for sustainable land management, soil erosion control, and precision agriculture. However, accurately estimating these fractions from conventional satellite imagery is challenging due to mixed ground cover and limitations such as cloud contamination and coarse spatiotemporal resolution. High-resolution UAV imagery provides an effective solution by capturing fine-scale heterogeneity, enabling the application of spectral mixture modeling techniques to decompose each pixel into proportions of vegetation, bare soil, and other components. Understanding how ground sampling distance (GSD) influences the performance of such mixture models is critical for optimizing UAV-based monitoring strategies and ensuring reliable, quantitative estimates of soil and vegetation fractions for informed land management decisions.

The objective of this study is to evaluate the sensitivity of fractional vegetation estimates to different ground sampling distances (GSDs) derived from unmanned aerial vehicle (UAV) imagery. Multispectral data were acquired using a UAV equipped with RGB, near-infrared, red, and red-edge sensors, flown at altitudes of 40 m, 80 m, and 120 m above ground level. The study area is a heterogeneous vineyard located in Drama, Macedonia, northern Greece. Image acquisition took place on 30 July 2025, under stable atmospheric and illumination conditions.

An object-based image analysis (OBIA) approach was applied to the UAV imagery, and the data were classified into three main land cover classes: photosynthetic vegetation, non-photosynthetic vegetation, and bare soil. Fractional vegetation cover estimates derived at each flight altitude were compared in order to assess the influence of spatial resolution on classification performance and vegetation fraction retrieval. Validation of the classification results was performed using an independent dataset generated through direct photo interpretation, allowing for an objective assessment of accuracy across the different GSDs.
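
Once a classified map exists, the fractional cover of each class is simply its pixel share of the scene. A minimal sketch, using hypothetical class codes (the study's actual OBIA labels are not specified):

```python
import numpy as np

# Hypothetical integer codes for the three cover classes (assumed, not from the study).
PV, NPV, SOIL = 1, 2, 3

def fractional_cover(classified):
    """Return the fraction of each cover class in a classified raster."""
    labels = np.asarray(classified)
    total = labels.size
    return {name: np.count_nonzero(labels == code) / total
            for name, code in [("photosynthetic", PV),
                               ("non_photosynthetic", NPV),
                               ("bare_soil", SOIL)]}

# Toy 2x3 classified map: 3 PV, 1 NPV, 2 bare-soil pixels.
demo = [[1, 1, 2],
        [3, 1, 3]]
demo_fractions = fractional_cover(demo)
print(demo_fractions)  # photosynthetic: 0.5, non_photosynthetic: 1/6, bare_soil: 1/3
```

Comparing such fractions across the 40 m, 80 m, and 120 m flights against photo-interpreted reference data is then a straightforward per-class accuracy comparison.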

This contribution evaluates the effects of different GSDs on the estimation of fractional vegetation cover (FVC) using multispectral UAV imagery over commercial vineyards in Northern Greece. The study highlights the influence of spatial resolution on canopy representation, particularly in young or sparsely developed vineyards, and supports the development of robust UAV-based tools for precision viticulture.

Keywords: UAV, Vineyard, Fractional Vegetation Cover, ACCELERATE

Acknowledgement: This study is supported by the ACCELERATE research project, which has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No. 101182930.

How to cite: Tselos, G.-N., Detsikas, S. E., Petropoulos, G. P., Grigoriadis, K., Polychronos, V., Mamagiannou, E.-M., Balomenou, P., Ramnalis, D., and Masouridis, P.: Assessing the effect of different ground sampling distances for drone-based mapping of fractional cover: a case study from a vineyard field in Northern Greece, EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-3144, https://doi.org/10.5194/egusphere-egu26-3144, 2026.

EGU26-13704 | ECS | Posters virtual | VPS5

Data-driven modelling to quantify soil organic carbon in burnt croplands: An integration of remote sensing and machine learning  

Jayantifull Hoojon, Mukund Narayanan, and Idhayachandhiran Ilampooranan
Tue, 05 May, 15:03–15:06 (CEST)   vPoster spot 2

Stubble burning after harvest is known to degrade soil organic carbon (SOC). However, research on its long-term impacts on SOC is scarce and inconclusive. To address this gap, we introduce a data-driven modeling approach for SOC quantification that integrates remote sensing data with machine learning models to quantify changes in SOC during 2004–2021 across burnt rice areas in Punjab, India. This involved synthesizing the literature to obtain SOC values pre- and post-burning, and intersecting MODIS burnt areas with rice crop maps to identify stubble-burning areas in Punjab from 2004 to 2021. Following this synthesis and identification, MODIS satellite band values were extracted for the synthesized experimental plots on pre- and post-burning dates. Remote sensing indices sensitive to SOC changes, such as NDVI, NBR, RECI, and BSI, were then calculated for the same dates. Using these indices and band values as predictors and literature-derived observed SOC values as response variables, multiple machine learning models were trained, yielding an R² of 0.3. While further efforts are required to improve model accuracy, our study revealed a significant decline in SOC from 2004 to 2018, ranging from 0.1 to 12.5 %, whereas from 2019 to 2021 SOC increased by 0.7 to 7 % across various districts in Punjab. Sangrur, Ludhiana, and Kapurthala showed the most significant decline from 2014 to 2018, whereas Rupnagar, Patiala, and Fatehgarh Sahib exhibited the highest increase in SOC from 2019 to 2021. The decline in SOC could be attributed to accelerated mineralization driven by combustion and to the loss of SOC as CO2 emissions, whereas the increase could be attributed to a reduction in stubble burning and to incomplete combustion of residue, which returns unburnt organic matter to the soil.
These findings highlight the efficacy of integrating remote sensing frameworks with data-driven machine learning models in monitoring SOC and other aspects of soil health.
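
The predictor/response setup described above can be illustrated with a simple least-squares fit: spectral indices computed from band reflectances serve as predictors of SOC, and the coefficient of determination (R²) measures fit quality. All data below are synthetic stand-ins, not the study's MODIS bands or literature-derived SOC values, and the simple linear model is only a placeholder for the multiple machine learning models the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic reflectances standing in for satellite bands (illustrative only).
red, nir, swir = rng.uniform(0.05, 0.5, (3, 200))

# Two of the indices named in the abstract, from their standard band formulas.
ndvi = (nir - red) / (nir + red)
nbr = (nir - swir) / (nir + swir)
X = np.column_stack([np.ones_like(ndvi), ndvi, nbr])

# Synthetic "observed" SOC with noise, mimicking literature-derived values.
soc = 1.2 + 0.8 * ndvi - 0.3 * nbr + rng.normal(0.0, 0.3, ndvi.size)

# Ordinary least-squares fit and coefficient of determination (R^2).
coef, *_ = np.linalg.lstsq(X, soc, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((soc - pred) ** 2) / np.sum((soc - soc.mean()) ** 2)
print(round(float(r2), 2))
```

In practice one would swap the linear fit for tree ensembles or other learners and evaluate R² on held-out plots rather than the training data.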

How to cite: Hoojon, J., Narayanan, M., and Ilampooranan, I.: Data-driven modelling to quantify soil organic carbon in burnt croplands: An integration of remote sensing and machine learning , EGU General Assembly 2026, Vienna, Austria, 3–8 May 2026, EGU26-13704, https://doi.org/10.5194/egusphere-egu26-13704, 2026.
