Description:
Join our Copernicus LPS22 Breakfast, hosted by ESA and the European Commission, to catch up on the latest Copernicus news and kick off the LPS22 Copernicus Day.
Description:
Join the ESA Climate Office for breakfast to kick off Climate Day at the LPS. Network with experts across the Earth observation, climate science and modelling communities, and find out about the latest advances in satellite technology and in science-based information for climate services and decision-making. We will highlight what to expect in the climate sessions on the day's LPS agenda.
The European Space Agency's (ESA) wind mission, Aeolus, was launched on 22 August 2018. It is the fifth of ESA's Earth Explorer missions, and its main objective is to demonstrate the potential of Doppler Wind Lidar (DWL) in space for improving weather forecasts and for understanding the role of atmospheric dynamics in climate variability. Aeolus carries a single instrument, the Atmospheric LAser Doppler INstrument (ALADIN): a high-spectral-resolution DWL operating at 355 nm, the first of its kind to be flown in space.
Aeolus provides profiles of single horizontal line-of-sight (HLOS) winds (the primary product) in near-real-time (NRT), as well as profiles of atmospheric backscatter and extinction. The instrument samples the atmosphere from about 30 km down to the Earth's surface, or down to optically thick clouds. The required precision of the wind observations is 1-2.5 m/s in the troposphere and 3-5 m/s in the stratosphere, while the systematic error requirement is less than 0.7 m/s. Mission spin-off products include information about aerosol and cloud layers. The satellite flies in a polar dusk/dawn orbit (6 am/pm local time ascending node), completing ~16 orbits per 24 hours with an orbit repeat cycle of 7 days. Global science payload data acquisition is guaranteed through the combined use of the Svalbard and Troll X-band receiving stations.
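For illustration, the HLOS wind is simply the projection of the horizontal wind vector onto the instrument's horizontal line of sight. A minimal sketch follows; the azimuth and sign conventions are assumptions of this example, not the official processor definition.

```python
import numpy as np

def hlos_wind(u: float, v: float, azimuth_deg: float) -> float:
    """Project a horizontal wind vector onto the horizontal line of sight.

    u, v        : zonal (eastward) and meridional (northward) wind in m/s
    azimuth_deg : azimuth of the line of sight, measured clockwise from
                  north (a convention assumed for this sketch)
    """
    az = np.deg2rad(azimuth_deg)
    # Positive HLOS wind is taken here as blowing away from the instrument.
    return u * np.sin(az) + v * np.cos(az)

# Example: a 10 m/s westerly seen at a 100 deg line-of-sight azimuth
print(hlos_wind(10.0, 0.0, 100.0))  # ~9.85 m/s
```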
After more than three years in orbit, and despite some performance issues with its ALADIN instrument, Aeolus has achieved its objectives and its design lifetime. A positive impact on weather forecasts has been demonstrated by multiple NWP centres worldwide. Four of the main European meteorological centres now assimilate Aeolus winds operationally, thanks to the excellent data timeliness and the continuous improvement of the ground processors.
The status of the Aeolus mission will be presented, including overall performance, planned operations and exploitation. The paper will also report on programmatic highlights and the challenges ahead until the end of the operational lifetime.
The European Space Agency's (ESA) Aeolus satellite was launched on 22 August 2018 from the Centre Spatial Guyanais in Kourou, French Guiana. The first atmospheric returns and wind measurements were retrieved within the first two weeks of operations after launch. Aeolus has provided wind measurements continuously since then, with only limited outages in availability. Aeolus successfully reached its nominal mission lifetime at the end of November 2021, and it has been agreed to extend mission operations until the end of 2022.
It was discovered that the atmospheric return signals from the ALADIN instrument were lower than expected before launch by a factor of 2 to 3. Extensive investigations concluded that the most likely cause of the signal loss was over-illumination of the 88 µm diameter instrument field stop. In addition to this initial loss, the output energy of the first flight laser transmitter (FM-A) was found to be decreasing. An investigation showed that this decrease was due to a progressive misalignment of the laser master oscillator. The decrease in the FM-A UV energy, and the resulting degradation of the wind measurement random error, led to the decision to switch to the second flight laser (FM-B) in June 2019. The FM-B laser has performed well in the subsequent 2.5 years of operations, retaining UV output energies above 60 mJ, and has recently provided UV output energies around 90 mJ, corresponding to the levels reached in the on-ground thermal vacuum performance tests.
Despite the good performance of the FM-B transmitter, losses in the atmospheric return signal were again evident. These losses were eventually traced to the emit path of the instrument (i.e. the optics between the laser transmitter and the telescope output), largely thanks to independent measurements of the ALADIN emit energy by the Pierre Auger Observatory in Argentina, which indicated a roughly 50% reduction in emit energy between 2019 and 2022. This has led to recent interest in reverting to the FM-A transmitter (correcting for its misalignment) or in lowering the Aeolus altitude from the current 320 km to 255 km, in order to improve the signal returns and extend the mission duration.
Several sources of systematic bias in the wind measurements were also discovered. Pixels with slightly increased dark-current levels in the memory zone of the Accumulation Charge-Coupled Devices (ACCDs), so-called "hot pixels", were found, with a new hot pixel arising approximately every 2-3 weeks. A novel pseudo-dark-current correction method was employed to correct for these hot pixels. An important sub-orbital bias, caused by differing background illumination of the ALADIN telescope due to albedo variations, was also discovered; it was mitigated by a correction utilising the ALADIN telescope temperatures.
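The published pseudo-dark-current approach is considerably more elaborate, but the core idea of subtracting a per-pixel dark estimate obtained from dedicated dark measurements can be sketched as follows; the array shapes and the use of a median are illustrative assumptions of this sketch.

```python
import numpy as np

def correct_dark_current(signal: np.ndarray, dark_frames: np.ndarray) -> np.ndarray:
    """Subtract a per-pixel dark-current estimate from measured signals.

    signal      : (n_obs, n_pixels) raw accumulated counts
    dark_frames : (n_dark, n_pixels) counts from dedicated dark measurements
    """
    # A median over dark frames is robust to outliers and captures pixels
    # with elevated dark current ("hot pixels").
    dark_estimate = np.median(dark_frames, axis=0)
    return signal - dark_estimate
```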
Aeolus data have been extensively analysed by a number of meteorological centres and found to have a positive impact on NWP forecasts, particularly in the tropics and polar regions. These very positive results, along with the successful in-orbit demonstration of the measurement concept and the associated technologies used on Aeolus, resulted in a statement of interest from EUMETSAT in a future operational DWL mission in the 2030 to mid-2040s timeframe.
This paper will summarise the performance of Aeolus' ALADIN instrument over its 3+ years of operations, detailing the issues described briefly above as well as the mitigation actions taken, and drawing lessons learned for a future operational Doppler wind lidar mission.
After more than three years of operations of ESA's wind mission, launched on 22 August 2018, the Aeolus Payload Data Ground Segment (PDGS) continues to ensure global X-band data acquisition through the combined use of the Svalbard and Troll ground stations, seamless mission-planning operations, uninterrupted systematic science data production in Near Real Time (within 3 hours of sensing), and easy data access and discovery.
The Payload Data Ground Segment is based on a distributed architecture: the first level of processing, up to the preliminary wind observations and the scientific aerosol/cloud profile products, is combined with the telemetry acquisition service, hosted and operated by KSAT in Norway. The second level of processing, including the scientific wind observations, is performed by the European Centre for Medium-Range Weather Forecasts (ECMWF) in England for further assimilation and use by multiple Numerical Weather Prediction (NWP) centres worldwide.
Aeolus data are made available to meteorological and expert users via the ESA Earth Observation Gateway through a dedicated dissemination system, and are archived for data preservation. Data quality monitoring, calibration and scientific processor evolution are performed by an international expert consortium, the Aeolus DISC (Data, Innovation, and Science Cluster).
The PDGS team at the ESA/ESRIN centre in Frascati has been responsible for the coordination and execution of Payload Data Ground Segment operations, including the deployment into operations of the various processing baselines and the public dissemination of Aeolus products once the relevant quality standards were met. In addition, the PDGS team is responsible for the challenging mission planning of the instrument operations and has supported the execution of three reprocessing campaigns, allowing mission data to be made available to users in the latest processing baselines and ensuring the highest possible data quality standards.
Over the years, effort has also been put into providing the Aeolus data user community with a modern concept of extended access to Earth Observation (EO) data. The VirES for Aeolus system provides a highly interactive web interface for the manipulation and retrieval of the official Aeolus data products. The VirES service will be extended with a Virtual Research Environment (VRE), becoming operational at the beginning of 2022, to provide data manipulation capabilities to users.
The architecture, current status and performance of the Aeolus PDGS will be presented, with focus on the main activities performed in more than three years of operations of the ground segment of this extremely challenging mission.
Assuring and improving Aeolus data quality is a collaborative effort of the ESA Aeolus team, the Aeolus DISC (Data, Innovation, and Science Cluster) and more than 30 individual Cal/Val teams. The Aeolus satellite has brought cutting-edge UV DWL (Doppler Wind Lidar) technology to space for the first time. This required preparation and commitment to support and maximise the performance from the ground, ready for unknown biases and sources of product error. ESA, the DISC, the Cal/Val teams and the ASAG (Aeolus Science and Data Quality Advisory Group) have worked closely together during the mission to react to findings from the performance monitoring tools, such as the monitoring of Aeolus winds against winds from Numerical Weather Prediction centres (NWP monitoring). The DISC has released new processing baselines every half year, each improving the data quality and/or compensating for newly discovered anomalies and biases, while also reacting to performance decreases of the ALADIN instrument over time. The presentation will introduce the many groups involved in maximising Aeolus data quality through on-ground activities, along with their tasks, results, highlights and achieved milestones.
The Aeolus DISC (Data, Innovation, and Science Cluster) is a core element of ESA's data quality framework for the Aeolus mission: an international expert consortium set up to study and improve the data quality of Aeolus products. The tasks of the Aeolus DISC include, among others, instrument and data quality monitoring, calibration and characterisation of the instrument, and refinement of the retrieval algorithms and processor evolution. Additionally, the Aeolus DISC supports ESA with data reprocessing campaigns, provides support to data users and performs impact assessment studies.
The Aeolus DISC's mission experts and expert centres have long track records in supporting and contributing to Aeolus, covering instrumental aspects, laser-atmosphere interaction, calibration and validation (e.g. aircraft campaigns), wind product and processor development, as well as the development of optical, aerosol and cloud processors and products.
In this presentation, we will summarize the achievements of the Aeolus DISC for the data quality of the various Aeolus products. We will especially focus on how the constant instrument and data monitoring supported the evolution of the Aeolus processing chain. Past, present and future processor changes will be described and their impact on Aeolus NRT and reprocessed data quality will be illustrated.
Various processor improvements developed by the Aeolus DISC after launch led to a drastic reduction of the systematic bias of the Aeolus wind products, down to below 1 m/s on a global scale. Examples of such improvements are the hot-pixel correction for the enhanced dark-current rates observed on the Aeolus detectors (Weiler et al., 2021a) and a correction for thermal changes of the instrument's large telescope along the orbit (Weiler et al., 2021b).
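As a rough illustration of the telescope-temperature correction idea (Weiler et al., 2021b, where the details differ), one could regress observed wind biases against a set of telescope temperature readings; the choice of regressors and the use of ordinary least squares are assumptions of this sketch.

```python
import numpy as np

def fit_bias_model(temps: np.ndarray, bias: np.ndarray) -> np.ndarray:
    """Fit bias ~ intercept + linear combination of telescope temperatures.

    temps : (n_samples, n_sensors) telescope temperature readings
    bias  : (n_samples,) observed wind bias, e.g. observation-minus-background
    """
    X = np.column_stack([np.ones(len(bias)), temps])
    coeffs, *_ = np.linalg.lstsq(X, bias, rcond=None)
    return coeffs

def predict_bias(temps: np.ndarray, coeffs: np.ndarray) -> np.ndarray:
    """Predict (and hence subtract) the temperature-driven wind bias."""
    X = np.column_stack([np.ones(temps.shape[0]), temps])
    return X @ coeffs
```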
Additionally, for aerosol and cloud retrievals, the current processors were optimised and new processing routines were developed (Flament et al., 2021; Ehlers et al., 2021). In particular, this includes a new feature mask and an optimal-estimation retrieval based on EarthCARE algorithms. Both products will be fully functional for the spring 2022 baseline release.
Weiler, F., Kanitz, T., Wernham, D., Rennie, M., Huber, D., Schillinger, M., Saint-Pe, O., Bell, R., Parrinello, T., and Reitebuch, O.: Characterization of dark current signal measurements of the ACCDs used on board the Aeolus satellite, Atmos. Meas. Tech., 14, 5153–5177, https://doi.org/10.5194/amt-14-5153-2021, 2021a.
Weiler, F., Rennie, M., Kanitz, T., Isaksen, L., Checa, E., de Kloe, J., Okunde, N., and Reitebuch, O.: Correction of wind bias for the lidar on board Aeolus using telescope temperatures, Atmos. Meas. Tech., 14, 7167–7185, https://doi.org/10.5194/amt-14-7167-2021, 2021b.
Flament, T., Trapon, D., Lacour, A., Dabas, A., Ehlers, F., and Huber, D.: Aeolus L2A Aerosol Optical Properties Product: Standard Correct Algorithm and Mie Correct Algorithm, Atmos. Meas. Tech. Discuss. [preprint], https://doi.org/10.5194/amt-2021-181, in review, 2021.
Ehlers, F., Flament, T., Dabas, A., Trapon, D., Lacour, A., Baars, H., and Straume-Lindner, A. G.: Optimization of Aeolus Optical Properties Products by Maximum-Likelihood Estimation, Atmos. Meas. Tech. Discuss. [preprint], https://doi.org/10.5194/amt-2021-212, in review, 2021.
Building on the experience gained during the pre-launch activities in the Aeolus processor development and validation campaign programmes, DLR has conducted intensive performance monitoring for the mission since its launch in 2018.
Already in the preceding decade, various ground-based and airborne pre-launch validation campaigns had been performed in support of Aeolus operations and processor development. An integral part of the early success of the mission was the deployment of the ALADIN Airborne Demonstrator (A2D), a prototype of the ALADIN Doppler Wind Lidar (DWL) instrument on board Aeolus. With its high technological commonality, it made it possible to study Aeolus-specific lidar topics under various atmospheric conditions before launch. In addition, a scanning, coherent 2-µm DWL acted as a high-accuracy reference by providing wind vector profiles for A2D and Aeolus validation. Airborne measurements were performed with the two instruments operated in parallel on the DLR Falcon research aircraft in a downward-looking configuration. The comprehensive operational, technological and algorithm knowledge established at DLR during these pre-launch activities laid the basis for a twofold Aeolus performance monitoring implemented to support the mission.
Firstly, performance-relevant instrument parameters of the laser and receiver optics, as well as the evolution of the detector signals for the internal reference and the atmospheric path, were monitored and reported on a regular basis. Special operations for tests of the Aeolus lasers and instrument alignment were also supported, both with the tools derived during the pre-launch period and with newly developed performance indicators. The second monitoring activity comprised four post-launch airborne validation campaigns focused on the Aeolus wind product. Deploying the two DWLs on board the Falcon, coordinated flights along a total distance of 26,000 km under the Aeolus track were performed in different geographical regions and during the main operational phases of the mission until September 2021. The collocated A2D and 2-µm DWL wind observations not only reflect the evolution of the error characteristics during each mission state, but also provide valuable recommendations for the optimisation of the Aeolus wind retrieval and the related quality-control algorithms.
This contribution gives an overview of the Aeolus in-orbit instrument performance evolution and related results of the four airborne validation campaigns.
We highlight some of the scientific benefits of the Aeolus Doppler Wind Lidar mission since its launch in August 2018. Its scientific objectives are to improve weather forecasts and to advance the understanding of atmospheric dynamics and its interaction with the atmospheric energy and water cycle. A number of meteorological and science institutes across the world have demonstrated that the Aeolus mission objectives are being met, despite the measurements being noisier than expected. Its wind product is being operationally assimilated by five Numerical Weather Prediction (NWP) centres, thanks to its demonstrated positive impact on NWP analyses and forecasts. Applications of its atmospheric optical properties product have been found, e.g., in the detection and tracking of smoke from the extreme Australian wildfires of 2020 and in atmospheric composition data assimilation. The winds are finding novel applications in atmospheric dynamics research, such as for tropical phenomena (Quasi-Biennial Oscillation disruption events), Sudden Stratospheric Warming events, the detection of atmospheric gravity waves, and the smoke-generated vortex associated with the Australian wildfires. Aeolus has also been applied in the assessment of other types of satellite-derived wind information, such as atmospheric motion vectors. The successes of Aeolus will hopefully lead to the approval of an operational follow-on mission.
Fronts, the boundaries between water masses in coastal and oceanic regions, are hotspots for rich and diverse marine life, and are also where floating marine debris tends to accumulate.
The main goal of this research is to develop a prototype risk index for the accumulation of marine plastic debris at fronts. In addition to mapping the risk areas for debris accumulation, we investigate their connectivity to the pathways into the ocean, through numerical dispersion models with high spatial resolution. In doing so, we aim to provide a tool to local and regional policymakers to identify areas where intervention would be more effective. As a case study, we are working in collaboration with local stakeholders in Da Nang, Vietnam.
The study presented here analyses satellite imagery from 2018 around the Da Nang area. First, we used very high-resolution WorldView-3 imagery to validate Sentinel-2 detections of marine debris accumulations. To do so, we compiled a dataset from the literature, from dedicated imagery of the area and from reports on social media. We then selected WorldView-3 imagery and compared it with Sentinel-2 detections of accumulations of different categories, using different distance metrics. Next, we compared the locations of verified accumulations against different front-detection methods, to identify the matching spatial scales of observation. We present preliminary results on the detection of fronts from altimetry, from AVHRR sea surface temperature and from Sentinel-3 and Sentinel-2 optical data. Front dynamics aligned with the monsoon variations in the area have been identified. In addition, a very high spatial resolution hydrodynamic model has been implemented, and preliminary results will be shown on how particle tracking links sources and pathways to fronts. Results will be discussed in terms of the challenges and opportunities ahead.
In recent years there has been increasing interest in exploring the capabilities of remote sensing technologies to monitor and detect marine litter, in particular plastic litter floating in our water bodies. Remote sensing is now considered the only tool able to provide regular information at global scale on this problem (Maximenko et al., 2019), which is essential to constrain the Lagrangian transport models put in place to understand its dynamics (Maximenko et al., 2012; van Sebille et al., 2015). In 2020, the European Space Agency (ESA) launched the Discovery Campaign on Remote Sensing of Plastic Marine Litter, funding a total of 26 initiatives across a wide spectrum of areas. The "Windrows As Proxies" project (WASP), included in this initiative, aimed to prototype an operational processor capable of identifying, in Copernicus Sentinel-2/MSI (S-2) images, filaments of floating marine debris with a high probability of containing marine litter, based on the existing oceanographic knowledge of their role as accumulating agents of marine debris and plastic litter (Cózar et al., 2021; Ruiz et al., 2020).
Hu (2021) stressed that direct plastic detection using S-2 data may be infeasible, or at least especially challenging, due to spectral mixing, the lack of plastic-specific bands, and limitations in both SNR and expected spatial coverage. Moreover, Dr. Hu (personal communication) also raised questions about the separability of marine litter from other substances, such as sea snot, with similar spectral signatures in the 480-900 nm range. Contrary to other initiatives (Biermann et al., 2020; Kikaki et al., 2020), in which attempts have been made to directly classify plastic within the images at pixel level, WASP approached the problem from a different perspective. Assuming that litter rarely appears as a separate end member of the spectral reflectance measured in each pixel of S-2 data, and considering that S-2 lacks plastic-litter-specific bands (Hu, 2021), the objective is to detect filaments of floating debris, which often contain significantly larger quantities of plastic litter than their surroundings. This detection by proxy also enables both spectral and contextual (object-based) detection, leading to a less ambiguous classification process with no need for external data sources.
In a first step, WASP developed a specific spectral index that uses the NIR and SWIR bands as the main source of information for the classification. This differs from published approaches, where most of the referenced algorithms employ techniques closer to ocean colour, using the VIS bands as the main source and following the lead of existing spectral indices such as NDVI and FAI (Biermann et al., 2020; Kikaki et al., 2020). The advantage of using NIR/SWIR is that this spectral region contains information relevant for the detection of plastic (Garaba et al., 2020) and floating organic matter, which is why no external data are needed to support the detection.
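The WASP index itself is not reproduced here; as a minimal sketch of the general idea of a baseline-subtraction index built on NIR/SWIR information, in the spirit of the FAI mentioned above, one could write the following, where the band choices and wavelengths are assumptions of this example.

```python
import numpy as np

def fai_like_index(red: np.ndarray, nir: np.ndarray, swir: np.ndarray,
                   lam_red: float = 665.0, lam_nir: float = 865.0,
                   lam_swir: float = 1610.0) -> np.ndarray:
    """FAI-style baseline subtraction for Sentinel-2 reflectances.

    red, nir, swir : reflectance arrays, e.g. bands B04, B8A and B11
    lam_*          : band centre wavelengths in nm

    Floating material raises the NIR reflectance above the red-SWIR
    baseline, so positive values flag candidate pixels.
    """
    baseline = red + (swir - red) * (lam_nir - lam_red) / (lam_swir - lam_red)
    return nir - baseline
```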
In this work, we explore the use of these Sentinel-2 bands for the detection of filaments of floating marine debris, and develop a better understanding of the general challenges of detecting plastic litter from space. The work includes the generation of an ad hoc procedure for cloud detection and filtering, and the use of a deterministic contextual classifier for the detection of filaments within the image, using a multiscale approach.
The WASP results consist of snippets of analysed S-2 data containing the detected filaments. These snippets are manually reviewed by operators in order to discard known false positives. Validation of the detection method was carried out using information on artificial targets deployed at sea off Lesbos Island (Topouzelis et al., 2020). The validation results show the method's capability to detect both plastic and floating debris with a high organic load.
The work presented here has also helped to define the roadmap and the needed improvements of the techniques, and to advance the detection principles for plastic litter and their challenges: cornerstone information for the definition of a future EO mission devoted to plastic litter monitoring.
Biermann, L., Clewley, D., Martinez-Vicente, V., & Topouzelis, K. (2020). Finding plastic patches in coastal waters using optical satellite data. Scientific Reports, 10(1), 1-10.
Cózar, A., Aliani, S., Basurko, O. C., Arias, M., Isobe, A., Topouzelis, K., ... & Morales-Caselles, C. (2021). Marine litter windrows: a strategic target to understand and manage the ocean plastic pollution. Frontiers in Marine Science, 8, 98.
Garaba, S. P., Arias, M., Corradi, P., Harmel, T., de Vries, R., & Lebreton, L. (2020). Concentration, anisotropic and apparent colour effects on optical reflectance properties of virgin and ocean-harvested plastics. Journal of Hazardous Materials, 124290.
Hu, C. (2021). Remote detection of marine debris using satellite observations in the visible and near infrared spectral range: Challenges and potentials. Remote Sensing of Environment, 259, 112414.
Kikaki, A., Karantzalos, K., Power, C. A., & Raitsos, D. E. (2020). Remotely sensing the source and transport of marine plastic debris in Bay Islands of Honduras (Caribbean Sea). Remote Sensing, 12(11), 1727.
Topouzelis, K., Papageorgiou, D., Karagaitanakis, A., Papakonstantinou, A., & Arias, M. (2020). Remote sensing of sea surface artificial floating plastic targets with Sentinel-2 and unmanned aerial systems (Plastic Litter Project 2019). Remote Sensing, 12(12), 2013.
Maximenko, N., Hafner, J., & Niiler, P. (2012). Pathways of marine debris derived from trajectories of Lagrangian drifters. Marine Pollution Bulletin, 65(1-3), 51-62.
Maximenko, N., Corradi, P., Law, K. L., Van Sebille, E., Garaba, S. P., Lampitt, R. S., ... & Wilcox, C. (2019). Toward the integrated marine debris observing system. Frontiers in Marine Science, 6, 447.
Ruiz, I., Basurko, O. C., Rubio, A., Delpey, M., Granado, I., Declerck, A., ... & Cózar-Cabañas, A. (2020). Litter windrows in the south-east coast of the Bay of Biscay: an ocean process enabling effective active fishing for litter. Frontiers in Marine Science.
Van Sebille, E., Wilcox, C., Lebreton, L., Maximenko, N., Hardesty, B. D., Van Franeker, J. A., ... & Law, K. L. (2015). A global inventory of small floating plastic debris. Environmental Research Letters, 10(12), 124006.
The remote sensing of marine litter is becoming increasingly important. Disasters such as floods deliver large amounts of natural and man-made waste into coastal waters. Since such large-scale floating litter is an obstacle to the safe operation of ships and to recreational activities, it is necessary to issue alerts and remove it as quickly as possible. In addition, since man-made plastic litter adversely affects not only aesthetics but also the marine ecosystem, interest in reducing plastic waste is higher than ever. Some of the plastic waste that enters the ocean moves with the ocean currents over long periods, threatening the lives of marine animals. Numerous clean-up operations are being undertaken, and mapping areas of plastic litter accumulation in the ocean is certainly useful for making them efficient.
However, it is still challenging to reliably detect marine litter from space. One key technique is to detect changes in reflectance in order to identify small patches floating on the ocean surface. The spatial anomaly in reflectance is calculated by subtracting the spatially varying reflectance of the surrounding background water from the satellite-measured reflectance, and is used to detect reflectance changes due to the presence of drifting litter patches.
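A minimal sketch of such a spatial anomaly computation, assuming a median filter as the background estimator (the window size is an arbitrary choice for this example):

```python
import numpy as np
from scipy.ndimage import median_filter

def reflectance_anomaly(reflectance: np.ndarray, window: int = 51) -> np.ndarray:
    """Subtract a spatially varying background from a reflectance image.

    reflectance : 2-D array of satellite-measured reflectance (one band)
    window      : side length, in pixels, of the background-estimation window
    """
    # The median over a large window approximates the background water
    # signal; small bright litter patches barely influence it.
    background = median_filter(reflectance, size=window)
    return reflectance - background
```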
By applying this technique to high-resolution WorldView-3 images, we investigated the possibility of detecting plastic litter in the Great Pacific Garbage Patch. Specifically, anomaly spectra were evaluated to detect plastic litter using The Ocean Cleanup's System 001 "Wilson" as a known target. While floating litter in the open ocean is usually found in isolation, in coastal waters it is often observed accumulated along fronts. We applied the same technique to events of different scales in coastal waters using images from various satellite sensors, including PlanetScope, MSI and GOCI. In addition to the change in reflectance due to the presence of litter, the reflectance spectrum of the target litter itself is very important information for identifying the kind of litter, which will be discussed in this presentation.
Introduction
Ocean pollution is growing into a serious threat not only to the ecosystem and marine species but also to human life. Significant quantities of anthropogenic trash, especially plastic, are discarded into the ocean each year, mostly via rivers during extreme weather events. The pollutants can remain in the ocean for many years and cause long-term harm; plastic, for instance, can take more than 400 years to degrade, which is why it is very important to act rapidly to tackle marine pollution. Initiatives across the world, such as UN Sustainable Development Goal 14 and descriptor 10 of the EU Marine Strategy Framework Directive, encourage improving the ocean's health.
A remarkable increase in methods to detect plastic, and floating objects in general, has been witnessed in the last few years (Topouzelis et al., 2019; Biermann et al., 2020; Mifdal et al., 2021). Spotting plastic can be very hard and expensive: most current monitoring methods rely on drones and/or coastal cameras, which are not always affordable or convenient. Thus, an increasing amount of research is shifting attention to the use of satellite data for monitoring floating targets. Currently, the most convenient satellite data source is Sentinel-2, thanks to its global coverage and the public availability of its data, which make it possible to monitor large-scale areas across the globe. Sentinel-2 provides images with thirteen spectral bands: four with a spatial resolution of 10 m, and the remaining ones at 20 m and 60 m. In this setting, only large patches of objects can be detected in Sentinel-2 data. Thanks to natural processes such as wind and waves, floating debris agglomerates into patches that can be spotted in Sentinel-2 images. Our goal is therefore to focus on the detection of agglomerated floating targets in water bodies and to provide a map of the detected objects, which could be useful for downstream tasks such as plastic detection, ocean clean-up operations, and automated monitoring of ocean pollutants. Recently, Sentinel-2 data combined with deep learning methods proved effective for the detection of floating targets in water bodies (Mifdal et al., 2021). The authors hand-labeled floating patches in Sentinel-2 images based on the FDI (Biermann et al., 2020) and NDVI indices. After training neural networks on the labeled dataset, they concluded that the networks efficiently learnt the spatial patterns of the floating objects and were able to detect them with competitive accuracy. The labeled Sentinel-2 dataset, along with the learnt weights, is publicly available.
Contribution
Despite the results in (Mifdal et al., 2021), the detection of floating objects in water bodies remains a difficult problem, and several issues can impact a network's performance. For example, in a Sentinel-2 image of a coastal area there are far more pixels belonging to water than to the floating objects themselves, which creates an imbalanced learning problem. The variability of regions across the world also makes it hard for the network to spot floating objects, and the labels suffer from noise, which can mislead the network during the learning phase. The goal of our contribution is therefore to make the previous networks more robust to the domain adaptation problem and to noisy labels by focusing on self-supervised learning (SSL) combined with attention mechanisms. More precisely, we use contrastive SSL, as it is a simple and modular approach. It relies on a user-defined "pretext training task" that helps the network learn invariances and latent patterns in the data without any human annotation. To do so, the neural network forms positive pairs by considering various augmented views of a given sample and then maximizes their similarity through the noise contrastive estimation (NCE) loss. This emerging and promising paradigm increases a model's robustness to noise or data frugality (Ciocarlan and Stoian, 2021), and representations learnt this way are known to have a better semantic and contextual dimension and, therefore, to better disentangle the main triggers of the network's decisions. The conclusions of (Carmo et al., 2021) [1] also emphasized the superiority of MA-Net (a U-Net with attention modules) over U-Net, which encourages the use of attention mechanisms, especially transformer encoders. Indeed, despite the power of CNNs to extract meaningful features and textures, spatial context and object shape are less well captured during learning than with transformer encoders (Tuli et al., 2021). The lack of spatial information limits the global comprehension of the image and the network's ability to identify and distinguish floating debris from land or ships. Combining SSL with attention-based methods has shown significant improvements in performance, especially for domain issues and generalization to challenging settings.
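As a minimal sketch of the contrastive objective described above (an NT-Xent/InfoNCE-style loss over two augmented views of the same batch; the temperature value and cosine normalization are standard choices assumed here, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent / InfoNCE loss for two batches of projected embeddings.

    z1, z2 : (batch, dim) projections of two augmented views; row i of z1
             and row i of z2 form a positive pair, all other rows in the
             combined batch act as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)              # (2B, dim)
    sim = z @ z.t() / temperature               # cosine similarity logits
    sim.fill_diagonal_(float("-inf"))           # exclude self-similarity
    batch = z1.size(0)
    idx = torch.arange(batch, device=z.device)
    # The positive of sample i is i + B, and vice versa.
    targets = torch.cat([idx + batch, idx])
    return F.cross_entropy(sim, targets)
```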
References
[1] https://210507-004.oceansvirtual.com/view/content/skdwP611e3583eba2b/ecf65c2aaf278557ad05c213247d67a54196c9376a0aed8f1875681f182daeed
Biermann, L., Clewley, D., Martinez-Vicente, V., & Topouzelis, K. (2020). Finding plastic patches in coastal waters using optical satellite data. Scientific Reports, 10(1), 1-10.
Ciocarlan, A., & Stoian, A. (2021). Ship detection in Sentinel-2 multi-spectral images with self-supervised learning. Remote Sensing, 13(21), 4255. https://doi.org/10.3390/rs13214255
Mifdal, J., Longepe, N., & Rußwurm, M. (2021). Towards detecting floating objects on a global scale with learned spatial features using Sentinel-2. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences.
Topouzelis, K., Papakonstantinou, A., & Garaba, S. P. (2019). Detection of floating plastics from satellite and unmanned aerial systems (Plastic Litter Project 2018). International Journal of Applied Earth Observation and Geoinformation, 79, 175-183.
Tuli, S., Dasgupta, I., Grant, E., & Griffiths, T. L. (2021). Are convolutional neural networks or transformers more like human vision? arXiv:2105.07197. http://arxiv.org/abs/2105.07197
Each year, several million metric tons of mismanaged plastic waste enter the ocean from coastal environments. Once in the ocean, positively buoyant plastic debris is subjected to a wide range of physical transport processes such as coastal currents, large-scale and submesoscale open-ocean processes, Stokes drift, direct wind transport, vertical mixing, and beaching. These processes disperse floating plastic objects over large distances globally, leading to particularly high concentrations at the ocean surface in the remote subtropical gyres. With amounts of plastic exceeding one million pieces per km² and hundreds of kilograms per km², these waters are often referred to as ocean garbage patches. At present, the density and spatial variability of floating macroplastic (> 50 cm) in the subtropical gyres are still poorly understood, primarily due to limited observational tools. Such information is crucial, however, to more accurately quantify the global inventory of plastic debris in the world's oceans and to monitor its evolution, which in turn represents critical data for ecotoxicological risk assessments and for optimizing mitigation strategies.
This study uses fixed-wing Unmanned Aerial Vehicles (UAVs) to obtain photo-grid surveys of the ocean surface in the North Pacific subtropical gyre, collected during a six-week offshore expedition in July-August 2021. We scanned almost 100 km² of ocean surface in 22 flights. Plastic density maps, created by applying our previously developed object detection model (de Vries et al., 2021), reveal large daily fluctuations in background density, in addition to strongly clustered floating macroplastic (> 50 cm) within the area scanned by each flight. The UAV campaign provides a dataset that sparks new insights into the plastic density in the North Pacific subtropical gyre and can aid in refining advection models specifically for floating macroplastic (> 50 cm).
Plastic litter and debris are now found all over the globe, from remote plains and mountains to estuarine systems and ocean waters. In the aquatic environment, plastic litter is fractionated into smaller sizes (nano or micro-plastics, diameter < 5 mm) and undergoes biogeochemical modifications through biofouling or incorporation into organic polymers. The diversity of plastic compounds and highly variable modifications occurring in the water column make it very difficult to have a systematic approach for documenting and classifying the optical properties of plastic litter from the field or laboratory. Plastic compounds are commonly birefringent and/or semi-transparent. As a result, knowledge of their polarization characteristics might be an asset for plastic detection and monitoring from spectro-polarimetric sensors.
In this study, full single-scattering and radiative transfer computations were conducted to model the polarized water-leaving radiation of submerged plastic particles, based on empirical size distributions and refractive indices. Simulations were also performed for coated plastic particles to mimic biofouling effects. The radiance and the other Stokes vector terms (i.e., Q and U) were simulated at several levels in the atmosphere-water column system, including the top of the atmosphere. The impacts of plastic compound, size and concentration were analyzed in terms of the degree of linear polarization and the angle of polarization. This analysis was performed for several wavelengths in the visible/near-infrared part of the spectrum and for a comprehensive set of viewing geometries and Sun elevations. These results enabled the characterization of polarization signatures as measured by polarimetric sensors located directly in the water column, just above the water surface, or at satellite level. Finally, the detectability of plastics from space with past, present, and future polarimetric satellite missions, including the PACE mission, was quantified, and methods for field validation were delineated.
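The polarization quantities analyzed here follow directly from the Stokes components; a minimal numpy sketch (sign and reference-plane conventions are assumptions of this example):

```python
import numpy as np

def degree_of_linear_polarization(I, Q, U):
    """DoLP = sqrt(Q^2 + U^2) / I, from Stokes components I, Q, U."""
    return np.hypot(Q, U) / I

def angle_of_polarization(Q, U):
    """AoP = 0.5 * atan2(U, Q), in radians relative to the reference plane."""
    return 0.5 * np.arctan2(U, Q)
```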
Carbon exchanges between the surface and the atmosphere mediated by plant photosynthesis are a key component of the terrestrial carbon balance, as photosynthesis is the mechanism for atmospheric carbon uptake by terrestrial vegetation and assimilation at the surface as accumulated biomass. Vegetation photosynthesis is highly variable according to environmental factors, particularly when plants are exposed to variable stress conditions, and such variability is enhanced under climate change and human pressure. A better quantitative estimation of the actual carbon assimilation by plants is needed to improve the accuracy of current terrestrial carbon models and to improve the predictability of such models towards future scenarios.
While Earth Observation (EO) techniques have been used extensively to model and quantify terrestrial vegetation dynamics, by determining the structure and functioning of plants, a direct estimate of actual photosynthetic activity by vegetation is only possible by quantifying not only the actual absorbed light by plants but also how the absorbed light is internally used by the plants. In fact, only a fraction of such absorbed light is used for photosynthesis, but such fraction is highly variable in space and time as a function of environmental conditions and regulation factors. Measuring chlorophyll fluorescence together with the total absorbed light, in addition to other key plant variables, provides a unique opportunity to quantify vegetation photosynthesis and its spatial and temporal dynamics by satellite observations.
The Fluorescence Explorer (FLEX) mission was selected in 2015 by the European Space Agency (ESA) as the 8th Earth Explorer within the Living Planet Programme, with the key scientific objective of quantitative global mapping of the actual photosynthetic activity of terrestrial ecosystems, at a spatial resolution adequate to resolve land surface processes associated with vegetation dynamics. FLEX will also provide quantitative physiological indicators of vegetation health status and environmental stress conditions, to better constrain global terrestrial carbon models.
Spatial coverage is driven by the need to observe all terrestrial vegetation globally, while the optimal observation time, around 10:00, is driven by the diurnal cycle of photosynthetic processes. The spatial resolution of 300 m is driven by the need to resolve land processes at scales relevant for identifying and tracking stress effects on terrestrial vegetation, covering at least several annual cycles. The targeted uncertainty for photosynthesis derived from instantaneous measurements ranges from about 5% in unregulated conditions up to 30% in the case of highly variable regulated heat dissipation, in line with model requirements and improving on the current capabilities provided by other techniques.
Given the four main pathways for light absorbed by plants (photochemistry, constitutive heat dissipation, chemically regulated heat dissipation and fluorescence), FLEX measurements include not only the fluorescence emission but also the vegetation temperature and estimates of regulated heat dissipation, which is highly variable and drives the relation between fluorescence and photosynthesis. In addition, FLEX has to acquire all the information necessary to determine vegetation conditions in order to properly interpret photosynthesis variability, as well as the information needed for cloud screening, compensation for atmospheric effects and proper analysis of the measured signals.
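In a common light-use-efficiency simplification (an assumption of this note, not a FLEX product definition), the four pathways partition the absorbed quanta through their yields, which is why measuring fluorescence and regulated heat dissipation constrains photochemistry:

```latex
% Yields of photochemistry (P), constitutive heat dissipation (D),
% regulated heat dissipation (N) and fluorescence (F):
\Phi_P + \Phi_D + \Phi_N + \Phi_F = 1,
\qquad
\mathrm{GPP} \approx \mathrm{APAR} \cdot \Phi_P
```

Here APAR is the absorbed photosynthetically active radiation; estimating Φ_F (from FLORIS) and Φ_N (from reflectance and temperature) narrows the possible range of Φ_P.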
For the retrieval of vegetation fluorescence, very high spectral resolution (better than 0.3 nm) is needed, together with a very high signal-to-noise ratio, given the weakness of the fluorescence signal compared to the background reflected radiance. To accomplish this, the FLEX mission carries the FLORIS spectrometer, with a spectral sampling in the order of 0.1 nm, specially optimized to derive the spectrally resolved vegetation fluorescence emission over the full 650-780 nm range, and also measuring the spectral variability of surface reflectance in the 500-650 nm range, which is indicative of chemical adaptations in regulated heat dissipation.
FLEX is designed to fly in tandem with Copernicus Sentinel-3. Together with FLORIS, the OLCI and SLSTR instruments on Sentinel-3 provide all the necessary information to retrieve the emitted fluorescence, and to allow proper derivation of the spatial and temporal dynamics of vegetation photosynthesis from such global measurements. OLCI and SLSTR data also help in the compensation for atmospheric effects and the derivation of the additional biophysical information needed to interpret the variability in fluorescence measurements.
The science products to be provided by the FLEX mission are not restricted to the basic chlorophyll fluorescence measurements; they also include the estimates of regulated and non-regulated heat dissipation needed to quantify actual photosynthesis. Together with canopy temperature and other variables characterizing vegetation status, such as chlorophyll content and the fraction of light absorbed by photosynthetic pigments, the Level-2 products include instantaneous photosynthesis rates and estimates of vegetation stress based on ratios of actual to potential photosynthesis and on variable PSI/PSII contributions tracking photosynthesis dynamics. Level-3 products are derived by means of spatial mosaics and temporal composites, also providing, as temporal products, the activation/deactivation of photosynthesis, growing season length and related vegetation phenology indicators. Finally, by assimilating FLEX time series and ancillary information into advanced dynamical models, Level-4 products are also provided, including time series of Gross Primary Productivity (GPP) and more advanced dynamical vegetation stress indicators.
Such science products can be used directly by dynamical vegetation models, climate models and different applications. In particular, with increasing population and food demand, agriculture will require optimization of crop photosynthesis under variable conditions, and FLEX is expected to contribute not only to carbon science but also to associated applications. Efforts are in place to guarantee proper cal/val activities and a dedicated validation network for FLEX products. Particular efforts are made to provide each product with realistic, properly estimated uncertainties, and to propagate the derived uncertainties from the original satellite data to the final high-level products, accounting at each step for the covariance matrices associated with each set of intermediate variables, and using a combination of Monte Carlo and analytical error propagation tools along the whole FLEX data processing chain.
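A toy sketch of the Monte Carlo half of such error propagation (the retrieval function and the 2% radiance uncertainty are placeholders invented for this example):

```python
import numpy as np

rng = np.random.default_rng(0)

def retrieval(radiance: np.ndarray) -> np.ndarray:
    """Placeholder for one Level-2 retrieval step (hypothetical)."""
    return 2.0 * radiance + 1.0

# Perturb the input radiances with an assumed 2% uncertainty, push each
# sample through the retrieval, and read the product uncertainty off the
# spread of the outputs.
radiance = np.array([50.0, 80.0, 120.0])
samples = rng.normal(radiance, 0.02 * radiance, size=(10000, radiance.size))
products = retrieval(samples)
print(products.mean(axis=0), products.std(axis=0))  # product value +/- uncertainty
```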
The availability of validated, ready-to-use high-level science products will allow extensive scientific use of FLEX data, with high potential for derived applications. The open availability of FLEX products, and their accessibility through open exploitation platforms where users can validate the products themselves or even modify the algorithms to optimize the products for their specific needs, is intended to offer great versatility in FLEX exploitation approaches. FLEX is also expected to be used in conjunction with many other sources of EO data, including high spatial resolution time series from Sentinel-2. The FLEX Level-2 products are already provided on the same geographical grid as Sentinel-2 products to facilitate such developments. The use of common global multi-resolution spatial grids for high-level products, and compatibility of data formats, are also taken into account to maximize the interoperability of FLEX products with other EO products in global data assimilation approaches and multiple applications.
FLEX (FLuorescence EXplorer) is the 8th Earth Explorer mission, currently being developed by ESA with the objective of performing quantitative measurements of solar-induced vegetation fluorescence. It will advance our understanding of the functioning of the photosynthetic machinery and of the actual health of terrestrial vegetation. The fluorescence signal measured from space is so faint that additional information is required for an accurate retrieval and interpretation of the vegetation fluorescence emissions. Hence the FLEX satellite will fly in convoy with a Sentinel-3 satellite for close temporal co-registration with its OLCI and SLSTR measurements.
The FLEX project development started in 2016 with the technologically challenging FLORIS instrument, built by the instrument contractor Leonardo (Italy) with OHB (Germany) in its core team. The platform development was then kicked off in 2019 with Thales Alenia Space (France) as the overall satellite prime contractor. A major milestone was achieved by completing the instrument and satellite Critical Design Reviews at the beginning of 2022, allowing the project to move into the phase of flight hardware manufacturing and integration.
An overview of the development progress will be provided including an outlook of the future project activities to get ready for launch in 2025.
The FLEX Instrument Performance Simulator (FIPS) is a software tool for simulating synthetic raw data representative of the FLEX instrument's radiometric, spectral and geometric performance. The FIPS simulates the optical performance of the instrument telescope and the two spectrometers; particular emphasis was put on simulating the straylight performance of the instrument. The full acquisition chain, from the detectors to onboard data generation, is also simulated, allowing the generation of instrument source packets.
The Ground Prototype Processor (GPP) processes both synthetic and real instrument Earth Observation data, from the instrument source packets up to the Level-1B user product. The processing includes dark signal removal, smearing correction, non-linearity correction, straylight correction, absolute radiometric calibration and flat-field equalization. The resulting Level-1B product includes geolocated top-of-atmosphere radiances, associated data quality information, meteorological data and the instrument characteristics required for further processing of the data to Level 2.
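Schematically, the correction chain just listed could be composed as in the following sketch; the `Calibration` container and all function bodies are hypothetical placeholders, and only the ordering follows the text.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class Calibration:
    """Bundled characterization data (all fields are hypothetical)."""
    dark_signal: np.ndarray
    smear: Callable[[np.ndarray], np.ndarray]
    linearize: Callable[[np.ndarray], np.ndarray]
    straylight: Callable[[np.ndarray], np.ndarray]
    absolute_gain: float
    flat_field: np.ndarray

def process_to_l1b(raw: np.ndarray, cal: Calibration) -> np.ndarray:
    """Apply the L1B correction chain, in the order listed in the text."""
    x = raw - cal.dark_signal      # dark signal removal
    x = x - cal.smear(x)           # smearing correction
    x = cal.linearize(x)           # non-linearity correction
    x = x - cal.straylight(x)      # straylight correction
    x = x * cal.absolute_gain      # absolute radiometric calibration
    x = x / cal.flat_field         # flat-field equalization
    return x
```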
In addition, the GPP is also designed to process data from the instrument whilst operating in various calibration modes. This functionality will enable in-flight characterization and calibration of the instrument.
The instrument radiometric calibration will be performed in flight using a sun diffuser. It will be further monitored and validated using regular observations of the Moon and of deep convective clouds. The non-linearity of the instrument detection chain will be characterized on ground and then verified in flight using natural targets at various signal levels and associated instrument integration times.
The in-flight spectral characterization will be based on the measurements of atmospheric absorption features as well as solar absorption lines observed on the onboard sun diffuser.
The absolute geometric performance will be monitored and corrected through spatial feature matching of nominal EO data against a database of georeferenced high spatial resolution (30 m) Landsat images. The spatial co-registration between the high- and low-spectral-resolution spectrometers will be ensured by spatial feature matching between the data of the two spectrometers.
ESA's 8th Earth Explorer mission, FLEX, aims at mapping Sun-induced fluorescence (SIF) as a proxy to quantify the photosynthetic activity of terrestrial vegetation. The mission consists of a single platform (FLEX) carrying a hyperspectral imaging spectrometer (FLORIS). Flying in tandem with Copernicus' Sentinel-3 satellite, FLEX will exploit the synergies with the OLCI and SLSTR instruments. The complexity of the FLEX mission concept, its stringent mission requirements, and the large data volume impose significant challenges on the Level-2 processing algorithms.
Within the ESA/ESTEC FLEX Level-2 Study, a consortium of remote sensing scientists and industry is joining forces to tackle these challenges, developing a Level-2 data processing prototype that can accurately retrieve the SIF emission from space. The Level-2 data processing chain consists of four modules:
Level-1C: creating a synergistic product that includes co-registered FLEX and Sentinel-3 instrument data, quality flags and pixel classification, as well as a refined FLORIS spectral/radiometric calibration.
Level-2A: characterizing the atmospheric conditions (water vapor and aerosol optical properties) with a state-of-the-art synergistic approach using Sentinel-3 and FLEX data. This module provides an accurate inversion of the surface (apparent) reflectance within 1% error.
Level-2B: deriving the full spectrum of SIF emission from terrestrial vegetation by an efficient spectral fitting method within 0.2 mW/m2/sr/nm error; a simplified sketch of such a spectral fit follows the module list.
Level-2C: providing vegetation biophysical parameters and a measurement of the non-photochemical quenching in order to interpret the SIF signal and its link to vegetation photosynthesis.
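As announced in the Level-2B item above, here is a heavily simplified sketch of a spectral fitting method: apparent reflectance and SIF are modelled with low-order polynomial bases inside an absorption band and solved jointly by least squares. The forward model below ignores atmospheric transmittance and is an illustrative assumption, not the operational algorithm.

```python
import numpy as np

def fit_sif(wavelength, radiance, irradiance, k_refl=3, k_sif=2):
    """Fit L(lam) = (E(lam)/pi) * rho(lam) + F(lam) inside an absorption band.

    wavelength : (n,) band wavelengths in nm
    radiance   : (n,) measured top-of-canopy radiance
    irradiance : (n,) incident solar irradiance E(lam)
    rho and F are polynomials of order k_refl and k_sif; returns the
    fluorescence spectrum F evaluated at `wavelength`.
    """
    lam = (wavelength - wavelength.mean()) / np.ptp(wavelength)  # normalized
    basis_rho = np.vander(lam, k_refl + 1) * (irradiance / np.pi)[:, None]
    basis_sif = np.vander(lam, k_sif + 1)
    A = np.hstack([basis_rho, basis_sif])
    coeffs, *_ = np.linalg.lstsq(A, radiance, rcond=None)
    return basis_sif @ coeffs[k_refl + 1:]
```

Inside a deep absorption feature such as the O2-A band, the in-filling of the line by the additive fluorescence term is what makes the two contributions separable.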
The FLEX Level-2 Study has just finished its third year of activities, with fully implemented software that is functionally and scientifically validated. The current validation results show high accuracy of the Level-1C calibration, the atmospheric correction and the subsequent SIF retrieval. In this presentation we will give an overview of the project activities and the current status of the developed algorithms, describe the entire Level-2 data processing chain, and demonstrate the current accuracy of the implemented algorithms through the latest performance assessment results, together with the remaining challenges. The goal is to promote discussion with the scientific community and industry in order to consolidate the Level-2 mission products and data processing algorithms.
FLEX-E is the FLEX Level-2 end-to-end mission performance simulator for ESA's FLEX Earth Explorer-8. It is a key tool for demonstrating the feasibility of the whole FLEX mission concept and is the baseline for the mission Ground Segment. It is also a versatile scientific tool, allowing simulation of the variability of ground, atmospheric and observation conditions that the FLEX - Sentinel-3 tandem mission will face, and testing of the impact, on the L2 science products, of the constraints imposed by the technical solutions adopted for the FLEX instrument and platform.
FLEX-E has a modular architecture based on the concept defined in ESA's ARCHEO-E2E project. All FLEX-E modules are integrated within the OpenSF generic simulation framework. The core is the Scene Generation Module (SGM), designed to provide reference scenes in ground coordinates, at subpixel resolution, in a consistent manner for all the FLEX / Sentinel-3 tandem mission instruments. The SGM allows the different instruments to "fly" over the same scene (with different viewing geometries and spectral configurations), as will be the case in real FLEX operations over a given geographical area. The interaction of the incoming solar radiation with vegetation and soils, and its propagation up to the Top of the Atmosphere (TOA), is computed with two coupled radiative transfer codes: SCOPE and MODTRAN. Through this, the SGM generates the TOA radiance hypercube that is the input to the FLORIS and Sentinel-3 instrument modules. The orbit, attitude and observation geometry needed for the scene generation are provided by two geometry modules (GM and S3G), for FLORIS and the Sentinel-3 spectrometers, respectively.
The FLORIS instrument module, developed in a parallel project, comprises two chained submodules. The FLEX Instrument Performance Simulator (FIPS) first models the conversion of TOA radiances to digital numbers, including all instrument spatial, spectral and radiometric effects and errors. The Ground Processor Prototype (GPP) then generates the L1b data products from the raw data by implementing the calibration and the correction of systematic errors. For the Sentinel-3 data flow, the Instrument + L1 Processing Module (S3M) models a simplified behaviour of the S3 sensors (OLCI and SLSTR), in both the spatial and spectral domains, and the L1 ground processing for the generation of Level-1 products.
At the end of the chain, the Level-2 Retrieval Module (L2RM), also developed in a parallel project, ingests the L1b inputs from GPP and S3M and implements all the retrieval algorithms for the FLEX L2 products, including Top-of-canopy reflectance, Sun-Induced Chlorophyll Fluorescence (SICF) and high-level photosynthesis products.
The mission performance assessment is finally done by comparing the L2 products with the L1b data and the reference data produced by the SGM, assessing the performance against the mission requirements provided in the Mission Requirements Document (MRD). This is achieved through two dedicated Performance Assessment Modules (PAM) for the evaluation of the L1 and L2 mission requirements, plus a parallel generic PAM configured for FLEX's MRD. FLEX-E allows the independent evaluation of the impact of different instrument configurations and effects, as well as of "real world" scenarios (ground, atmosphere, observation), on the L2 retrieval.
The overall design and architecture of FLEX-E is presented, together with the status of its implementation and the latest results of the FLEX mission performance assessment.
In 2025 the European Space Agency (ESA) will launch the FLuorescence EXplorer (FLEX) mission, which will provide global maps of vegetation fluorescence that reflect photosynthetic activity and plant health and stress. In turn, this is important not only for a better understanding of the global carbon cycle, but also for agricultural management and food security. FLEX will fly in tandem with the Copernicus Sentinel-3 mission, working in particular in combination with the OLCI and SLSTR instruments that Sentinel-3 carries.
With FLEX in its implementation phase, preparations for the later product validation have started, mainly by means of ground and airborne instrumentation. The gathered data provide fundamental information about the underlying processes and, even more importantly, confidence in the data products and their required uncertainties. One challenge in this context is a comprehensive understanding and characterization of the measurement uncertainty of the proposed validation datasets, and of their spatial and temporal support, or representativity.
These data also form the basis for defining future validation activities and identifying potential key Cal/Val issues.
We will provide an overview of the future validation strategies for FLEX and how these integrate into a broader validation strategy for the Sentinel-3 and FLEX tandem, and into the Earth observation science strategy for the carbon cycle. In addition, we will highlight recent activities and outline planned activities for the coming years.
Studies of land-cover and land-use change (LCLUC) on a global scale became possible when the first satellite of the Landsat series was launched 50 years ago. Since then, land-change science has been developing rapidly to answer questions about where changes are occurring, what their extent is and over what time scale, what their causes are, their consequences for ecosystems and human societies, their feedbacks with climate change, and what changes are expected in the future. LCLUC studies use a combination of space observations, in situ measurements, process studies and numerical modeling. To get the most out of current remote sensing capabilities, researchers strive to utilize data sources that differ in spatial and temporal resolution and in spectral range. Fusing observations from optical sensors with radar data helps fill the cloud-induced gaps in optical data. The goal is to develop multi-sensor, multi-spectral methods to increase the spatiotemporal coverage and to advance the virtual constellation paradigm for moderate spatial resolution (10-60m) land imaging systems with continental to global scale coverage. The use of commercial very-high-resolution (meter-scale) satellite data is also accelerating as more data become available and accessible. On the other hand, socioeconomic research plays an important role in land-change science and includes analyses of the impacts of changes in human behavior at various levels on land use. Studies of the resultant impacts of land-use change on society, or of how the social and economic aspects of land-use systems adapt to climate change, are becoming more and more important as the climate crisis draws increasing attention.
The NASA LCLUC Program is developing interdisciplinary approaches combining aspects of physical, social, and economic sciences, with a high level of societal relevance, using remote sensing tools, methods, and data. The Program aims at developing the capability for annual satellite-based inventories of land cover and land use to characterize and monitor changes at the Earth’s surface to improve our understanding of LCLUC as an essential component of the Earth System. The Program currently focuses on detecting and quantifying rapid LCLUC in hotspot areas and examining their impact on the environment and interactions with climate and society. This talk will summarize the Program’s achievements during the 25 years since its inception with an emphasis on the most recent findings. It will describe the synergistic use of multi-source land imaging data including those from the instruments on the International Space Station. The examples will cover various land-cover and land-use sectors: forests, grasslands, agriculture, urban and wetlands.
The existing CCI Medium Resolution land cover (MRLC) product delineates 22 primary and 15 secondary land cover classes at 300-meter resolution with global coverage and an annual time step extending from 1992 to the present. Previously, translation of the land cover classes into the plant functional types (PFTs) used by Earth system and land surface models required the use of the CCI global cross-walking table that defines, for each land cover class, an invariant PFT fractional composition for every pixel of the class regardless of geographic location.
Here, we present a new time series data product that circumvents the need for a cross-walking table. We use a quantitative, globally consistent method that fuses the 300-meter MRLC product with a suite of existing high-resolution datasets to develop spatially explicit annual maps of PFT fractional composition at 300 meters. The new PFT product exhibits intraclass spatial variability in PFT fractional cover at the 300-meter pixel level and is complementary to the MRLC maps, since the derived PFT fractions maintain consistency with the original land cover class legend. This was made possible by ingesting several key 30m-resolution global maps (urban extent, open water, tree cover, tree height) while controlling their compatibility against the MRLC maps.
This dataset is a significant step forward towards ready-to-use PFT descriptions for climate modeling at the pixel level. For each of the 29 years, 14 new maps are produced (one for each of 14 PFTs: bare soil, surface water, permanent snow and ice, built, managed grasses, natural grasses, and trees and shrubs each split into broadleaved evergreen, broadleaved deciduous, needleleaved evergreen, and needleleaved deciduous), with data values at 300-meter resolution indicating the percentage cover (0–100%) of the PFT in the given year.
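To make the product structure concrete, the following minimal sketch (synthetic arrays; layer ordering, shapes and file handling are illustrative assumptions, not the actual product format) shows how a user might stack the 14 PFT layers for one year, verify that per-pixel fractions close to ~100%, and aggregate the four tree PFTs:

import numpy as np

rng = np.random.default_rng(1)
raw = rng.random((14, 300, 300))
pft_percent = 100.0 * raw / raw.sum(axis=0)   # synthetic stand-in for one year

# Per-pixel fractions of the 14 PFTs should close to ~100%
closure = pft_percent.sum(axis=0)
assert np.allclose(closure, 100.0, atol=0.5)

# Example: aggregate the four tree PFTs (indices assumed for illustration)
TREE_PFTS = [6, 7, 8, 9]   # BE, BD, NE, ND trees in this hypothetical ordering
total_tree_cover = pft_percent[TREE_PFTS].sum(axis=0)
print("mean tree cover (%):", float(total_tree_cover.mean()))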
Based on land surface model simulations (ORCHIDEE and JULES models), we find significant differences in simulated carbon, water, and energy fluxes in some regions using the new PFT data product relative to the global cross-walking table applied to the MRLC maps. We additionally provide an updated user tool to assist in creating model-ready products to meet individual user needs (e.g., re-mapping, re-projection, PFT conversion, and spatial sub-setting).
Driven by advancements in satellite data acquisition and processing capabilities and by continued interest in monitoring the Earth’s surface for a variety of needs, global land cover (GLC) mapping efforts have progressed at an accelerated pace. Several GLC maps have been produced with increased temporal resolution, containing annual updates, and with increased spatial resolution, e.g. at 10m. However, the validation of GLC maps has not kept pace with map generation. Most GLC maps are validated using statistically rigorous accuracy assessment methods following internationally promoted guidelines (CEOS Stage-3). Still, updates (e.g. annual or per epoch) of GLC maps often lack rigorous accuracy assessments. Considering that validation datasets are collected using human interpretation, which is costly and time-consuming, validation datasets should be designed to be easily adjustable for timely validation of new releases of land cover products, and also be suitable for assessing multiple maps.
Aiming towards operational land cover validation, this study presents a framework for operational validation of annual global land cover maps using efficient means for updating validation datasets that allow timely map validation according to recommendations in the CEOS Stage-4 validation guidelines (Figure 1; Tsendbazar et al., 2021). The framework includes a regular update of a validation dataset and continuous map validation. For the regular update of a validation dataset, a partial revision of the validation dataset based on random and targeted rechecking (areas with a high probability of change) is proposed, followed by additional validation data collection. For continuous map validation, an accuracy assessment of each map release is proposed, including an assessment of stability in map accuracy targeting users that require multi-temporal maps.
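To make the “continuous map validation” step concrete, here is a minimal sketch with synthetic labels: it computes overall accuracy per annual map release from a shared validation dataset and a simple stability indicator (the spread of accuracies over the period). A real assessment would use the stratified, design-based estimators described in Tsendbazar et al. (2021) rather than a plain mean:

import numpy as np

def overall_accuracy(reference, mapped):
    return float(np.mean(reference == mapped))

rng = np.random.default_rng(2)
n_sites, n_classes = 5000, 10
reference = {y: rng.integers(0, n_classes, n_sites) for y in range(2015, 2020)}
mapped = {y: np.where(rng.random(n_sites) < 0.8, reference[y],
                      rng.integers(0, n_classes, n_sites)) for y in reference}

acc = {y: overall_accuracy(reference[y], mapped[y]) for y in reference}
stability = max(acc.values()) - min(acc.values())
print(acc, f"accuracy range over the period: {stability:.3f}")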
This validation framework was applied to the validation of the Copernicus Global Land Service GLC product which includes annual GLC maps from 2015 to 2019. We developed a multi-purpose global validation dataset that is suitable for validating maps with 10-100m resolution for the reference year 2015 (Tsendbazar et al. 2018). As part of the operational validation, this dataset was updated to 2019 based on partial revision consisting of random and targeted revisions. The BFAST time series algorithm was used to target sample locations that are possibly changed during the update period. Additional sample sites were also collected to increase the sampling intensity in the land cover change areas.
Through this updating mechanism, we validated the annual GLC maps of the CGLS-LC100 product for 2015–2019. We further assessed the stability in class accuracy over this period. Implementation of this operational validation framework in the context of the Copernicus Global Land Service GLC product will be presented.
Since it is a multi-purpose dataset suitable for validating maps with 10-100m resolution, the validation dataset was further updated to the year 2020 to validate ESA’s WorldCover 2020 GLC map, which is based on Sentinel-1 and Sentinel-2 data at 10m resolution. The approach and results of validating the WorldCover 2020 GLC map will also be included in this presentation.
As more operational land cover monitoring efforts are upcoming, we emphasize the importance of updated map validation and recommend improving the current validation practices towards operational map validation so that long-term land cover maps and their uncertainty information are well understood and properly used.
Index Terms— land cover validation, operational monitoring, dataset update, global land cover
Related literature
Tsendbazar, N., Herold, M., Li, L., Tarko, A., de Bruin, S., Masiliunas, D., Lesiv, M., Fritz, S., Buchhorn, M., Smets, B., Van De Kerchove, R., & Duerauer, M. (2021). Towards operational validation of annual global land cover maps. Remote Sensing of Environment, 266, 112686
Tsendbazar, N.E., Herold, M., de Bruin, S., Lesiv, M., Fritz, S., Van De Kerchove, R., Buchhorn, M., Duerauer, M., Szantoi, Z., & Pekel, J.F. (2018). Developing and applying a multi-purpose land cover validation dataset for Africa. Remote Sensing of Environment, 219, 298-309
1. Introduction
Land cover is one of the essential climate variables (ECVs), as it is highly correlated with climate change. In this context, within the framework of the Climate Change Initiative (CCI) of ESA [1], the High Resolution Land Cover (HRLC) project aims to study the role of spatial resolution in the mapping of land cover and land-cover changes to support climate modelling research [2]. Land cover and related changes are indeed both cause and consequence of human-induced or natural climate changes. This has been demonstrated by the previous phase of the CCI program, focused on the generation of Medium Resolution (MR) land cover maps at global scale. Unlike the MR land cover CCI, which provided annual land cover maps at 300m resolution for the period 1992-2020 [3], the HRLC project produces regional maps characterized by a spatial resolution of 10m/30m. Moving from 300m to 30m requires the definition of new data analysis methods, reframing the perspective with respect to the MR project from both the theoretical and the operational viewpoints. Although HR potentially increases the capability for a detailed analysis of spatial patterns in the land cover, many challenges are introduced with respect to the MR case, and limitations in the available data make the development of products at very large scale very challenging.
This contribution presents the architecture and the methodologies developed for implementing the full processing chains that have been developed to process Earth Observation (EO) data and generate the HRLC products. The primary products of the project consist of: (i) HR land-cover maps at subcontinental scale at 10m as reference static input (generated for 2019 only) to the climate models, (ii) a long-term record of regional HR land cover maps at 30m in the regions identified for the historical analysis every 5 years (generated in the period 1990-2015), and (iii) change information at 30m at yearly scale consistent with historical HR land-cover maps.
2. Methodology
The development of the proposed architecture was based on the observation that the temporal availability of HR data in past and current archives is much lower than that of MR data and varies strongly across the years. Unlike the MR case (e.g., the SPOT-Vegetation archive), no daily acquisitions are available, and only in very recent years has a reasonably dense temporal sampling become possible, thanks to the Sentinel and Landsat-8 missions. Before them, the number of yearly images available in the archives drops dramatically (with Landsat Thematic Mapper, ASAR and ERS-1/2 being the most relevant data sources), resulting in a much more challenging problem for the development of HRLC products. This scenario led to a complex process to produce historical time series of products. Moreover, it required a shift in the processing paradigm, moving from the analysis of many images per year acquired at MR to a few images (for some areas and years, a single image or none is available) characterized by high spatial resolution.
To produce the land-cover maps, two multisensor (optical and SAR) processing chains have been designed and implemented: one exploits Sentinel-1 (S1) and Sentinel-2 (S2) images for the generation of maps at 10m resolution (used in the project for generating the 2019 products) (Figure 1), while the other generates historical maps every 5 years going back to 1990 by exploiting Landsat (Enhanced) Thematic Mapper images and ASAR and ERS-1/2 data (Figure 2). Both architectures share two pre-processing branches (one for optical and one for SAR data) and a fusion module for the final map production. The main differences between the two processing chains lie in the pre-processing techniques (which account for the large differences in data quality and availability between Sentinel and previous missions) and in the paradigm exploited for the classification mechanism. The S1/S2 architecture independently classifies the time series of images acquired in the target year (2019 in the project) and generates the land-cover products by fusing the classification results obtained independently on the two branches (optical and SAR) using consensus theory and Markov Random Field approaches [4]. The historical processing chain assumes as baseline the classification results generated with S1/S2 data and exploits the cascade classification paradigm [5] to properly model the temporal correlation between images when producing the historical land-cover maps. This mitigates the well-known problem of error propagation in multitemporal classification, which is extremely critical when multitemporal data are classified independently. The cascade classification paradigm is robust and theoretically well founded, as it is based on Bayesian decision theory. The classification techniques included in the optical and SAR branches include “shallow” machine learning techniques (Support Vector Machines, Random Forest) and specific SAR detectors focused on built-up and water-related classes [6]. Both architectures output uncertainty measures for the classification of each pixel in the map, as well as an indication of the second most likely class, for a better representation of the real, complex conditions on the ground. This is crucial information to be given as input to the climate modelling task when using the generated products. Specific methodologies have also been devised to support the definition of the training sets used for the 2019 and the historical image classifications [7].
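As a hedged sketch of the fusion idea (a simple linear opinion pool, one elementary consensus rule; the MRF spatial regularization of [4] is omitted and the weights are illustrative, not the project’s tuned values), per-pixel class posteriors from the optical and SAR branches can be combined before the final decision, also yielding the per-pixel uncertainty and second-best class mentioned above:

import numpy as np

def fuse(p_optical, p_sar, w_optical=0.6, w_sar=0.4):
    """p_* : (n_classes, H, W) posterior probabilities from each branch."""
    p = w_optical * p_optical + w_sar * p_sar
    labels = p.argmax(axis=0)                  # fused land-cover map
    uncertainty = 1.0 - p.max(axis=0)          # 1 - winning posterior
    second_best = np.argsort(p, axis=0)[-2]    # second alternative class
    return labels, uncertainty, second_best

rng = np.random.default_rng(3)
p_opt = rng.dirichlet(np.ones(8), size=(64, 64)).transpose(2, 0, 1)
p_sar = rng.dirichlet(np.ones(8), size=(64, 64)).transpose(2, 0, 1)
labels, unc, alt = fuse(p_opt, p_sar)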
To produce land-cover change maps every year, a third architecture has been defined (Figure 3) that is driven by the cascade classification output and is aimed at identifying the location in time of changes on a yearly basis. This makes it possible to localize in time the changes that occurred between the 5-yearly maps. The change detection products have been generated using optical data acquired by Landsat (Enhanced) Thematic Mapper. The main challenge is the very uneven distribution of the data available in different areas and in different years. This was addressed by defining an architecture based on a feature extraction module, a time series regularization module (based on a “shallow” neural network) and an abrupt change detection module [8]. The change detection products have associated reliability information for each pixel, expressed as the probability of change.
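The “localize in time” idea can be illustrated with a deliberately minimal sketch: on a per-pixel, regularized yearly feature series, the change year is taken where the absolute year-to-year jump is largest and exceeds a threshold. The project’s regularization network and detector [8] are far more elaborate; this only conveys the principle:

import numpy as np

def change_year(series, years, threshold):
    jumps = np.abs(np.diff(series))
    k = int(np.argmax(jumps))
    return years[k + 1] if jumps[k] > threshold else None

years = np.arange(1990, 2016)
series = np.r_[np.full(18, 0.7), np.full(8, 0.3)]   # abrupt drop in 2008
series += np.random.default_rng(4).normal(0, 0.02, series.size)
print(change_year(series, years, threshold=0.2))    # -> 2008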
3. Product generation and conclusion
The processing chain has been developed using Docker containers and with the requirement of being able to process large volumes of optical and SAR images. Processors have been fully integrated into Python-based pipelines that automatically retrieve the products needed for the specific task and perform the processing. The production has been run on the Amazon Web Services (AWS) cloud, although the processing chain is flexible and can be run on DIAS and other cloud infrastructures.
The Climate User Group involved in this project defined three large regions of particular interest for studying climate/LC feedbacks on three continents, covering different climates (tropical, semi-arid, boreal) and complex surface-atmosphere interactions that have a significant impact not only on the regional climate but also on large-scale climate structures. The three regions are the Amazon basin, the Sahel band in Africa and the northern high latitudes of Siberia.
The products generated over the three areas achieve accuracies that, given the complexity of the task and of the land-cover class legend (which also includes seasonal classes), are satisfactory (see the “ESA CCI High Resolution Land Cover Products” presentation).
References
[1] ESA – European Space Agency: ESA Climate Change Initiative description, EOP-SEP/TN/0030-09/SP, Technical Note – 30 September 2009, 15 pp., 2009.
[2] L. Bruzzone et al, "CCI Essential Climate Variables: High Resolution Land Cover,” ESA Living Planet Symposium, Milan, Italy, 2019.
[3] P. Defourny et al (2017). Land Cover CCI Product User Guide Version 2.0. [online] Available at: http://maps.elie.ucl.ac.be/CCI/viewer/download/ESACCI-LC-Ph2-PUGv2_2.0.pdf
[4] D. Tuia, M. Volpi, G. Moser, “Decision fusion with multiple spatial supports by conditional random fields,” IEEE Trans. Geosci. Remote Sens., 56, 2018
[5] L. Bruzzone, R. Cossu, “A multiple-cascade-classifier system for a robust and partially unsupervised updating of land-cover maps,” IEEE Trans. Geosci. Remote Sens., Vol. 40, 2002.
[6] T. Sorriso, D. Marzi, P. Gamba, “A General Land Cover Classification Framework for Sentinel-1 SAR Data,” in Proc. of the 2021 IEEE 6th Int. Forum on Research and Technology for Society and Industry (RTSI), Napoli, 2021.
[7] C. Paris, L. Orlandi, L. Bruzzone, “A Strategy for an Interactive Training Set Definition based on Active Self-Paced Learning,” IEEE Geosci. Remote Sens. Letters, Vol. 17, 2021.
[8] Y.T. Solano-Correa, K. Meshkini, F. Bovolo, L. Bruzzone, “A land cover-driven approach for fitting satellite image time series in a change detection context,” SPIE Conf. on Image and Signal Processing for Remote Sensing XXVI, 2020.
List of the other HRLC team members: C. Domingo (CREAF), L. Pesquer (CREAF), C. Lamarche (UCLouvain), P. Defourny (UCLouvain), Th. Castin (UCLouvain), L. Agrimano (Planetek), A. Amodio (Planetek), M. A. Brovelli (PoliMI), G. Bratic (PoliMI), M. Corsi (eGeos), P. Pistillo (eGeos), F. Ronci (eGeos), M. Riffler (GeoVille), D. Kolitzus (GeoVille), C. Ottlé (LSCE-IPSL), Ph. Peylin (LSCE-IPSL), R. San Martin (LSCE-IPSL), V. Bastrikov (LSCE-IPSL).
With the rise of distributed constellations (Landsat, Sentinel, VIIRS, MODIS, Planet, Airbus, Maxar) there has been a general push to make Earth observation data interoperable. This has led to the notion of harmonized data products. Moreover, there is a strong incentive for these data sources to be combined into virtual constellations to achieve high revisit rates for the resulting sensor fusion products. Distributed constellations of small commercial satellites can deliver data that is higher in information density and radiometrically accurate when paired with traditional missions. Today, the need for high-cadence time series to quantitatively leverage bio-optical models of vegetation and to measure the impact of human activity is driven by the urgency to measure the environmental dimensions of “sustainable development.”
Under the sponsorship of the European Union’s Horizon 2020 programme, we are exploiting the notion of a virtual constellation for the purpose of updating and maintaining the CORINE (CLC) land cover product, the flagship of the Copernicus Land Monitoring Service (CLMS). In our approach, we fuse global daily imagery from the PlanetScope contributing mission with Landsat 8, Sentinel-2, VIIRS and MODIS to create cloud-free, harmonized, 3 m resolution, near-daily time series covering three full years starting from the latest CLC release: 2018, 2019, 2020. We sample half a million data cubes across the entire territory of the EU. We sample from each country proportionally to its surface area and perform stratified sampling with respect to the 44 CLC land cover classes. We label the data cubes based on the land cover classes present at each location in the 2018 reference year, and we use this corpus to train machine learning models that can learn to recognize the “pulse” of land cover types and detect changes on a short time scale in subsequent years.
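The sampling logic described above can be sketched as follows (a minimal illustration: country areas, totals and the candidate-site labelling are placeholders, not the project’s actual allocation): sites are first allocated across countries proportionally to surface area, then drawn within each country stratified by CLC class frequency:

import numpy as np

def allocate(total_sites, areas):
    """areas: {country: km2} -> {country: n_sites}, proportional allocation."""
    total_area = sum(areas.values())
    return {c: max(1, round(total_sites * a / total_area)) for c, a in areas.items()}

areas = {"FR": 643_801, "DE": 357_022, "EE": 45_227}   # illustrative subset
sites_per_country = allocate(500_000, areas)

def stratified_sample(labels, n, rng):
    """Sample ~n candidate-site indices stratified by CLC class."""
    idx = []
    classes, counts = np.unique(labels, return_counts=True)
    shares = (n * counts / counts.sum()).astype(int)
    for c, k in zip(classes, shares):
        pool = np.flatnonzero(labels == c)
        idx.append(rng.choice(pool, size=min(max(k, 1), pool.size), replace=False))
    return np.concatenate(idx)

rng = np.random.default_rng(6)
labels = rng.integers(0, 44, 1_000_000)   # stand-in CLC class per candidate site
sample = stratified_sample(labels, sites_per_country["FR"], rng)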
The challenge is to develop novel AI architectures that can properly exploit the unprecedentedly high spatiotemporal resolution of these data streams and provide new insights into land cover dynamics. Exploratory data analyses show that 3 m daily time series of spectral indices such as NDVI are powerful indicators of biodiversity and excellent discriminators of land cover types. For instance, simple clustering of these fine temporal signatures can lead to segmentation of tree species at the crown level, based on intraspecific and interspecific variations in leaf phenology measured in early spring and throughout the fall season, leading to better assessments of forest composition. The same can be said about the phenometrics of agricultural crops and wild vegetation in general when captured at this scale. The high temporal cadence helps us improve our understanding of land use and of challenging habitats such as wetlands, grasslands and pastures.
Our baseline models are supervised classification models that employ Convolutional Neural Networks (CNNs) for spatial encoding of class distributions and Recurrent Neural Networks (RNNs) for modelling their temporal evolution. Of particular interest is the development of weakly supervised or fully unsupervised spatiotemporal deep learning models that can learn to disentangle structural change from phenology based on multi-year observations and in the absence of labels. These constitute our more advanced models and rest on methodologies for self-supervised image representation learning. We compare and evaluate the potential impact of all these architectures for more continuous updates of the CLC product by showing results drawn from large regions of interest in Europe.
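A minimal sketch of such a baseline (assuming PyTorch; layer sizes, the GRU choice and the chip geometry are illustrative assumptions, not the project’s architecture) encodes each date’s image chip with a small CNN, models the sequence of embeddings with an RNN, and predicts CLC class logits:

import torch
import torch.nn as nn

class CnnRnnClassifier(nn.Module):
    def __init__(self, bands=4, n_classes=44, emb=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(bands, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, emb, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # one embedding per date
        )
        self.rnn = nn.GRU(emb, emb, batch_first=True)
        self.head = nn.Linear(emb, n_classes)

    def forward(self, x):                       # x: (B, T, bands, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)   # (B*T, emb)
        _, h = self.rnn(feats.view(b, t, -1))          # h: (1, B, emb)
        return self.head(h.squeeze(0))                 # (B, n_classes)

logits = CnnRnnClassifier()(torch.randn(2, 12, 4, 32, 32))  # 12-date series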
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 101004356, Copernicus evolution: Research activities in support of the evolution of the Copernicus services.
The European Commission has been working to maximize the adoption of Copernicus data and information in Europe, to increase its socio-economic benefits for European citizens and businesses, and to support the competitiveness of the European Earth Observation industry. To achieve these objectives, it is essential to continue and optimize the Copernicus User Uptake activities.
Copernicus User uptake initiatives include training and skills development, such as the Copernicus Skills Programme, EO4GEO and MOOCs (Massive Open Online Courses), thematic trainings and workshops. Copernicus Academies and Relays have been in place since 2017 as local information and coordination points to facilitate activities around Copernicus, its benefits, and opportunities for local residents and businesses.
In addition, each of the six core services develops and implements user uptake activities regularly within their domains – Land, Marine, Climate, Atmosphere, Emergency response and Security. Copernicus Member States also initiate numerous user uptake activities, both through the Copernicus Framework Partnership Agreement (FPA) and in national capacities.
Since 2021, the European Union Agency for the Space Programme has been in charge of developing the downstream market and fostering innovation for Copernicus commercial users. The Commission is currently starting the work to formulate a new user uptake strategy for the entire EU Space Programme, based on earlier assessments and recommendations.
Copernicus data and products are widely used for monitoring and reporting purposes, and are useful in the implementation of several EU policies and directives, such as the Green Deal, the EU Zero Pollution Action Plan, the Methane Strategy and the Global Methane Pledge, the new EU Arctic Policy and more. The Knowledge Centre on Earth Observation (KCEO), launched in April 2021, serves as a focal point to make use of EO data for EU policymaking decisions and implementation.
This presentation will cover the main pillars of the Copernicus User Uptake activities, future plans and strategies.
With the creation of the first-ever integrated Space Programme, the European Union is reinforcing its strategy to harness the power of space to re-ignite its post-COVID economy, address climate change, transition to digitalization, and secure its autonomy and sovereignty.
In 2021 the European Union Agency for the Space Programme (EUSPA) was created, bringing all EU space activities under one roof and enabling space to contribute effectively to the priorities of the European Union agenda.
EUSPA is also responsible for the development of downstream markets and fostering of innovation based on Galileo, EGNOS, and now also for the commercial users of Copernicus, leveraging funding mechanisms such as Fundamental Elements and Horizon Europe.
In particular, bringing the management of downstream and combined applications based on Galileo, EGNOS and Copernicus under the umbrella of one agency will make it increasingly possible to leverage synergies. On their own, these technologies can play a key role in supporting a digital and green transformation, but leveraging their synergetic and combined use will facilitate the generation of innovative solutions with a higher societal impact.
EUSPA, as an EU user-oriented agency, makes sure that these challenges are addressed through the design and development of new EU space-based services which meet the needs of the users, while ensuring their market uptake.
EUSPA creates opportunities for EU companies to explore new markets, through research and development initiatives, grants and prizes to enable new business opportunities and connect them with private investors and venture capitalists for the necessary financing capability to jump-start their business cases.
In collaboration with the European Commission, ESA, the Copernicus Entrusted Entities and all other relevant actors of the earth observation ecosystem, EUSPA will launch a series of activities to foster the adoption of Copernicus data and services across different market segments. These activities will complement and leverage existing market development efforts and promote the creation of innovative solutions making use of EU Space services towards a sustainable green and digital transition.
Earth Observation (EO) is the measurement of the physical, chemical and biological systems of our planet. It delivers proxies of measurable parameters detected via sensors (optical, radar, laser) and influenced by a variety of factors (e.g. clouds, atmosphere, solar winds). Earth observation includes airborne and satellite-based measurements as well as in situ sensor data (e.g. air or water temperature) for the purpose of calibration.
Using EO data for statistical purposes can contribute to survey methodology, analytical possibilities, timeliness, spatial coverage and semantic harmonisation. Several statistical domains, such as tourism, transport (port activities, lorry traffic) and production estimates (e.g. oil storage, cars, agriculture), have the potential to use Earth observation data for statistical data production.
The presentation will outline a selection of EO use cases at Eurostat and across the European Statistical System (ESS) - the national statistical data providers - at large.
Eurostat provides the LUCAS survey micro-data on land cover and land use, as well as environmental information, serving as in-situ observations for EO; these have been used by Member States to produce classified Copernicus maps based on a subset and to validate the results (EEA, 2015). Additionally, for the reporting of the Sustainable Development Goals (SDG), EO-derived information is used for a restricted set of parameters. Lastly, a variety of EO information from space and airborne sensors is provided centrally by GISCO to facilitate spatial analysis and visual interpretation.
Eurostat facilitates the usage of Earth observation at ESS level in the National Statistical Systems through the ESSnet Big Data programme, and with individual Member States through the GEOS grants. Examples presented range from classical approaches, such as land use/land cover mapping, determining urban sprawl, and crop recognition and yield prediction systems, up to advanced systems using artificial intelligence and machine learning to detect solar cells on roofs in support of knowledge on the transformation of energy generation (SDG). In addition, some NSIs have managed relevant projects on their own budget to test early the opportunities and implications of using Earth observation data on similar topics. To increase internal knowledge, dedicated sessions are integrated into the European Statistical Training Programme (ESTP) course programme.
Sentinel data are increasingly being used to support decision-making at different levels. Mandated to contribute to the European Green Deal amid increasingly complex challenges, European public authorities can leverage Copernicus as an effective tool to make informed decisions, enforce environmental policies and build more sustainable and resilient lifestyles for European citizens. For instance, the use of the data helps them to improve efficiency, save costs, improve coordination and communication with the public, deter unlawful behaviours, and assess potential risks. Often, the synoptic and regular views provided by the Sentinels make it possible to improve the environmental monitoring capabilities of the agencies in charge beyond what is currently possible.
Successful use cases are increasingly gathered and published for Copernicus, collectively supporting the evidence of the benefits brought by the use of EO/Copernicus data. Although each case has its own specificities, the analysis of common processes and challenges can greatly deepen the understanding of the nature of the value of EO-derived information and improve the design of services and user uptake strategies.
In this presentation, we will provide an overview of two relevant initiatives procured by ESA and the EC.
First, the “Copernicus4regions” initiative will be presented. This is an initiative undertaken in cooperation with the Network of European Regions using Space Technologies (NEREUS) and provides an outstanding collection of 99 examples gathered from European regional and local authorities (see http://www.nereus-regions.eu/copernicus4regions/). Each story is characterised by a usage maturity level defined by the primary user.
After this, an overview of the latest findings from the Sentinel Benefits Study will be presented. This study is managed by the European Association of Remote Sensing Companies and performs detailed impacts assessments for selected use cases along complete value chains (i.e. from data distribution to data exploitation to society at large), with particular focus on cases of benefits for public administrations (https://earsc.org/sebs ).
The Twin ANthropogenic Greenhouse Gas Observers (TANGO) mission is a pioneering satellite mission comprising two satellites, TANGO-Carbon and TANGO-Nitro, flying in loose formation. The mission concept is developed through ESA’s SCOUT programme and is currently being consolidated to be ready for a timely launch. TANGO envisages a unique European contribution to monitoring, globally and independently, the emission of anthropogenic greenhouse gases over the period 2025-2028. To this end, breakthrough technology will be used to quantify emissions of the greenhouse gases methane (CH4) and carbon dioxide (CO2) at the level of individual industrial facilities and power plants. The mission will demonstrate a distributed monitoring system that can pave the way for future larger constellations of small satellites allowing for enhanced coverage and temporal resolution. The TANGO mission consists of two agile small satellites, each carrying one spectrometer. The first satellite measures spectral radiances in the shortwave infrared part of the solar spectrum (1.6 µm) to determine moderate to strong emissions of CH4 (≥ 10 kt/yr) and CO2 (≥ 5 Mt/yr). The instrument has a field of view of 30 x 30 km2 at spatial resolutions small enough to monitor individual large industrial facilities (≤300 x 300 m2), with an accuracy sufficient to determine emissions on the basis of a single observation. Using the same strategy, the second satellite yields collocated NO2 observations from radiance measurements in the visible spectral range, supporting plume detection and exploiting the use of the CO2/NO2 ratio. In essence, TANGO will provide surface fluxes of specific emission types based on the combination of CH4, CO2 and NO2 observations at high spatial resolution, following a strictly open data policy. Mission operation will be based on scientific input for target selection. In doing so, TANGO aims to uniquely complement the large current and planned Copernicus monitoring missions, such as Sentinel-5(P) and the CO2M mission, by providing unrivalled high-resolution monitoring of the major anthropogenic greenhouse gas emissions on a regular basis.
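As a hedged illustration of how a point-source emission can be quantified from a single overpass, the sketch below uses the cross-sectional flux method, one approach common in the plume-imaging literature, not necessarily TANGO’s operational algorithm: the column enhancement is integrated along a transect across the plume and multiplied by the wind speed at plume height. All numbers are synthetic:

import numpy as np

u = 5.0                     # wind speed at plume height, m/s (assumed known)
dx = 300.0                  # along-transect pixel size, m (TANGO-like scale)
x = np.arange(-15, 16) * dx
enhancement = 1.5e23 * np.exp(-(x / 2000.0) ** 2)  # CO2 column excess, molec/m2

M_CO2 = 44.01e-3 / 6.022e23                 # kg per CO2 molecule
line_density = np.trapz(enhancement, dx=dx)  # molec/m across the plume
q_kg_per_s = u * line_density * M_CO2
print(f"~{q_kg_per_s * 3.15576e7 / 1e9:.1f} Mt CO2/yr")   # ~6.1 Mt/yr

The collocated NO2 image would, in practice, help detect the plume shape and mask the transect, which is one motivation for the CO2/NO2 pairing described above.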
Monitoring and understanding the Earth’s magnetic field and the ionospheric environment is key for both fundamental science and multiple applications. The Earth’s magnetic field protects our planet from incoming energetic charged particles and organizes the way the near outer space (the magnetosphere) and the ionized upper layers of the atmosphere (the ionosphere) respond to solar activity. This response can produce strong magnetic signals that can affect ground technology such as power transmission networks, radiation hazards that can affect satellites in the near outer space, and multiple ionospheric perturbations that can severely affect radio transmissions, radars and GNSS systems (hazards collectively known as space weather hazards). Monitoring Earth’s magnetic field and ionospheric environment is crucial for investigating all these phenomena. Identifying and understanding Earth’s magnetic field multiple sources is also crucial to aid precise navigation, reveal properties of the shallow and deep Earth, and provide key information for geophysical surveying for minerals.
The very successful on-going ESA Earth Explorer Swarm constellation revealed the considerable science value of using a well-conceived satellite constellation for such investigations. Building on Swarm’s achievements, NanoMagSat has been designed to demonstrate the ability of New Space technology to bring such studies to the next level of success.
The constellation will consist of an innovative low-Earth orbit (LEO) constellation of three 16U nanosatellites, with a current baseline of two 60°-inclined orbits and one polar orbit, allowing much faster local time coverage of all geographic locations up to 60° North and South latitudes than is currently possible with Swarm. This constellation would also allow even better coverage, should NanoMagSat be launched while Swarm is still in operation. Each satellite will carry an identical payload consisting of an advanced Miniaturized Absolute scalar and self-calibrated vector Magnetometer (MAM) combined with a set of precise star trackers (STR), a compact High-frequency Field Magnetometer (HFM), a multi-needle Langmuir Probe (m-NLP) and dual-frequency GNSS receivers. This payload will allow the production of absolute vector magnetic data at 1 Hz sampling, scalar and vector magnetic data at 2 kHz sampling, electron density data at 2 kHz sampling, electron temperature data at 1 Hz sampling, as well as TEC and ionospheric radio-occultation data.
As already demonstrated in the context of the NanoMagSat Scout consolidation phase carried out in 2020, this combination of data, particularly those acquired at higher rates than on Swarm, and the proposed new constellation configuration, will allow improvement of the type of monitoring and investigations Swarm (and previous missions such as Oersted and CHAMP) achieved, also bringing entirely new science opportunities.
Science primary objectives begin with the precise recovery of the field produced by the geodynamo within the Earth’s core. This field, also known as the Earth’s main field, is critical for characterizing and understanding the way the Earth reacts to the incoming flux of energetic charged particles (the solar wind). Its precise knowledge is also used for many practical applications, both ground-based and space-borne. Recovering its fast dynamics is a top priority, as our still-limited knowledge of these dynamics severely hampers present efforts, relying on data assimilation and advanced numerical dynamo models, to predict the main field evolution at the level requested by users. Twenty years of space-borne observations by Oersted, CHAMP and Swarm, combined with ground observation data, have allowed great progress, making it possible to study inter-annual changes as well as abrupt changes known as geomagnetic jerks. Thanks to recent progress in numerical dynamo simulations, we now know that further observations are needed to fully characterize and understand these phenomena, since much faster changes can occur. It has not been possible to study such rapid variations with the existing satellite constellations. NanoMagSat will have the ability to considerably improve the situation by capturing core field signals with periods as short as three months.
A second set of primary objectives will be to also improve our ability to recover fast changing planetary scale ionospheric and magnetospheric fields. These also need to be better monitored and understood. The mid and low latitude ionospheric field typically varies on a daily and seasonal basis, but significant day-to-day variability occurs in response to solar activity. In contrast to Swarm, NanoMagSat will have the ability to recover such variability. The magnetospheric field shows even stronger and faster dynamics. The ability of the NanoMagSat constellation to cover all local time scales at mid-latitudes over its orbital period will also make its recovery much easier. Characterizing both these fields, and the companion fields produced by the electrical currents they induce in the solid Earth, will not only help understand the way the Earth responds to solar activity, but also help reconstruct the still poorly known conductivity structure of the solid Earth.
A third set of primary objectives will take advantage of the innovative payload combination of NanoMagSat to investigate the ionospheric environment. As demonstrated by the experimental “burst mode” of the absolute magnetometers (ASM) on board the Swarm satellites (scalar data acquired at 250 Hz), whistlers produced by lightning strikes in the neutral atmosphere can be detected at LEO altitude and used to sound the state of the ionosphere below the satellites. NanoMagSat will have an extensive ability to do so even better, thanks to the 2 kHz sampling rate of its vector magnetometers. Such information, together with the TEC data, ionospheric occultation data (which Swarm lacks) and local electron density data, will provide a powerful means to monitor the state of the ionosphere, to both investigate its dynamics and improve ionospheric models, such as the International Reference Ionosphere (IRI) model. Investigation of the local smaller-scale dynamics of the ionosphere will also be made possible thanks to the joint use of 2 kHz sampling vector magnetic and electron density data. This will provide access to ionospheric meter-scale plasma density structures and allow monitoring of the electrical currents testifying to the energy input that feeds them from the magnetosphere.
Additional secondary, but equally important, science objectives have also been identified. Some are already addressed by Swarm, but would considerably benefit from longer (ideally permanent) satellite observations, and could also greatly benefit from the joint use of NanoMagSat and Swarm data, should NanoMagSat be launched while Swarm is still in operation. This is the case for the magnetic signals produced by oceanic lunar tides. NanoMagSat would allow these minute signals to be recovered faster and more accurately. As already demonstrated, these signals can be used to sense the electrical conductivity of the uppermost regions of the solid Earth. In the long term, they could also potentially be used to assess the evolution of the temperature of the oceans (the magnitude of the tidal signals being sensitive to ocean temperature), thus contributing to the monitoring of one key parameter of global climate change. Additional signals produced by the global ocean circulation and its variability could also potentially be investigated, at time (month to interannual) and length scales expected to be accessible with NanoMagSat.
Another important secondary science objective that could benefit from NanoMagSat, especially if operated jointly with the Swarm constellation, is the recovery of the magnetic field signals produced by magnetized rocks within the lithosphere. Maps of these provide invaluable information about the nature and thermal state of the lithosphere and its deep-seated rocks, as well as about their tectonic history. This objective requires making the best of all missions, as it benefits from the accumulation of data over long periods. The Swarm constellation (with two satellites side-by-side) was optimally designed for that purpose. However, the much better local time coverage provided by NanoMagSat could be taken advantage of to better remove signals produced by all other sources and assist in better isolating this lithospheric signal.
Many additional possible secondary objectives are also now under study, thanks to the support of the science community, following a dedicated NanoMagSat brainstorming session organized in Athens in October 2021, as a follow-up of the 11th Swarm Data Quality Workshop. In particular, exciting ideas have emerged that could take advantage of the ability of the NanoMagSat payload to study ionospheric-magnetospheric dynamics, as a stand-alone mission or in conjunction with other missions.
In this invited talk, we will strive to illustrate the numerous science objectives NanoMagSat could achieve, also reporting on E2E simulations that will have been run by the time of the meeting. Following successful negotiations with ESA, the NanoMagSat project will soon (January 10, 2022) kick off a new technical phase of Risk Retirement Activities for a period of 18 months; we will also report on how we have started backing up this phase with appropriate science preparation activities. All scientists willing to contribute to this effort to further enhance the science return of the NanoMagSat mission and demonstrate the potential of New Space for such science are warmly welcome to join. Beyond this initial mission, NanoMagSat could indeed be used as a stepping-stone for permanent low-cost monitoring of the Earth’s magnetic field and ionospheric environment.
Scout missions are a new Element in ESA’s FutureEO Programme, demonstrating science from small satellites. The aim is to tap into the New Space approach, targeting three years from kick-off to launch within a budget of €30m, including launch and commissioning of the space and ground segments. The Scout missions are non-commercial and scientific in nature; data will be made available freely using a data service delivery approach. HydroGNSS has been selected as the second ESA Scout Earth Observation mission, primed by Surrey Satellite Technology Ltd, with support from a team of scientific institutions - Sapienza University of Rome, Institute for Space Studies of Catalonia, Finnish Meteorological Institute, Tor Vergata University of Rome, “Nello Carrara” Institute of Applied Physics, National Oceanography Centre and University of Nottingham. The microsatellite uses established and new GNSS-Reflectometry techniques to retrieve four land-based hydrological climate variables: soil moisture, freeze/thaw, inundation and biomass. The initial project is for a single satellite in a near-polar sun-synchronous orbit at 550 km altitude that will approach global coverage monthly, but an option to add a second satellite has been proposed that would halve the time to cover the globe, and eventually a future constellation could be affordably deployed to achieve daily revisits.
GNSS Reflectometry has been developing rapidly as an L-Band remote sensing technology suitable for small satellites, a form of passive bi-static radar that uses GNSS satellites as radar sources, and it continues to find applications over ocean, ice and land. The UK TechDemoSat-1 and NASA CYGNSS missions were primarily flown to target wind speed over the ocean but enabled demonstration of the potential for cryospheric and hydrological applications. Over rough surfaces, for example oceans, a resolution of approximately 25 km is expected, but over flat surfaces such as sheltered rivers, the signal becomes coherent and the resolution achieved can be better than 1 km, while the resolution achievable over land varies between 1 km and 25 km depending on the characteristics of the terrain. The ESA Scout-2 HydroGNSS mission will use established and new measurements, including Galileo signals, coherent reflection sensing, dual polarisation and dual frequency reflectometry exploration to improve the resolution and separation of hydrological measurements. Although targeting land, measurements will also be collected over ocean and ice, with the prospect of a number of secondary applications.
The instrument is a development based on technology flown on the TechDemoSat-1 and CYGNSS missions, upgraded to allow for additional measurements. The nadir antenna uses a 2 x 2 array of metal patches to implement an efficient dual-frequency, dual-polarisation antenna. Low-noise amplifiers incorporate cavity filters and calibration switches, and the receiver is able to collect multiple GPS and Galileo reflections simultaneously at dual polarisation and at dual frequencies, L1/E1 and L5/E5. Signal processing uses open-loop predictions to target reflections at each specular point and collect measurements in the form of Delay Doppler Maps (DDMs). The platform is a 55 kg implementation of the SSTL-Micro, incorporating many features advantageous to a GNSS reflectometry mission. Accurate and agile attitude capability is enabled by an attitude control system using star trackers and wheels. Propulsion allows for collision avoidance and for the phasing and maintenance of a future constellation. Large-capacity data storage and a high-rate X-band downlink allow acquisition of targeted raw sampled captures in addition to the routine on-board processed DDMs.
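To give a feel for how a calibrated DDM relates to a surface property, here is a hedged sketch of coherent GNSS-R calibration: surface reflectivity from the peak DDM power via the bistatic radar equation for the coherent (specular) case. This is a textbook form, not the mission’s actual Level 1 algorithm, and every input value is an illustrative placeholder:

import math

def coherent_reflectivity(p_rx, eirp_w, g_rx_db, wavelength_m, r_tx_m, r_rx_m):
    """Gamma = P_rx (4*pi)^2 (R_tx + R_rx)^2 / (EIRP * G_rx * lambda^2)."""
    g_rx = 10 ** (g_rx_db / 10)
    return (p_rx * (4 * math.pi) ** 2 * (r_tx_m + r_rx_m) ** 2
            / (eirp_w * g_rx * wavelength_m ** 2))

gamma = coherent_reflectivity(
    p_rx=2e-16,            # calibrated peak reflected power at receiver, W
    eirp_w=500.0,          # GPS L1 EIRP, roughly 27 dBW (illustrative)
    g_rx_db=13.0,          # nadir antenna gain, dB (illustrative)
    wavelength_m=0.1903,   # GPS L1
    r_tx_m=2.0e7,          # transmitter-to-specular-point range, m
    r_rx_m=5.5e5,          # specular-point-to-receiver range, m (550 km orbit)
)
print(f"surface reflectivity ~ {gamma:.3f}")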
The ground segment makes use of the KSAT station in Svalbard for the telemetry, telecommand and control link as well as for the payload downlink, where the frequent high-latitude passes could be an enabler for fast data availability for future weather applications. SSTL’s ground station in Guildford is available as backup. The data are processed by the Payload Data Ground Segment (PDGS), which prepares the different levels of products. The Level 1 data comprise GNSS DDMs and coherent measurements, and are made available with sufficient metadata for calibration and recovery of surface reflection coefficients at the specular reflection points. Level 2 operational processors are supplied to the PDGS by scientific partners; these will allow the operational retrieval of the climate variables soil moisture, inundation, freeze/thaw and biomass, plus the secondary ocean products of wind speed and sea ice extent. Level 1 and Level 2 products will be shared publicly with registered users over the web using a platform similar to “MERRByS”, which shared the TechDemoSat-1 data.
The project is now underway – the HydroGNSS mission contract was signed between ESA and SSTL on 11th October 2021, and a ride-share launch in 2024 is currently being negotiated. Upon launch, there will be a concerted effort to commission the satellite, payload and PDGS and undertake preliminary validation of all the Level 2 products within 6 months. HydroGNSS will continue to explore the technique of GNSS-Reflectometry while laying the foundations for a future constellation offering high spatial-temporal resolution observations of the Earth’s weather and climate.
HydroGNSS is a mission concept selected by ESA in 2021 as the second Scout small satellite in the frame of the FutureEO Earth Observation programme. The satellite is planned to be launched in 2024 after a 3-year development phase and is being developed by Surrey Satellite Technology Ltd, supported by a scientific team composed of Sapienza University of Rome, IEEC in Barcelona, Spain, NOC in Southampton, UK, the Finnish Meteorological Institute, CNR/IFAC in Sesto Fiorentino, Italy, and Tor Vergata University of Rome.
The mission targets the hydrological cycle and is based on the GCOS requirements for monitoring Essential Climate Variables (ECV). Water is a natural resource vital to climate, weather, and life on Earth. It manifests itself in or on the land in different ways, for example, moisture in the ground, wetlands and rivers, snow and ice, and vegetation density. Global knowledge of land water content and state is important in its different forms for many reasons:
• Soil moisture knowledge is needed for weather forecasting, hydrology, agricultural analysis, and wide-scale flood prediction
• Freeze/thaw state is an important variable that helps understand permafrost behaviour at high latitudes, a key issue in climate change as a methane source
• Above-ground biomass feeds into the understanding of carbon stocks in forests, a sink in the carbon dioxide cycle, and also has a coupling to biodiversity
• Wetlands are fragile ecosystems that can also be sources of methane, and are often hidden under forest canopies
Increasingly complex and accurate models are used to characterise and forecast hydrological processes. Earth System Models (ESMs) are used for climate, and Numerical Weather Prediction (NWP) models for weather forecasting. A better knowledge of these processes requires hydrological observational data to be assimilated into models to ensure correspondence with the complexity of the real world.
The HydroGNSS mission is based on the GNSS Reflectometry (GNSS-R) technique, which collects the signal transmitted by the navigation satellites and scattered by the Earth’s surface in the near-specular direction. GNSS-R has consolidated applications for ocean monitoring, which have been demonstrated by past and current missions (UK TechDemoSat-1, NASA CYGNSS), but land applications have also been demonstrated. Specifically, the target Level 2 products for HydroGNSS are surface soil moisture, inundation/wetland, freeze/thaw state and forest above-ground biomass, which are partially coincident with ECVs (moisture and biomass) or strictly related to them (freeze/thaw and wetland).
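As a hedged illustration of why L-band specular reflections are sensitive to soil moisture, the sketch below shows the Fresnel power reflectivity growing with the soil dielectric constant. A toy linear dielectric model stands in for a proper mixing model (e.g., Topp or Mironov), so the numbers are indicative only:

import numpy as np

def reflectivity_nadir(eps):
    """Fresnel power reflectivity at normal incidence for dielectric eps."""
    r = (1 - np.sqrt(eps)) / (1 + np.sqrt(eps))
    return np.abs(r) ** 2

soil_moisture = np.array([0.05, 0.15, 0.25, 0.35])   # volumetric, m3/m3
eps = 3.0 + 25.0 * soil_moisture                      # toy dielectric model
for mv, g in zip(soil_moisture, reflectivity_nadir(eps)):
    print(f"mv={mv:.2f} -> reflectivity={g:.2f}")     # rises from ~0.12 to ~0.30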
This presentation illustrates the products that will be delivered by HydroGNSS, the requirements dictated by GCOS, and the performance expected from HydroGNSS based on the state of the art of the GNSS-R technique. The development of an end-to-end simulator is a first step towards demonstrating that the mission is capable of achieving Scientific Readiness Level 5, and the achievements of the research group in this field will be presented.
Earth is changing at an unprecedented pace. Awareness of the socio-economic impact of climate change has also been growing and so has the international consensus on the urgency of understanding and limiting the impacts. To do this successfully, we need to understand and quantify the processes driving the change and its future, and particularly the role played by the atmosphere.
The composition of the Upper Troposphere and Stratosphere (UTS) plays a significant role in controlling the Earth’s climate, but there are still poorly explored feedbacks within the Earth System. This region is coupled to the surface and the free troposphere both dynamically and radiatively. Its composition is strongly affected by anthropogenic emissions of greenhouse gases (GHG) and pollution precursors, and is subject to changes via radiative, dynamical, and chemical processes. As the Earth’s atmosphere is changing, so is the UTS. The rapid vertical and horizontal variations in the abundances of trace gases and aerosol around the tropopause, as well as strong and differing trends either side of the tropopause, have made trend detection challenging in this region with a knock-on effect on estimates of their climate impact.
The overall goal of CubeMAP is to study, understand, and quantify processes in the UTS, study its variability, and contribute to analysis of trends in its composition and the resulting effects on climate and vice-versa. The mission focuses on tropical regions, which are most critical for UTS processes, as the strong vertical transport in these latitudes means this is effectively the “gateway to the stratosphere”.
The space sector is also changing at a fast pace, which ought to benefit innovation for science missions. CubeMAP embraces a high level of innovation and studies the UTS processes with a unique combination of high accuracy, high vertical resolution, and enhanced geographical and temporal coverage, owing to a novel sounding approach that leverages small satellite benefits. These measurement characteristics are achieved by measuring high spectral resolution atmospheric transmission spectra at the limb, exploiting the high signal to noise ratio delivered by solar occultation. The limited coverage usually associated with solar occultation limb sounding is obviated through the deployment of a nanosatellite constellation of identical sounders, equally distributed in one orbital plane.
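For readers less familiar with solar occultation, the measured quantity is essentially an atmospheric transmittance spectrum along the limb path. In its simplest Beer-Lambert form (a minimal statement of the underlying relation, not CubeMAP’s full forward model, which must also account for refraction and instrument effects):

T(\nu) \;=\; \frac{I(\nu)}{I_0(\nu)} \;=\; \exp\!\left(-\int_{\text{limb path}} \sum_{g} \sigma_g\big(\nu, p(s), T(s)\big)\, n_g(s)\,\mathrm{d}s\right)

where I_0 is the exo-atmospheric solar spectrum, sigma_g the pressure- and temperature-dependent absorption cross section of gas g, and n_g its number density along the path; retrieving the n_g profiles from T(nu) at successive tangent heights is what yields the high vertical resolution cited above.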
The mission makes use of GOMSpace 12U cubesat platforms, with electric propulsion and enhanced pointing control. The CubeMAP fleet consists of three spacecraft. Each spacecraft carries three thermal-infrared laser heterodyne spectrometers for high spectral resolution atmospheric transmittance spectroscopy, miniaturized using hollow-waveguide integration technology. The scientific payload also includes a solar disc imager, required for determining the pointing knowledge of each spacecraft, which also provides hyperspectral atmospheric transmittance data over 16 channels in the visible and infrared by using a multispectral focal-plane mosaic array.
In addition to its immediate scientific objectives, CubeMAP contributes to developing a novel resilient approach to atmospheric observation to be scaled up: it offers a high level of deployment flexibility and system modularity, as well as economy of scale, through the use of identical payloads and platforms but targeted towards specific Earth observation goals. CubeMAP is highly complementary to the existing nadir sounding infrastructure and will add value by enhancing its outputs and exploitation.
The chemical composition of the upper troposphere and lower stratosphere – the UTLS – is spatially and temporally highly variable and determined by a range of factors such as transport, chemistry, and tropopause dynamics. The UTLS is also thought to be very sensitive to climate change. The tropopause in particular has been dubbed the canary of the climate system due to its sensitivity to radiative forcing. Because of long radiative timescales, a small forcing leads to a large response, with knock-on effects on transport and composition. However, given the rather sparse sampling (in space and/or time) and limited precision of available observations, accurately quantifying UTLS composition variability and trends and identifying their drivers poses a considerable challenge.
CubeMAP – CubeSats for Monitoring of Atmospheric Processes – addresses exactly this challenge. To this end, CubeMAP puts forward a flexible, fast-to-implement, and highly innovative modular measurement system which will make observations at tropical and sub-tropical latitudes. It will observe chemical trace gases such as water vapour, carbon dioxide, methane, ozone, and nitrous oxide and their isotopologues, as well as aerosols – all of which play a key role in radiative forcing and climate change – and will thereby shed light on, and help quantify, diverse climate processes.
In this contribution, we will review open science questions CubeMAP will help to address, including how natural and anthropogenic emissions and transport processes (e.g. wildfires, deep convection, long-range transport) affect UTS composition, how water vapour in the UTS will change in response to climate change, how these changes will feed back on climate, how climate change will alter transport in the stratosphere and impact the ozone layer and its recovery, and how knowledge of the vertical distribution of greenhouse gases will help improve the quantification of their surface emissions.
Weather-related hazards are ubiquitous around the world including: 1) hurricane storm surges, 2) rapid snowmelt and heavy rainfall, 3) severe weather leading to flash floods, and 4) seasonal freeze and thaw of rivers that may lead to ice jams. Each of these hazards affects human settlements and has the potential to impact agricultural productivity. In each setting, end-users in disaster management need access to data processing tools helpful in mapping past and current disasters. Analysis of past events supports risk mitigation by understanding what has already occurred and how to alleviate those impacts in the future. Having capabilities to generate the same products in a response setting means that lessons learned from risk analysis will carry forward to event response. Synthetic aperture radar (SAR) data from ESA’s Sentinel-1 mission are particularly useful for these activities due to their all-weather 24/7 monitoring capabilities, global and regular sampling strategy, and free-and-open availability.
Here we present HydroSAR, a scalable cloud-based SAR data processing service for the rapid generation of hazard information for hydrological disasters such as flooding, storm surge, hail storms, and other severe weather events. The HydroSAR project is led by the University of Alaska Fairbanks in collaboration with the NASA Alaska Satellite Facility (ASF) DAAC and the NASA Marshall and Goddard Space Flight Centers. As part of this project we developed a series of SAR-based value-added products that extract information about surface hydrology (geocoded image time series, change detection maps, flood extent information, flood depth maps) and flood impacts on agriculture (crop extent maps, inundated agriculture information) from Sentinel-1 SAR data.
To enable automatic, low-latency generation and distribution of these products, HydroSAR utilizes a number of innovative software development, cloud computing, and data access technologies. HydroSAR is fully implemented in the Amazon Web Service (AWS) cloud, resulting in a high-performance and scalable production environment capable of continental-scale data production. The service is co-located with ASF’s AWS-based Sentinel-1 data holdings to accelerate data access, reduce data migration, and avoid egress costs. The HydroSAR implementation heavily leverages ASF’s cloud-support services such as OpenSARLab (cloud-based algorithm development) and the ASF Hybrid Pluggable Processing Pipeline (HyP3; cloud acceleration and elastic scaling). HydroSAR is fully open source. All HydroSAR technology is Python based and accessible via GitHub, and HydroSAR products are stored in publicly accessible AWS S3 cloud storage. To ease integration into GIS environments and other web services, HydroSAR products are served out using REST endpoints and ArcGIS Image Services.
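Since the products sit in publicly accessible S3 storage, any standard S3 client can retrieve them. The following is a minimal sketch using boto3 with anonymous credentials; the bucket name and key layout are hypothetical placeholders, not the project’s actual endpoints.

```python
# Minimal sketch: listing publicly accessible products on S3 anonymously.
# Bucket name and prefix below are hypothetical placeholders.
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous client, since the products are in publicly accessible storage
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

bucket = "hydrosar-products-example"   # hypothetical bucket name
prefix = "flood-extent/2021/"          # hypothetical key layout

resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Download one product locally:
# s3.download_file(bucket, "flood-extent/2021/example.tif", "example.tif")
```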
In our presentation we will introduce the SAR-based products that were developed for this effort. We will describe our approach for product calibration/validation and provide metrics for product performance. We will outline the cloud-based production pipeline that was built to automatically generate our data products across large spatial scales. We will demonstrate the benefit of ArcGIS image services for data sharing and data integration. Finally, we will showcase the integration of HydroSAR products into disaster response activities for a number of recent disasters including the 2021 U.S. hurricane season, the 2021 Alaska spring breakup flooding, and the 2021 flood season in Eastern India, Bangladesh, and Nepal.
The latest developments in big cloud computing environments, supporting ever increasing volumes of Earth Observation (EO) data, present exciting new possibilities in the field of EO. Nevertheless, the paradigm shift of deploying algorithms close to the data sources also comes with new challenges in terms of data handling and analysis methods. The Euro Data Cube (EDC) was developed in this regard, to facilitate large-scale computing of EO data by providing a platform for flexible processing, from the local to the continental scale. Not only does the EDC offer a workspace with customisable levels of computational resources, it also provides on-demand cloud-optimised fast access to tens of petabytes of satellite archives from open and commercial sources with proven scalability. The platform also offers a growing ecosystem of inter-compatible modules supporting users in all steps of data analysis, processing and visualisation.
To showcase the potential of the EDC, we will present how the platform supported the development of algorithms for a spatio-temporal drought impact monitoring system in Europe. Using the new Copernicus Land Monitoring Service (CLMS) High Resolution Vegetation Phenology and Productivity data (HR-VPP), we produced indices on vegetation dynamics as a response to soil moisture deficit. The exercise was run at 10-metre resolution for the period 2017-2020 over Europe. Owing to the large volume of the vegetation and soil moisture anomaly time series (0.6 PB), the datasets could not be processed using local computing resources. Simply adding more computing resources, or even using a High Performance Computing (HPC) facility, would not solve the problem either, given the large volumes of regularly-updated data that would need to be transferred between clouds. The use of the EDC deployed on CreoDIAS, which also hosts the HR-VPP data as part of the WEkEO infrastructure, made it possible to produce efficient, transparent, repeatable and ad-hoc updates of drought impact indicators, which are an essential input to the European Environment Agency in support of European policy-making.
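To give a flavour of the kind of computation involved, the following is a minimal xarray sketch of a standardized vegetation anomaly, the building block of such drought impact indices. The dataset path and variable name are assumptions; the actual EDC workflow runs this on chunked, cloud-hosted data.

```python
# Sketch of a standardized anomaly computation behind a drought impact index.
# Dataset path and variable name are illustrative only.
import xarray as xr

ds = xr.open_zarr("s3://example-bucket/hrvpp-cube.zarr")  # hypothetical cube
vpp = ds["vegetation_productivity"]                       # dims: (time, y, x)

# Climatological mean and standard deviation per calendar month, 2017-2020
clim_mean = vpp.groupby("time.month").mean("time")
clim_std = vpp.groupby("time.month").std("time")

# Standardized anomaly: negative values flag below-normal vegetation activity
anom = vpp.groupby("time.month") - clim_mean
z_score = anom.groupby("time.month") / clim_std
```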
The use-case presented is just one example of how the EDC can be used to monitor continental phenomena at high spatial and temporal resolutions. Stemming from this example, we will present the EDC technology stack and the services at the disposal of users to develop and deploy large-scale algorithms, which can in turn be triggered as on-demand executions in a Software-as-a-Service style. We will also discuss the data scientist’s workflow and the adjustments that were needed to transfer a typical remote sensing exercise into a scalable process.
Many global and continental scale mapping projects today have to solve the same problems: finding an infrastructure with the required datasets, setting up a scalable processing system to produce results, configuring preprocessing workflows to generate Analysis Ready Data (ARD), and then scaling from test runs to the final continental or global map. After that, appropriate metadata needs to be generated, followed by a dissemination phase where products have to be made ready for viewing. openEO Platform solves most of these problems, but how does that work out in a real-world scenario?
To demonstrate the efficiency and usability of openEO Platform (https://openeo.cloud) we have generated a European crop type map at 10 m resolution. The map is based on both Sentinel-1 and Sentinel-2 data, which makes it a suitable blueprint for many other mapping projects. The feature engineering capabilities of the platform transform this data into a set of training features, which are then sampled across Europe to build a training and validation dataset. The built-in classification processes are used to train a model, which subsequently generates the final map, delivered as a set of cloud-optimized GeoTIFFs with SpatioTemporal Asset Catalog (STAC) metadata. The final product will be validated against a harmonized LPIS dataset, depending on availability at country level.
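As an illustration of how such a pipeline starts on the platform, the sketch below uses the openEO Python client to load Sentinel-2 data and derive simple temporal features; the collection id, band names and extents are assumptions that vary per backend.

```python
# Minimal openEO sketch: load Sentinel-2 data, build monthly composites as
# example classifier features, and download the result. Collection and band
# names are assumptions here and may differ per backend.
import openeo

connection = openeo.connect("openeo.cloud")
connection.authenticate_oidc()

cube = connection.load_collection(
    "SENTINEL2_L2A",
    spatial_extent={"west": 5.0, "south": 51.0, "east": 5.5, "north": 51.3},
    temporal_extent=["2021-03-01", "2021-10-31"],
    bands=["B04", "B08"],
)

# Example feature engineering step: monthly mean composites
monthly = cube.aggregate_temporal_period(period="month", reducer="mean")
monthly.download("features.nc")
```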
In this presentation we will give an overview of the full product R&D and production lifecycle. We report on efficiency gains and potential remaining bottlenecks for users. This gives a good overview of platform capabilities, but also of the overall maturity with respect to being production-ready for continental-scale mapping efforts. For the large-scale production of the map, processing capacity available at VITO, CreoDIAS and EODC will be used, containing large local data archives, extended with data on Sentinel Hub accessed via Euro Data Cube. This federation of data and processing capacity is made transparent by openEO Platform. A built-in large-scale data production component is responsible for distributing the workload across the available infrastructure. This aims to show that openEO Platform has reached a maturity level that allows users to engineer an entire ML pipeline from data to final classification at large scale.
The WorldCereal project aims at developing an efficient, agile and robust Earth Observation based system for timely global crop monitoring at field scale. The resulting platform is a self-contained, cloud agnostic, open-source system.
Global agricultural monitoring poses an increasingly complex computational problem. High-performance computing has therefore become a critical resource, and hence the need for a processing cluster arose. The project delivers an open-source EWoC system which automatically ingests and processes Sentinel-1, Sentinel-2 and Landsat 8 time series in a seamless way to develop an agro-ecological zoning system, and disseminates high added-value products to users through the visualization portal in the GeoTIFF COG format.
The system can be installed on a cluster driven by Kubernetes. The cluster itself may be hosted by a provider of choice (CreoDIAS, AWS, etc.).
A set of open-source tools has been used to create the processing framework. Python is mostly used to connect the different processors and orchestrate their execution. To achieve the results, the cluster first has to be scaled up to the desired number of nodes, i.e. interconnected computers running Linux and working as a single system. Whenever processing is needed (at the end of a major growing season or on a user’s request), a workflow started by the Argo tool schedules pods over the previously spawned nodes. These pods are collections of self-contained applications or systems (shipped as Docker containers) that produce high added-value EO products such as crop masks, crop type maps and irrigation maps. The pre-processing chains for Sentinel-1, Sentinel-2 and Landsat 8 are first run in parallel to provide enough data for the classification processor, which in turn produces the WorldCereal products.
At the center of the system is a PostgreSQL database which is used to organize the pending tasks and the upcoming productions. This precise tracking provides resilience in case of a process failure: a re-processing is then automatically scheduled by Argo.
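A hedged sketch of this task-tracking pattern is shown below: a PostgreSQL table records each task, and failed tasks are re-queued for the next Argo run to pick up. Table and column names are illustrative, not the EWoC schema.

```python
# Sketch of database-driven task tracking with automatic re-queuing.
# DSN, table and column names are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect("dbname=ewoc_example user=worker")  # hypothetical DSN
with conn, conn.cursor() as cur:
    # Re-queue tasks whose pods died, so the next workflow run retries them
    cur.execute(
        """
        UPDATE tasks
        SET status = 'PENDING', retries = retries + 1
        WHERE status = 'FAILED' AND retries < 3
        """
    )
    print(f"re-queued {cur.rowcount} failed tasks")
```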
The information regarding the processing of the results, as well as the S3 bucket paths to these results, is stored in the database, which grows over time. The progress of a running workflow may be monitored on a map showing an overview of the tiles covering the area being processed.
The goal of the cluster itself is to simplify the parallel pre-processing and the classification processing using multiple data sources as input. In case of a node failure (for instance spot-instance termination with the AWS provider, or a hardware failure with the CreoDIAS provider), the system recovers whenever a new node becomes available.
The current demonstration instance of the system, hosted on a DIAS, benefits from the extensive amount of Earth observation data already available there. This synergy between the cloud provider and the associated data offers the perfect operational context for our cloud-based system. For additional data resources, the EWoC system seamlessly interfaces with external providers through its own dedicated data retrieval abstraction layer. This last component, which overcomes some limitations of the DIAS data offer, could, as an open-source piece of software, benefit other projects.
The EWoC system is a demonstration of an operational, open-source, cloud-based processing platform from which other future projects can benefit through its many reusable components.
Big data always presents new challenges, which can lead to innovation and potential improvements. At EOX we recently hit a breaking point and have reworked our EOxCloudless (https://cloudless.eox.at) data processing infrastructure almost from scratch. For those unfamiliar with the label, this is the data used to generate our satellite map layer known as “Sentinel-2 cloudless” (https://s2maps.eu).
# Splitting the responsibilities - Kubernetes
Autoscaling is a tough problem, and we know that managing large compute clusters is a responsibility best delegated to our engineering team. Using Kubernetes (https://kubernetes.io/de/docs/concepts/overview/what-is-kubernetes/) as the core abstraction to interface between our teams was an undisputed decision, based on our experience with other operational deployments of EOxHub such as “Euro Data Cube” (https://eurodatacube.com/), “Earth Observing Dashboard” (https://eodashboard.org/), and “Rapid Action on coronavirus and EO” (https://race.esa.int/). Our EOxHub technology, able to mediate on top of Kubernetes with platform resources on different clouds, enables us both to handle the workloads and to automatically scale the infrastructure based on processing needs.
# Splitting the workload - dask
With only a handful of options for handling concurrent/parallel workflows in Python, we have opted for the by-now well-established “dask” (https://dask.org/). For local processing we have adapted “dask Futures”, dask’s extension of native Python’s “concurrent.futures” (https://docs.python.org/3/library/concurrent.futures.html), to our use case. With the help of “dask.distributed” (https://distributed.dask.org/en/latest/), this runs a huge number of tasks concurrently, both locally and on a cluster.
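To make the futures pattern concrete, here is a minimal dask.distributed sketch; a local cluster is used for illustration, and the same code runs against a remote one.

```python
# Minimal sketch of the dask futures pattern: each task is a plain Python
# function; results are gathered as the futures complete.
from dask.distributed import Client, as_completed

def process_tile(tile_id):
    # placeholder for reading, correcting and mosaicking one tile
    return tile_id, "done"

client = Client()  # spins up a local cluster; a remote cluster works the same

futures = [client.submit(process_tile, t) for t in range(100)]
for future in as_completed(futures):
    tile_id, status = future.result()
    print(tile_id, status)
```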
# Geographical splitting/tiling of the workload - tilematrix
What is a “task” in this scenario? In “dask futures” it can be any arbitrary Python function which runs and returns a result; the future is collected after the completion of each single task.
For splitting geospatial data we have written our own tiling package for regular matrices called “tilematrix” (https://github.com/ungarj/tilematrix). Apart from the default EPSG:4326 and EPSG:3857 grids, it allows defining custom grids in any common coordinate reference system. These grids provide a tile structure following a classical tiling scheme {zoom}/{row}/{column} ({z}/{y}/{x}.{extension}). This is similar to the “OGC Two Dimensional Tile Matrix Set” (https://docs.opengeospatial.org/is/17-083r2/17-083r2.html).
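To illustrate the {zoom}/{row}/{column} scheme itself (this is plain Python for clarity, not the tilematrix API), the sketch below indexes a global geodetic grid that, as in common geodetic tiling schemes, is twice as wide as it is high at every zoom level.

```python
# Plain-Python illustration of a classical {zoom}/{row}/{col} tiling scheme
# on a global EPSG:4326 grid (2x1 tiles at zoom 0).
def tile_for(lon, lat, zoom):
    cols = 2 ** (zoom + 1)          # geodetic grid: twice as wide as high
    rows = 2 ** zoom
    col = int((lon + 180.0) / 360.0 * cols)
    row = int((90.0 - lat) / 180.0 * rows)
    return zoom, row, col

print(tile_for(16.37, 48.21, 5))    # e.g. Vienna at zoom 5 -> (5, 7, 34)
```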
# Mapchete meets dask
How to run a mosaicking script handling thousands of Sentinel-2 scenes distributed over the globe?
We had been doing exactly this for several years with different tools even before this update, so for us it only took adding the dask know-how into the mix. Yet another Python package originating from EOX, called “Mapchete” (https://github.com/ungarj/mapchete), was extended with the “dask.distributed” package, opening it to the dask environment and giving it a direct connection to the various cluster connection options offered by the dask community.
# Sentinel-2 data access optimization
Apart from extended concurrency options, getting hold of a huge amount of Sentinel-2 data can be somewhat costly, difficult, and time-consuming. To shed some light on this, let’s look over some of the options that people interested in similar processing might consider.
Sentinel-2 data access:
1. https://scihub.copernicus.eu/
2. https://www.copernicus.eu/en/access-data/dias
Sentinel-2 data at Amazon s3 (AWS s3):
1. https://registry.opendata.aws/sentinel-2/
2. https://registry.opendata.aws/sentinel-2-l2a-cogs/
Let’s start with some of the basics. Apart from the regular access to Sentinel-2 data, AWS offers under its “opendata” policy the datasets we need to produce our annual global mosaic, which is why we opted for AWS as the cloud and data provider. From AWS (1.) we can get all the metadata we need to merge the granule products into datastrips (acquisition strips), as well as the angles and geometries for the angular (BRDF) correction that compensates for the imbalanced west-east values of the Sentinel-2 L2A sen2cor products. The disadvantage of AWS option one (1.) is that most of the data requests are currently under “requester-pays” settings, which for a couple of billion requests induces a non-negligible cost.
Thankfully, AWS option two (2.) currently has no such restriction, and it has one more advantage: the Sentinel-2 data there are stored as Cloud Optimized GeoTIFFs (COGs), which improves GDAL reading speed significantly, in addition to requests to the AWS s3 storage being free. Both of the AWS s3 endpoints provide “SpatioTemporal Asset Catalog” (STAC) accessibility, which makes search and browse extremely convenient once the STAC structure and tools have been understood (https://stacspec.org/).
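For illustration, discovering those COGs via STAC might look like the sketch below using pystac-client; the Earth Search endpoint, collection id and client methods reflect common usage at the time of writing and may change between versions.

```python
# Sketch of STAC-based discovery of the Sentinel-2 COG archive (option 2.).
# Endpoint, collection id and method names may vary across versions.
from pystac_client import Client

catalog = Client.open("https://earth-search.aws.element84.com/v0")
search = catalog.search(
    collections=["sentinel-s2-l2a-cogs"],
    bbox=[10.0, 45.0, 11.0, 46.0],
    datetime="2021-06-01/2021-08-31",
    query={"eo:cloud_cover": {"lt": 20}},
)
for item in search.get_items():
    print(item.id, item.assets["B04"].href)  # direct COG URLs on s3
```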
The data is read and BRDF-corrected by custom mapchete extensions, which take care of all the metadata transformation, reading, reprojection, and corrections. The results are then submitted to the mosaicking process, which applies the masks and finally evaluates the time series based on the provided settings to deliver a homogeneous, cloudless mosaic output.
# Scale up with Dask-Kubernetes
Finally it comes down to scaling up the processing from the local environment to something much bigger. Each task needs to be processed not on a single server or virtual machine (VM), but on an arbitrary dask-worker within the Kubernetes cluster.
The advantage of this is that the dask cluster supports adaptive dask-worker scaling, which directly controls the scaling of the Kubernetes resources as well: cloud resources are requested and launched only when needed and shut down when idle.
“dask-kubernetes” (https://kubernetes.dask.org/en/latest/) is paired with a so-called “dask-gateway” (https://gateway.dask.org/) to handle hardware specifications and, optionally, to define different tiers of users so that not everyone can consume all the resources.
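A hedged sketch of the adaptive scaling setup follows; the pod template file is a placeholder, and the exact constructor varies across dask-kubernetes versions.

```python
# Sketch of adaptive worker scaling with dask-kubernetes. The pod template
# is a placeholder; constructor details differ between library versions.
from dask.distributed import Client
from dask_kubernetes import KubeCluster

cluster = KubeCluster.from_yaml("worker-spec.yaml")  # hypothetical pod template
cluster.adapt(minimum=0, maximum=100)                # workers exist only under load

client = Client(cluster)
# submit work as usual; Kubernetes resources follow the dask-worker demand
```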
For the overall management and configuration of Kubernetes, a Flux (https://fluxcd.io/) setup provides the top-level control of the Kubernetes environment.
# Use the output data
Lastly, let's see how we can use the generated data. The output data organized as GeoTIFF Tile Directories can be read by the GDAL STACTA (Spatio Temporal Asset Catalog - Tiles Assets) driver (https://gdal.org/drivers/raster/stacta.html), which allows users to get, convert, or subset the data with GDAL directly from object storage.
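A small sketch of that access pattern with GDAL’s Python bindings follows; the URL is a placeholder.

```python
# Sketch: open a STACTA-described tile directory directly from object
# storage and subset it. The URL below is a hypothetical placeholder.
from osgeo import gdal

url = "/vsicurl/https://example.com/eoxcloudless/stacta.json"  # hypothetical
ds = gdal.Open(url)
print(ds.RasterXSize, ds.RasterYSize, ds.RasterCount)

# Convert/subset straight from object storage (projWin: ulx, uly, lrx, lry)
gdal.Translate("subset.tif", ds, projWin=[16.0, 48.5, 16.5, 48.0])
```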
Furthermore, with some of the recent updates to OpenLayers (https://openlayers.org/), GeoTIFFs organized as STACTA can be loaded directly in the browser, as shown in the example at https://openlayers.org/en/latest/examples/cog-pyramid.html. This provides the option to support all sorts of interactive rendering and analysis of the original 16-bit data values directly in the browser.
# Conclusion
By the time of the Living Planet Symposium (LPS 2022) in May 2022 the next update of the EOxCloudless satellite map for the year 2021 will be available at https://s2maps.eu/. With that at hand we will have the results of an actual global benchmark for this updated big data processing workflow to present.
The presentation of this work will include a walkthrough of the workflow as described here and a live demonstration, partly as a tutorial.
We present here the recent progress of the DACCS (Data Analytics for Canadian Climate Services) project [1], funded by the Canada Foundation for Innovation (CFI) and various provincial partners in Canada, in response to a national cyberinfrastructure challenge. The project’s development phase is set to finish in 2023, while maintenance funding is planned over ten years. DACCS leverages previous development efforts of PAVICS [7], a virtual laboratory facilitating the analysis of climate data without the need to download it. A key aspect of DACCS is to develop platforms and applications that combine both climate science and Earth observation, in order to stimulate the creation of architectures and cyberinfrastructures shared by both.
In climate sciences, it is now common to manipulate very large, high-dimensional volumes of data derived from climate models, such as the CMIP5 and soon CMIP6 datasets. In Earth Observation (EO), the concepts of time series and ARD data also lead us to manipulate multidimensional data cubes. Consequently, the two domains share a common set of requirements, especially in terms of visualization services and deployment of applications close to the data. The overall philosophy of the platform is to provide a robust backend with interoperable services and the ability to deploy processing applications close to the data (A2D).
The proposed platform architecture adopts some of the characteristics of the EO Exploitation Platform Common Architecture (EOEPCA) [12], including user access, data discovery, user workspaces, process execution, data ingestion and interactive development. This architecture describes A2D use cases where a user can select, deploy and run applications on distant infrastructures, usually where the data resides. An Execution Management Service (EMS) transmits such a request to an Application Deployment and Execution Service (ADES) [8].
Python notebooks are offered as the main user interaction tool. This way, advanced users can keep the agility of Python objects, all while offering the option to call demanding A2D tasks via widgets. Beyond the technical contribution, the platform offers an opportunity to explore synergies between the two scientific domains, for example to create hybrid machine learning models [6].
The platform ingests Sentinel-1 & 2 and RADARSAT Constellation Mission (RCM) images in order to create multi-temporal and multi-sensor datacubes. The EO ingestion is performed by Sentinel Application Platform (SNAP) [13] workflows packaged as applications inside Docker containers [11]. The goal is to give researchers the ability to create data cubes on the fly via Python scripts, with an emphasis on replicability and traceability. Resulting data cubes will be stored in NetCDF format along with the appropriate metadata to document data provenance and guarantee replicability of the processing. Workflows and derived products will be indexed in a STAC catalog. The platform aims to facilitate the deployment of typical processing tasks, including standard ingest processing (e.g. calibration, mosaicking, etc.) as well as advanced Machine Learning tasks.
Aside from EO use cases, the DACCS project will advance PAVICS climate analytics capabilities. The platform readily allows access to several data collections, ranging from observations to climate projections and reanalyses. PAVICS relies on the Birdhouse Framework [9], a key component of the Copernicus Climate Data Store [10]. PAVICS is the backbone of the analytical capabilities of ClimateData.ca, a portal created to support and enable Canada’s climate change adaptation planning, and improve access to relevant climate data.
Climatedata.ca provides access to a number of climate variables and indices, either computed from CMIP5 and CMIP6 projections downscaled to Canada [3], or from historical gridded datasets [4]. Variables can be visualized on maps or on graphs, either on the basis of their grid cell element or by spatial aggregates, such as administrative regions, watersheds or health regions. Climatedata.ca also provides up-to-date meteorological station data, standardized precipitation evapotranspiration index (SPEI) [5], as well as sea-level rise projections.
The portal’s Analyze page [14] allows users to select their grid cells or regions, and to set the thresholds, climate models, RCPs and percentiles they would like to use for analysis. User queries are then sent to the PAVICS node through OGC API - Processes. Computations are realized by xclim [2], a library of functions to compute climate indices from observations or model simulations. The library is built on xarray and benefits from the parallelization handling provided by dask.
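As a flavour of the xclim API, the following minimal sketch computes one such index from a daily temperature series; the input file is illustrative.

```python
# Minimal xclim sketch: annual count of days with daily maximum temperature
# above a threshold. The input file name is a hypothetical placeholder.
import xarray as xr
import xclim

ds = xr.open_dataset("tasmax_daily_example.nc")   # hypothetical input
tasmax = ds["tasmax"]                             # daily maximum temperature

# Annual count of days with tasmax > 30 degC; dask-backed inputs parallelize
summer_days = xclim.atmos.tx_days_above(tasmax, thresh="30.0 degC", freq="YS")
```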
References
[1] Landry, T. (2018). "Bridging Climate and Earth Observation in AI-Enabled Scientific Workflows on Next Generation Federated Cyberinfrastructures", AI4EO session, the ESA Earth Observation Φ-week, November 2018, ESA-ESRIN, Frascati, Italy.
[2] Logan, T. et al. (2021): xclim - a library to compute climate indices from observations or model simulations. Online. https://xclim.readthedocs.io/en/stable/
[3] Cannon, A.J., S.R. Sobie, and T.Q. Murdock (2015): “Bias Correction of GCM Precipitation by Quantile Mapping: How Well Do Methods Preserve Changes in Quantiles and Extremes?”, Journal of Climate, 28(17), 6938-6959, doi:10.1175/JCLI-D-14-00754.1.
[4] McKenney, Daniel W., et al. (2011): "Customized spatial climate models for North America." Bulletin of the American Meteorological Society, 92.12: 1611-1622.
[5] Tam BY, Szeto K, Bonsal B, Flato G, Cannon AJ, Rong R (2018): “CMIP5 drought projections in Canada based on the Standardised Precipitation Evapotranspiration Index”, Canadian Water Resources Journal 44: 90-107.
[6] Requena-Mesa, Christian, et al. (2020): "EarthNet2021: A novel large-scale dataset and challenge for forecasting localized climate impacts." arXiv preprint arXiv:2012.06246, https://www.climatechange.ai/papers/neurips2020/48
[7] The PAVICS platform. Online. https://pavics.ouranos.ca/
[8] Landry et al. (2020): “OGC Earth Observation Applications Pilot: CRIM Engineering Report”, OGC 20-045, http://www.opengis.net/doc/PER/EOAppsPilot-CRIM
[9] C. Ehbrecht, T. Landry, N. Hempelmann, D. Huard, and S. Kindermann (2018): “Projects Based in the Web Processing Service Framework Birdhouse”, Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-4/W8, 43–47
[10] Kershaw, Philip, et al. (2019): "Delivering resilient access to global climate projections data for the Copernicus Climate Data Store using a distributed data infrastructure and hybrid cloud model." Geophysical Research Abstracts. Vol. 21.
[11] Gonçalves et al. (2021): “OGC Best Practice for Earth Observation Application Package”, OGC 20-089, Candidate Technical Committee Vote Draft, Unpublished
[12] Beavis P., Conway, R. (2020), A Common Architecture Supporting Interoperability in EO Exploitation, ESA EO Phi-Week 2020, Oct. 1st 2020. https://eoepca.org/articles/esa-phi-week-2020/
[13] Zuhlke et al. (2015), “SNAP (Sentinel Application Platform) and the ESA Sentinel 3 Toolbox”, Sentinel-3 for Science Workshop, Proceedings of a workshop held 2-5 June, 2015 in Venice, Italy.
[14] ClimateData.ca Analyze page: https://climatedata.ca/analyze/
Harmony was one of the three missions selected as ESA Earth Explorer 10 mission candidates. After a down-selection process at the end of the Phase-0 studies, Harmony proceeded as the only remaining candidate to Phase-A, which will close in summer 2022. This presentation gives an overview of the mission, reflecting its current status.
The Earth is a highly dynamic system where transport and exchanges of energy and matter are regulated by a multitude of processes. The non-linear nature of the governing physics results in couplings between processes happening at a wide range of spatial and temporal scales, with cascades of energy flowing from the larger to the smaller scales and vice-versa.
The Earth System cannot be understood or modelled without adequately accounting for small-scale processes. Indeed, the parameterisation of the unresolved, sub-grid physical processes in global or regional models remains one of the main sources of uncertainty in climate projections, in particular with respect to air-sea coupling, the cryosphere and clouds.
Harmony is dedicated to the observation and quantification of small-scale motion and deformation fields, primarily at the air-sea interface (winds, waves, and surface currents), of the solid Earth (tectonic strain), and in the cryosphere (glacier flows and surface height changes).
The retrieval of kilometre- and sub-kilometre-scale motion vectors requires concurrent observations of their components, which will be achieved by flying two relatively lightweight satellites as companions to Sentinel-1D, each with a receive-only radar as its main payload. The resulting line-of-sight diversity will be exploited in combination with repeat-pass SAR interferometry to estimate tiny deformation rates in the solid Earth, and for land-ice processes. It will also be used in combination with Doppler estimation techniques for the retrieval of instantaneous ocean and sea-ice surface velocities. Over oceans, geometry-diverse measurements of the radar backscatter will further allow the retrieval of surface (wind) stress and wave spectra. The Harmony spacecraft will also carry a multi-beam thermal-infrared (TIR) payload, which in the presence of clouds will allow the retrieval of height-resolved motion vectors. The combination of surface currents and surface wind stress, along with TIR-derived cloud-top height and cloud-top motion vectors, will provide an unprecedented view of the marine atmospheric boundary layer (MABL). In the absence of clouds, the TIR payload will provide simultaneous observations of sea surface thermal differences, providing a unique window onto upper-ocean processes and air-sea interactions at small ocean scales.
The formation-flying architecture of Harmony enables the unique capability to reconfigure its flight formation so that, instead of being optimised for the measurement of motion vectors, it is optimised for the measurement of time series of surface topography. This will, among other outcomes, result in a globally consistent and highly resolved view of multi-annual glacier volume changes between well-defined epochs, needed to better quantify the climatic response of glaciers. At the same time, Harmony will allow studying from space the seasonal and sub-seasonal processes that play a role in such responses, for instance by measuring variations in lateral ice flow and associated elevation changes simultaneously over large areas for the first time.
In order to study upper-ocean processes and air-sea interactions, Harmony will provide kilometre-scale surface roughness, root mean squared slopes, and surface kinematics from different viewing perspectives, reflecting the imprint of MABL eddies on the ocean surface. This provides information about the surface wind vector, surface current vectors and swell, and, importantly, the thermal disequilibrium between air and ocean.
This will lead to a more precise understanding of small-scale (submesoscale) impacts on air-sea fluxes, especially CO$_2$ fluxes, momentum, ocean heat uptake and overall energy pathways, and will reduce uncertainties in the lateral dispersion of pollutants and tracers, vertical transport and nutrient pumping.
The scientific goal of Harmony for the cryosphere is to bridge existing observational gaps in order to improve our understanding of the physical processes causing the widespread shrinkage of the cryosphere. These conceptually new observations will push back the existing limits by refining the reconstructions of past and ongoing glacier changes, by improving the representation of the driving mechanisms in regional and global models of ice flow dynamics and mass balance, by describing unresolved complex processes to allow calibration and validation of sea-ice models with more realistic rheology, and by improving our understanding of permafrost dynamics.
Harmony aims at providing, for the first time, worldwide integrated measurements of elevation changes and ice flow on glaciers and ice-sheet coastal areas, as well as localised elevation measurements of icebergs and ice shelves.
A particular strength of Harmony’s interferometric capabilities is that the mission can measure large topographic changes and lateral displacements (on the scale of metres and tens of metres) through repeat XTI-mode elevation models and SAR offset tracking, while at the same time measuring small changes (cm-scale) thanks to the diversity of SAR lines-of-sight.
Harmony aims to provide an integrated view of the dynamic processes that shape the Earth’s surface. For the Solid Earth, the scientific goals are to improve our understanding of tectonic and magmatic processes by bridging existing observational gaps with regard to strain rates, which are currently hindered by the lack of sensitivity to the North-South deformation component, and with regard to rapid elevation changes.
Coupled atmosphere and ocean boundary layer dynamics represent a crucial component of the Earth system. Air-sea interactions span several time scales, from shorter-scale weather pattern evolution and extreme meteorological events to inter-annual/decadal climate variability and global change. Air-sea fluxes depend on many correlated properties. For instance, the exchange of momentum is driven by the shear resulting from the atmospheric wind, but also depends on many other physical quantities, e.g. the ocean velocity, the sea state and the density stratification in the lower atmosphere and in the upper ocean. Similarly, heat fluxes depend on surface stress, roughness, temperature, and vertical transport in the turbulent atmospheric and oceanic boundary layers. These physical characteristics and processes are often correlated through the action of complex interactions. An increased coupling between surface winds and the stronger winds aloft emerges when warm sea surface temperature destabilizes the air column, influencing entrainment at the top of the boundary layer, and surface ocean currents affect roughness, as demonstrated by the covariance of mesoscale features and surface wave energy. Furthermore, secondary flows in the ocean and atmosphere boundary layers impact vertical transport and mixing, even producing non-gradient fluxes.
Despite the importance of small-scale processes in setting the conditions at the interface between air and water, the limited capabilities of present satellite and in-situ instruments do not allow them to be properly characterized. Indeed, most of our understanding of the dynamics at these scales comes from high-resolution numerical modelling and theoretical studies, and only sparse observational analyses have effectively been carried out. Ocean fronts and eddies identified with aerial guidance, further seeded with drifters, have occasionally provided some reference ranges and target requirements for future scientific missions. Reported analyses indicate divergence and vorticity values in the upper ocean that can, at times and very locally, exceed 5 and even 30 times the Coriolis frequency, representing very extreme departures from balanced dynamics and suggesting intense vertical motion induced by sub-mesoscale features. Strong sub-kilometre-scale variations of winds, associated with dynamical features such as atmospheric boundary layer rolls and convective plumes, have been reported, indicating that transport in the MABL depends on processes that are not represented in climate models.
To properly characterize ocean and atmosphere boundary layer dynamics, it is thus crucial to identify small-scale processes at the air-sea interface through the imprint they leave. Harmony will allow the retrieval of relative surface velocities, winds, and waves at kilometre to sub-kilometre resolution in target areas of high dynamical interest, providing information on divergence and vorticity both in the ocean and in the atmosphere. Simultaneous measurements of sea surface temperature under clear-sky conditions, and of cloud-top heights and motion otherwise, will make it possible either to link dynamical patterns to thermodynamic features or to estimate the thickness of the atmospheric layer directly affected by air-sea processes.
The identification of large variations in winds at sub-kilometre scale during intense events, such as tropical cyclones, will also make it possible to identify local anomalies in air-sea fluxes and to link them to convective updrafts through the effect they have on relative vorticity near the surface, providing crucial information for the prediction of rapid intensification events in tropical cyclone development.
Even if the relatively long revisit clearly precludes following the evolution of individual features outside the high-latitude bands, these composite data will still represent fundamental new information that can be used to validate existing and future high-resolution numerical models, and eventually to downscale observations from lower-resolution products such as those obtainable through satellite altimetry and scatterometry. This can be considered a realistic target given the growing efforts towards the application and development of artificial intelligence tools combining techniques originally devised for computer vision tasks, such as super-resolution.
Spatially-detailed maps of three-dimensional surface displacements and topography change, and their temporal evolution, are essential for understanding and modelling geophysical processes that trigger earthquakes, landslides and volcanic events, and for the assessment of hazards arising from these phenomena. Current SAR missions are sensitive to vertical and east-west motions, but are extremely limited in their sensitivity to north-south motion. In its bistatic configuration, the Harmony mission will deliver 3D vectors of surface motion by means of differential repeat-pass InSAR methods. In areas where displacement is predominantly north-south, the ability to systematically measure the third dimension of displacement will reveal motions that have been invisible up until now, and in other areas will enable the resolution of ambiguities in the underlying physical processes that lead to earthquakes, landslides and volcanism. Resolving strain rates in tectonic regions down to 10 nanostrain/year will be a key target for the mission, which will encompass the majority of continental regions that lead to deadly earthquakes. In its cross-track configuration, as well as still providing 3D deformation maps, the Harmony mission will deliver time series of topographic change, providing high-resolution views of the active processes that reshape the Earth's surface. A key focus will be constraining changes that occur during volcanic eruptions, such as lava flows, dome growth and pyroclastic density flow deposits.
The unique bistatic configuration of the Harmony mission will allow the retrieval of 3D deformation maps with unprecedented accuracy. Current performance and end-to-end simulation results indicate that a deformation measurement performance close to 1 mm/yr at 100 km can be achieved for all three dimensions. Similarly, the single-pass digital elevation models acquired in the cross-track configuration will have a point-to-point vertical accuracy of about 1-2 m at a resolution of 30 m x 30 m with the interferometric baselines currently being considered.
Besides presenting the main goals and motivations of the Harmony mission for solid Earth, this contribution will also show dedicated simulation results obtained with the performance simulator currently being developed in the frame of the science studies of the Harmony mission.
Land ice applications are among the primary scientific goals of the ESA Earth Explorer 10 candidate mission Harmony. Specifically, the mission aims to
(i) provide a consistent, temporally and spatially highly resolved global glacier mass balance, filling major spatial gaps in the current observation of mountain glaciers and outlet glaciers of the ice sheets;
(ii) give new insights into the physical processes associated with the coupling between glacier mass change and ice dynamics and through that substantially improve understanding and prediction of rapid or even abrupt glacier changes, and the balance between vertical ice flow and mass accumulation/ablation;
(iii) provide crucial large-area information on the spatial distribution, extent and magnitude of subsidence and erosion in permafrost areas in order to estimate permafrost degradation and its local and global impact.
To fulfil these goals, the Harmony mission design consists of two passive (receive-only) SAR satellites that fly in constellation with Sentinel-1D. The two Harmony spacecraft will perform measurements of the signals transmitted by Sentinel-1D in either single pass cross-track interferometry (XTI) or stereo formation. From two time series (each one year long, data points every 12 days) of single pass XTI acquisitions, the first series at the beginning (year 1) and the second at the end of the mission (year 4 or 5), world-wide glacier elevation changes will be computed, responding to goal (i). The same approach will be used to detect major erosional processes in ice-rich low-land permafrost, such as thaw slumps, over a period of 4-5 years (goal iii). At the higher temporal resolution of 12-days for the individual series, the interferometric XTI digital elevation models (DEMs) will serve to extract seasonal and sub-seasonal glacier elevation changes. Remarkably, these fast repeat DEMs will be simultaneous with lateral ice displacements derived from repeat Sentinel-1D data using offset tracking, enabling a major step forward for linking variations in ice flow with vertical surface elevation changes (goal ii) and in addition improving the precision of the displacement measurements through the concurrent elevation maps and multi-view redundancy. When interferometric phase coherence persists across the 12-day repeat interval of subsequent acquisitions with XTI but in particular the Harmony stereo formation, this will lead to interferometric repeat pass measurement of three-dimensional Earth surface motion and deformation. Being able to separate out the vertical ice flow component on glaciers will allow comparison with accumulation or ablation to detect mass imbalance (goal ii). Over permafrost areas, Harmony’s interferometric 3D surface motion will improve measurements of seasonal and multi-year frost heave and thaw subsidence, and help detect additional lateral components of these two processes, which so far are assumed to act mostly vertically.
The upper ocean is of intense interest. Almost all human interactions with the ocean occur in its uppermost hundred metres. Most ocean life is concentrated in the upper ocean, and is currently being substantially altered by modified exchanges between the atmosphere and the ocean. Air-sea interaction processes in the upper ocean are also essential factors in determining future weather and climate.
This makes the upper ocean an important arena for science that transcends the boundaries of physics, chemistry, biology, meteorology and climatology. Digital twins of the Earth system (DTEs) shall be key tools providing integrated capabilities to improve local and global predictions. DTEs will use all sorts of models, data-driven and/or model-driven, that provide sufficiently accurate representations of the systems being twinned/replicated.
Ideally, if the accuracy of numerical simulations were perfect, DTEs would use model-driven simulations derived directly from physics principles. The barrier of computational costs to reach this high accuracy shall certainly not preclude these physics-based approaches. But air-sea interactions involve too many processes interacting over a wide range of time and spatial scales. No computer can encompass the interacting dynamics of all hydrodynamic scales involved in ocean-atmosphere interactions – ranging from the scale of the Sun’s heating (∼10,000 km) down to the turbulence dissipation scale (∼1 mm). Computers can only simulate some of the scales; the others, at the unresolved scales of motion and exchanges, must be parameterized for each type of phenomenon (waves, eddies, currents, rain/slicks, ...) in terms of its effects on the resolved scales. These neglected sub-grid processes must then be properly 'calibrated' and taken into account in order to achieve accurate energy transfers at the ocean-atmosphere interface, as well as to ensure stable numerical simulations. Some very high resolution physics-based models can likely reach the necessary accuracy, and will certainly be used to generate sets of reliable results to refine surrogate models or metamodels to enter DTEs. Yet we must admit that computational simulations may also reach some limiting "irreducible imprecision" compared to measured quantities at the ocean-atmosphere interface for different turbulent regimes, e.g. very extreme events. This limiting feature explains the observed irreproducibility among different model schemes that are supposed to be solving the same problem.
In that context, ultra-high resolution satellite observations, especially of marine-atmosphere extreme events, have long been known to be essential to determine whether the results seen in high-resolution coupled ocean-atmosphere models are realistic. In this endeavour, the Harmony mission seeks to provide more direct observations to help quantify and calibrate fine-scale air-sea interaction processes. The Harmony mission will thus open this new regime of high-resolution observations of upper-ocean and lower-atmosphere dynamics to inform and drive the development of the next generation of simulation schemes to enter DTEs. In particular, Harmony’s directional and combined observations down to very high spatial resolution will deliver evidence to improve the representations of the rapidly fluctuating ocean-atmosphere components. The Harmony mission will thus further provide a new systematic capability for dealing with the changing regimes of uncertainty at the ocean-atmosphere interface.
DataCAP: Sentinel datacubes, crowdsourced street-level images and annotated benchmark datasets for the monitoring of the CAP
Recently, the Common Agricultural Policy (CAP) has undergone radical changes with respect to both the direct payments pillar (Pillar I) and the rural development pillar (Pillar II), particularly in the way they are monitored. This fast-paced transitioning will continue over the next years, shifting towards the CAP 2020+ reform, where the operating model will get progressively simplified and improved. In order to do that, big space-borne Earth Observations (EO), advanced ICT technologies and Artificial Intelligence (AI) have been and will continue to be key enablers.
The Sentinel satellite missions provide frequent optical and Synthetic Aperture Radar (SAR) images of high spatial resolution and have been extensively used for the monitoring of the CAP. Usually, in a Sentinel-based CAP monitoring system, the parcel geometries from the Land Parcel Identification System (LPIS), which links each parcel to its declared crop type label, are integrated with the Satellite Image Time-series (SITS). Then the Sentinel SITS and the LPIS objects and labels are used to feed AI models for downstream CAP monitoring services, such as crop classification and grassland mowing detection. EO-based CAP monitoring systems need to be able to process and visualize very large amounts of satellite data and, for this reason, big Earth data management technologies, such as datacubes, are immensely useful. In more detail, datacubes are arrays of four dimensions, namely longitude, latitude, time and EO product, which enable the efficient management and the simple processing and exploitation of the data.
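To illustrate this four-dimensional layout, the minimal xarray sketch below pulls a parcel-level Sentinel time series from such a cube; the store path, variable and dimension names are hypothetical.

```python
# Minimal sketch of a four-dimensional datacube (product, time, lat, lon)
# and of extracting a parcel-level time series from it. Store path,
# variable and dimension names are hypothetical placeholders.
import xarray as xr

cube = xr.open_zarr("s3://example/datacap-cube.zarr")

# Select one EO product and average over a parcel's bounding box to obtain
# the time series that would feed a crop classification model
ndvi = cube["sentinel2"].sel(product="NDVI")
parcel = ndvi.sel(lon=slice(33.00, 33.01), lat=slice(35.01, 35.00))  # lat assumed descending
series = parcel.mean(dim=("lat", "lon"))
```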
In order to harness the massive satellite data using AI and thus enable timely and effective agriculture monitoring, we are still missing a crucial component: the ground truth. While satellite data is widely available, associated labels and in-situ data are not, and they are now the most expensive data component to obtain, not only in terms of monetary cost, but also in terms of time, manpower and the availability of expert knowledge for annotation. As a result, in-situ data and labels are frequently the missing ingredient for i) converting raw data into training datasets for Machine Learning (ML) models, ii) evaluating their performance and iii) making manual decisions when ML is not enough. Thus far, the CAP’s Paying Agencies (PAs) have been mandated to check a small percentage of the total number of farmers’ declarations, either through field visits or through visual inspection of very high resolution images. These procedures are non-exhaustive, time-consuming, complex, and reliant on the inspector’s abilities. In this regard, it is the aim of the new CAP to shift from costly sample-based inspections towards wall-to-wall monitoring of the CAP, predominantly using space-borne EO. Nevertheless, EO data and EO-driven information need to be accompanied by timely in-situ observations. Typical in-situ data collection methods are expensive and time-consuming and therefore cannot provide continuous data streams. However, crowdsourced street-level images, or images at the edge, constitute an excellent alternative source.
In this work, we demonstrate DataCAP, a multi-level data solution for the monitoring of the CAP. DataCAP comprises Sentinel datacubes, street-level images of crops, crop type labels and annotated benchmark datasets for ML. DataCAP addresses both the AI4EO community and the CAP stakeholders, and particularly PAs. It offers easy and efficient searching, storing, pre-processing and analyzing of big EO data, but also visualization tools that combine satellite and street-level imagery for validating the AI models. Using the datacubes, one can generate Sentinel analysis ready data in any form, i.e., pixel, patch, parcel datasets, to then feed their AI models for CAP monitoring (e.g., crop classification, grassland mowing detection). Using the DataCAP’s visualization component, displaying parcel-focused Sentinel time-series and any available street-level images, data scientists and PA inspectors can verify the decisions of the crop classification and/or grassland mowing algorithms through visual inspection. Additionally, street-level images, Sentinel-1 and Sentinel-2 patches are annotated using the LPIS declarations and are offered in the form of benchmark datasets for crop classification. Using both benchmark datasets, we have developed and applied a number of Deep Learning models to classify crop types.
Currently, DataCAP is being populated with Sentinel data and street-level images coming from the Netherlands and Cyprus. The street-level images are harvested through the Mapillary API, and for the pilot in Cyprus we perform our own collection campaigns that we in turn contribute to the Mapillary database. Inspectors from the Cypriot PA have smart-phones mounted on their service cars and capture thousands of street-level images as they do their daily inspections. This way, we secure a continuous data stream of meaningful photos without any additional cost, simply by taking advantage of the existing operations of the PA.
Acknowledgement: This work has been supported by the ENVISION (No. 869366) and the CALLISTO (No. 101004152) projects, which have been funded by EU Horizon 2020 programs.
Fires are among the most significant disturbances decreasing forest biomass (Frolking et al. 2009, Pugh et al. 2019). They may cause severe ecosystem degradation and result in loss of human life, economic devastation, social disruption, and environmental deterioration (Stephenson et al. 2012). The occurrence of forest fires is also expected to increase due to the changing climate (Szpakowski & Jensen 2019, Venäläinen et al. 2020), which can, in turn, accelerate global warming through the increased release of CO2 and the reduced uptake of CO2 by vegetation (Flannigan et al. 2000, Oswald 2007).
A variety of remote sensing technologies have been used for mapping and monitoring spatial, temporal, and radiometric dimensions of forest fires (Banskota et al. 2014). Optical satellite observations (e.g. Landsat, MODIS) have widely been applied for estimating fire impacts (i.e., burned area) (Lentile et al. 2006, Chuvieco et al. 2019, 2020). However, there is a need for approaches quantifying burned biomass in a comprehensive manner (Bolton et al. 2017).
Laser scanning offers an additional dimension for optical satellite missions as it generates 3D information on trees and forests. Terrestrial laser scanning (TLS) especially provides details of tree stems (Liang et al. 2014), crowns (Seidel et al. 2011, Metz et al. 2013), and branches (Pyörälä et al. 2018), as well as biomass (Calders et al. 2015). As multitemporal TLS datasets become more available, it becomes possible to monitor changes in trees (Luoma et al. 2019, 2021) and forests (Yrttimaa et al. 2020).
The aim of the study is to quantify the burnt forest biomass of a controlled burning carried out in boreal forest. In other words, we will develop a methodology that relates the spectral response of burned biomass in Sentinel-2 optical satellite imagery to biomass changes measured with terrestrial laser scanning.
We investigated a study site in the Nuuksio national park in southern Finland. The study site covered 1.7 ha, and the controlled burning was carried out on June 8th, 2021. In controlled burnings carried out in Finland, only the vegetation on the forest floor is burned. In other words, the fire is not allowed to spread to trees, but it burns grasses, twigs, and the fuel load on the forest floor (e.g. cleared/cut suppressed trees).
A plot of 1 ha was established within the study site and TLS data were acquired twice in the summer of 2021, between June 4 and 6 and between June 28 and 30 (i.e. before and after the burn). Scan locations were placed every 10 metres, for a total of 100 scans. We used a RIEGL VZ400i scanner, which uses the time-of-flight measurement principle and records multiple returns from each emitted laser pulse. We used an angular scan resolution of 40 mdeg (i.e. 0.7 mrad), resulting in a point spacing of 14 mm at 20 m distance, and a laser pulse repetition rate of 1.2 MHz. The scans were filtered and registered into one harmonized point cloud with the RiSCAN PRO software.
The Sentinel-2 Level-2A product was used here as it includes scene classification and atmospheric correction. Sentinel-2 Level-2A imagery from May 24 to July 3, 2021 was utilized, and the spectral response (i.e., the normalized burn ratio, NBR) was generated for each date. NBR uses the near-infrared (NIR) and shortwave-infrared (SWIR) wavelengths, and it was designed to take advantage of the different responses of disturbed and undisturbed areas in the NIR and SWIR spectral regions (Cohen and Goward, 2004). The NBR has been shown to be related to the structural components of vegetation (Epting & Verbyla 2005, Pickell et al. 2016); thus, it was utilized here as a measure of the burned forest biomass at the controlled burning site.
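For reference, NBR is computed as (NIR − SWIR) / (NIR + SWIR); for Sentinel-2, B8A (NIR) and B12 (SWIR) are commonly used. A toy numpy sketch follows, with reflectance values chosen purely to reproduce the before/after NBR values reported below.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR)."""
    nir, swir = np.asarray(nir, float), np.asarray(swir, float)
    return (nir - swir) / (nir + swir)

# Toy reflectances standing in for Sentinel-2 B8A and B12, chosen so that
# NBR drops from ~0.44 (pre-fire) to ~0.23 (post-fire), i.e. dNBR ~0.21
nbr_pre = nbr(0.300, 0.117)   # ~0.44
nbr_post = nbr(0.300, 0.188)  # ~0.23
dnbr = nbr_pre - nbr_post     # ~0.21
```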
The points of the before- and after-fire TLS datasets were classified into ground points and non-ground points (i.e. vegetation points) using the lasground tool in the LAStools (rapidlasso GmbH) software, and before and after digital terrain models (DTMs) were generated from a triangulated irregular network. Parameters for normalization in the lasground tool were tuned according to Ritter et al. (2017).
The difference between the before and after DTMs was analysed to quantify the burned biomass on the forest floor, and this was linked to the change in NBR values before and after the controlled burning. The NBR value before the controlled fire was ~0.44 and declined to 0.23 after the burn. As only the vegetation on the forest floor is burned in a controlled burning, the preliminary results indicate that even this kind of lower-severity burn can be identified from a within-year optical satellite time series. This already brings new knowledge, as previously mainly stand-replacing fires have been identified from yearly Landsat time series (White et al. 2017).
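A minimal sketch of the DTM differencing step is given below, assuming two co-registered rasters on the same grid; the file names are placeholders.

```python
# Sketch: difference co-registered before/after DTMs to estimate the volume
# of material consumed on the forest floor. File names are placeholders.
import numpy as np
import rasterio

with rasterio.open("dtm_before.tif") as pre, rasterio.open("dtm_after.tif") as post:
    dz = pre.read(1) - post.read(1)                     # surface lowering in metres
    cell_area = abs(pre.transform.a * pre.transform.e)  # pixel area in m2

burned = np.clip(dz, 0, None)                           # keep only losses
volume_m3 = float(np.nansum(burned) * cell_area)
print(f"consumed surface-fuel volume: {volume_m3:.1f} m3")
```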
References
Banskota, A., et al. 2014. Forest monitoring using Landsat time series data: A review. Canadian Journal of Remote Sensing 40: 362-384.
Bolton, D.K., et al. 2017. Assessing variability in post-fire forest structure along gradients of productivity in the Canadian boreal using multi-source remote sensing. Journal of Biogeography 44: 1294-1305.
Calders, K., et al. 2015. Nondestructive estimates of above-ground biomass using terrestrial laser scanning. Methods in Ecology and Evolution 6: 198-208.
Chuvieco, E., et al. 2019. Historical background and current developments for mapping burned area from satellite Earth observation. Remote Sensing of Environment 225: 45-64.
Chuvieco, E., et al. 2020. Satellite remote sensing contribution to wildland fire science and management. Current Forestry Reports 6: 81-96.
Cohen, W.B. and Goward, S.N., 2004. Landsat's role in ecological applications of remote sensing. Bioscience 54: 535-545.
Epting, J. & Verbyla, D. L. 2005. Landscape Level Interactions of Pre-Fire Vegetation, Burn Severity, and Post-Fire Vegetation over a 16-Year Period in Interior Alaska. Canadian Journal of Forest Research 35: 1367–1377.
Flannigan, M.D., et al. 2000. Climate change and forest fires. Science of The Total Environment 262: 221-229.
Frolking, S., et al. 2009. Forest disturbance and recovery: A general review in the context of spaceborne remote sensing of impact on aboveground biomass and canopy structure. JGR Biogeosciences 114: G00E02.
Lentile, L.B., et al. 2006. Remote sensing techniques to assess active fire characteristics and post-fire effects. International Journal of Wildland Fire 15: 319-345.
Liang, X., et al. 2014. Automated stem curve measurement using terrestrial laser scanning. IEEE Transactions on Geoscience and Remote Sensing 52: 1739-1748.
Luoma, V., et al. 2019. Examining changes in stem taper and volume growth with two-date 3D point clouds. Forests 10(5): 382.
Luoma, V., et al. 2021. Revealing changes in the stem form and volume allocation in diverse boreal forests using two-date terrestrial laser scanning. Forests 12: 835.
Metz, J., et al. 2013. Crown modeling by terrestrial laser scanning as an approach to assess the effect of aboveground intra- and interspecific competition on tree growth. Forest Ecology and Management 213: 275-288.
Oswald, B.P. 2007. San Diego Declaration on Climate Change and Fire Management: Ramifications for fuels management. In: Butler, B.W., Cook, W. (comps) The fire environment, management, and policy. Conference proceedings. 26-30 March 2007, Destin, FL, USA.
Pickell, P.D., et al. 2016. Forest recovery trends derived from Landsat time series for North American boreal forests. International Journal of Remote Sensing 37: 138-149.
Pugh, T.A.M., et al. 2019. Important role of forest disturbances in the global biomass turnover and carbon sinks. Nature Geoscience 12: 730-735.
Pyörälä, J., et al. 2018. Quantitative assessment of Scots pine (Pinus sylvestris L.) Whorl structure in a forest environment using terrestrial laser scanning. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 11: 3598-3607.
Ritter, T., et al. 2017. Automatic mapping of forest stands based on three-dimensional point clouds derived from terrestrial laser-scanning. Forests 8: 265.
Seidel, D., et al. 2011. Crown plasticity in mixed forests-Quantifying asymmetry as a measure of competition using terrestrial laser scanning. Forest Ecology and Management 261: 2123-2132.
Stephenson, C., et al. 2012. Estimating the economic, social and environmental impacts of wildfire in Australia. Environmental Hazards 12: 93-111.
Szpakowski, D.M. & Jensen, J.L.R. 2019. A review of the applications of remote sensing in fire ecology. Remote Sensing 11: 2638.
Venäläinen, A., et al. 2020. Climate change induces multiple risks to boreal forests and forestry in Finland: A literature review. Global Change Biology 26: 4178-4196.
White, J.C., et al. 2017. A nationwide annual characterization of 25 years of forest disturbance and recovery for Canada using Landsat time series. Remote Sensing of Environment 194: 303-321.
Yrttimaa, T., et al. 2020. Structural Changes in Boreal Forests Can Be Quantified Using Terrestrial Laser Scanning. Remote Sensing 12: 2672.
With the launch of the Sentinel-1 and Sentinel-2 missions, the use of EO data for monitoring agricultural production at the parcel level has really opened up. In parallel, evolutions in computing technology as well as the rise of artificial intelligence (AI) in the EO domain have enabled researchers and industry to exploit the wealth of information in these EO data. One of the main bottlenecks that remain is the availability of reliable and abundant in-situ data, which are needed to convert EO data into meaningful information. Concrete examples are information on crop type, management practices, biomass production and yield.
There is a plethora of data available, for example through open-access publications or initiatives from international organizations such as the Group on Earth Observations Global Agricultural Monitoring Initiative (GEOGLAM) Joint Experiment for Crop Assessment and Monitoring (JECAM) sites, the International Institute for Applied Systems Analysis (IIASA) citizen science platforms (LACO-WIKI, GEO-WIKI), the Radiant MLHub, the Future Harvest (CGIAR) centers, the National Aeronautics and Space Administration Food Security and Agriculture Program (NASA Harvest), etc. However, these data are scattered over many different sources, lack standardization and/or have incomplete metadata. This hampers the re-use of these data by others, causing an inefficient use of resources, while also impacting the resulting product quality.
To address this problem, the added value of the centralized in-situ data is exploited within the e-shape project. The system under study is a combination of three components: (i) the CropObserve app, specifically designed to facilitate the easy collection of information at the parcel level; (ii) AGROSTAC for the curation and public dissemination of the data, and (iii) EO-based monitoring services that are calibrated/validated with these in-situ data. The different components are discussed in more detail below.
CROPOBSERVE
The first component is the CropObserve app, initially developed by IIASA to enable the collection of crowd-sourced information on agricultural fields by non-experts. It enables parcel-based information to be collected directly in the field. The information that can be provided is grouped into four categories, i.e. crop type, phenological stage, visible damage, and management activity. Each category is optional and can be left open.
- Crop type: a cascading selection menu allows the user to provide the crop type at different levels of detail. This keeps the overview of crop types without cluttering the screen and allows users to provide only the information they are confident about, as not all users are agricultural experts (e.g. for Winter Wheat, the cascading options are Cereals – Wheat – Winter Wheat).
- Phenological stage: this refers to the condition of the crops and/or field at the moment the field is visited. Only a few generic crop development stages are included, as detailed stages vary between crop types.
- Visible damage: a number of damage types can be logged, such as drought, frost or flood.
- Management activity: this enables tracking specific activities that occur on the field on a specific day, such as ploughing, planting, or harvest. It is important to distinguish between logging the current state (e.g. the field is harvested, but the user does not know when this happened), which should be logged as a phenological stage, and the specific action on the field (e.g. Harvest, which occurs at the specific moment/day of providing the input in the app).
All data collected through the CropObserve app are automatically made publicly available through AGROSTAC (see below).
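To make the four-category structure concrete, the following minimal sketch shows how a single CropObserve-style record could be represented. The field names and types are illustrative assumptions, not the app's actual data model:

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class ParcelObservation:
    """Hypothetical record mirroring the four optional observation categories."""
    parcel_id: str
    observed_on: date
    crop_type: Optional[List[str]] = None  # cascading levels, e.g. ["Cereals", "Wheat", "Winter Wheat"]
    phenology: Optional[str] = None        # generic development stage at the visit
    damage: Optional[str] = None           # e.g. "drought", "frost", "flood"
    management: Optional[str] = None       # dated field action, e.g. "harvest"
```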
AGROSTAC
For centralized hosting and curation of in-situ data, WENR and VITO initiated AGROSTAC (Janssen et al., 2012). AGROSTAC collects and harmonizes georeferenced open data around key agronomic observations such as crop type, phenology, biomass, yield and leaf area. Published open data sets are screened for key observations, and selected data go through a dedicated data curation procedure. This is a crucial step to ensure re-use of data beyond the original purpose of collection. In this procedure, metadata are checked and completed using all information available in the data files, supporting documents and associated publications. Data are converted into standard units, and phenology events are mapped to the BBCH scale. As a result, data can be offered in a FAIR manner (Findable, Accessible, Interoperable, Reusable), ready for use. To date, AGROSTAC includes data sets from ODJAR (odjar.org), generic repositories like DataVerse (https://dataverse.harvard.edu), specific initiatives and phenology networks. All data collected through CropObserve are ingested into AGROSTAC, where they undergo quality checks before being published and made accessible to the public.
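As an illustration of the phenology-mapping step just described, the sketch below maps generic event labels onto BBCH principal-stage codes. The BBCH code values shown are standard, but the source labels and the mapping itself are hypothetical simplifications; AGROSTAC's actual curation rules are richer:

```python
# Illustrative mapping of free-text phenology labels to BBCH codes.
BBCH_MAP = {
    "sowing": 0,      # BBCH 00: dry seed / sowing
    "emergence": 9,   # BBCH 09: emergence
    "flowering": 65,  # BBCH 65: full flowering
    "maturity": 89,   # BBCH 89: fully ripe
}

def to_bbch(event_label: str) -> int:
    """Map a curated phenology label to its BBCH code (raises KeyError if unknown)."""
    return BBCH_MAP[event_label.strip().lower()]
```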
EO-BASED SERVICE DEVELOPMENT
The data made available in AGROSTAC, coming from CropObserve and other sources, can be an invaluable source of information to calibrate and validate EO-based monitoring services. Here, an example is provided for the development of tools to monitor crop calendars, and specifically two events: crop emergence and harvest.
The initial development of the methodologies was done with a limited data set that was already available through another project. These data were used to make decisions on the methodological set-up (e.g. which AI methods, how to go from model outputs to an exact harvest date). A detailed description of the method itself is given in Bonte et al. (2021). Through this exercise on the integrated in-situ data flows, the transfer from a locally trained model to a performant monitoring method could be made by including training data with more variability in crop types, growing conditions, soil types, etc. On the one hand, the robustness of the harvest detection method could be evaluated at a European scale; on the other hand, the models could be re-trained to be more performant at this scale.
Through this workflow with its integrated components, the importance of high-quality in-situ data became very clear. For many monitoring services, the limited availability of proper in-situ data over larger regions is the main bottleneck that hampers scaling-up, and thus operational rollout. This is to a large extent mitigated through the centralized and curated data dissemination via AGROSTAC, while the CropObserve app enabled the collection of the needed data over Europe. As such, this work also supports the GEOGLAM in-situ data working group, whose goal is to build a community of practice to openly share agricultural in-situ data to promote research and operational activities for global agricultural monitoring.
REFERENCES
Bonte, K., Moshtaghi, M., Van Tricht, K., & Tits, L. (2021, July). Automated Crop Harvest Detection Algorithm Based on Synergistic Use of Optical and Radar Satellite Imagery. In 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS (pp. 5981-5984). IEEE.
Janssen, S., van Kraalingen, D., Boogaard, H., de Wit, A., Franke, J., Porter, C. and Athanasiadis, I.N., 2012. A generic data schema for crop experiment data in food security research. In Proceedings of the sixth biennial meeting of the International Environmental Modelling and Software Society (pp. 2447-2454).
Terrestrial ecosystems provide a very wide range of essential ecosystem services, but these services are under increased levels of stress due to climate change. Ecosystem structure and climate are closely linked: changes in climate lead directly to physical changes in ecosystem structure and vice versa. To improve our understanding of this relation and of ecosystem resilience, we require information on the fine-scale structural heterogeneity between individual trees. This will be key for effective forest management and to support climate mitigation actions appropriately. Drought experiments, which simulate drought conditions by excluding rainfall from a given zone, provide highly valuable information on the mechanisms involved in the response of ecosystems to drought (Bonal et al., 2016; Tng et al., 2018; Meir et al., 2018). Novel techniques using 3D laser scanning (LiDAR – Light Detection and Ranging) provide a new way to estimate the structure of individual trees. Terrestrial laser scanning (TLS) can measure the canopy structure in 3D with high detail, and several algorithms have recently been developed to produce full 3D models of trees down to fine (cm) scale. Terryn et al. (2020) calculated 17 different structural tree metrics in the context of tree species identification. Within this study we explore similar structural metrics to determine the long- and short-term effects of water availability on tropical tree structure. The long-term effect is reflected through three wet tropical rainforest sites on a rainfall gradient (2000-4000 mm/y). The short-term effect will be investigated through an induced drought experiment which has been re-scanned with TLS after a time span of three years. For this purpose we will extract about 130 individual trees from the TLS data of the three tropical forest plots in Australia. Quantitative Structural Models (QSMs) will be built with TreeQSM to obtain the individual tree structures. Tree structures from the three sites will be compared to assess the long-term adaptation. The structure of the control trees will also be compared with that of the drought-induced trees using structural metrics obtained from the QSMs.
Introduction
We present an overview of the backgrounds and recent developments of the EuroCrops dataset [3] and its crop taxonomy, denoted as the Hierarchical Crop and Agriculture Taxonomy version 2 (HCATv2), alongside an exploration of its potential use-cases and possibilities.
Background
Within recent years, an increasing number of member states of the European Union have decided to release data they initially collected in the scope of agricultural subsidy control. These data, containing the crop type together with the corresponding geo-referenced field parcel polygon, can be used as reference data for various applications, such as vegetation analysis and crop classification with satellite imagery, biodiversity monitoring, and assessing the impact of climate change on crop harvests. So far, most publications focus on only a small fraction of the available data due to the exponentially increasing effort required to homogenise the data at the transnational level [2, 5].
General Problems
Firstly, the collection of the data itself proves to be more laborious than widely assumed. Even though many countries decide to release their data to the public, the distribution platforms differ widely, ranging from the commonly known INSPIRE platform (inspire.ec.europa.eu) to websites that are only available in the national language. Some countries do not even offer an option to download the data, but are willing to send it to those who request it. We therefore connected with official GIS and agricultural authorities from European Union member states to ensure an accurate collection of available crop datasets.
Secondly, the majority of the subsidy control data we obtained comes with country-specific crop identifiers and class names in the respective national language. An automatic translation into English does not suffice to grasp the complexity of these classes, and the former lack of a common taxonomy capturing all country-specific schemes made it challenging to make use of the data.
Main Results
With EuroCrops and the updated corresponding taxonomy HCATv2, we propose an approach to make data from a variety of countries available and comparable.
EuroCrops
As already introduced earlier [3, 4], we are compiling a dataset that includes all publicly available crop reference data from the European Union. By November 2021, we had gathered data from sixteen different countries, i.e., Austria, Belgium, Germany, Denmark, Estonia, Spain, France, Croatia, Lithuania, Latvia, the Netherlands, Portugal, Romania, Sweden, Slovenia and Slovakia. The years for which we have received data from the aforementioned countries, alongside the data itself, can be found on our maintained website www.eurocrops.tum.de. As explained before, collecting the data is not sufficient; we therefore developed a new taxonomy into which we fit the data gathered within the past year.
HCATv2
The Hierarchical Crop and Agriculture Taxonomy (HCAT), as presented earlier [3], is derived from the Copernicus EAGLE matrix [1]. Efforts to develop the taxonomy are still ongoing, as each newly obtained dataset must be fitted manually; the complete HCATv2 is expected to be released within 2022. Compared to its predecessor, HCATv2 includes five times more classes, organised across six levels, and will eventually aim to capture most crop types cultivated within the European Union.
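To illustrate how a harmonised hierarchical taxonomy can be used in practice, the sketch below maps a national crop label to an ordered list of class levels. Both the mapping table and the level names are invented placeholders for illustration and do not reproduce actual HCATv2 content:

```python
from typing import Dict, List, Tuple

# Hypothetical lookup: (country code, national label) -> ordered class path,
# coarsest to finest. Real HCATv2 mappings are published with the dataset.
NATIONAL_TO_HCAT: Dict[Tuple[str, str], List[str]] = {
    ("DE", "Winterweizen"): ["arable_crops", "cereals", "wheat", "winter_wheat"],
}

def map_to_hcat(country: str, national_label: str) -> List[str]:
    """Return the hierarchical class path for a national crop label."""
    path = NATIONAL_TO_HCAT.get((country, national_label))
    if path is None:
        raise KeyError(f"no HCAT mapping for {country}/{national_label}")
    return path
```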
Context
With EuroCrops and HCATv2, we hope to give researchers the opportunity to develop models that generalise better to unseen data, while being able to take into account as many classes as desired. In addition, by keeping most of the information of the original datasets and publishing the mappings of the national crop classes to HCATv2, we want to encourage authorities, industry, and academia to make use of and contribute to a harmonised European crop taxonomy.
Perspective
We anticipate that the development and publication of our dataset will persuade more countries and districts to publish their crop data, leading to a European transnational dataset.
References
[1] S. Arnold, B. Kosztra, G. Banko, G. Smith, G. Hazeu, M. Bock, and N. V. Sanz, “The EAGLE concept – a vision of a future European land monitoring framework,” in EARSeL Symposium proceedings, "Towards Horizon 2020", 2013, pp. 551–568.
[2] M. Rußwurm, C. Pelletier, M. Zollner, S. Lefèvre, and M. Körner, “BreizhCrops: A Time Series Dataset for Crop Type Mapping,” in ISPRS – International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLIII- B2-2020, 2020, pp. 1545–1551. DOI : 10.5194/isprs-archives-XLIII-B2-2020-1545-2020.
[3] M. Schneider, A. Broszeit, and M. Körner, “EuroCrops: A pan-European dataset for time series crop type classification,” in Proceedings of the Conference on Big Data from Space (BiDS), P. Soille, S. Loekken, and S. Albani, Eds., Publications Office of the European Union, 2021. DOI : 10.2760/125905.
[4] M. Schneider and M. Körner, TinyEuroCrops, Dataset, Technical University of Munich (TUM), 2021. DOI: 10.14459/2021MP1615987.
[5] M. O. Turkoglu, S. D’Aronco, G. Perich, F. Liebisch, C. Streit, K. Schindler, and J. D. Wegner, “Crop mapping from image time series: Deep learning with multi-scale label hierarchies,” Remote Sensing of Environment, vol. 264, p. 112 603, 2021. DOI: 10.1016/j.rse.2021.112603.
Background and aim: Seasonal carbon fluxes over Amazonia are poorly resolved both spatially and temporally. This limits our ability to predict how climate change affects this globally important carbon sink. One reason for this is the lack of quantitative data on patterns of leaf phenology (leaf flushing, leaf shedding, leaf lifespan, etc.). Seasonal patterns of vegetation indices derived from spectral imagery suffer from a suite of confounding factors such as seasonally varying aerosol contamination, water vapor content, cloud cover, and sun-sensor geometry effects. As a result, studies of Amazon forest seasonality based on satellite imagery disagree on whether there is more greenness in the dry season than in the wet season. Furthermore, the relative contributions of leaf age and leaf area to canopy greenness are difficult to assess from passive remote sensing alone. Both components may play a role in determining the seasonality of the water and carbon exchanges with the atmosphere.
There is therefore a need to characterize temporal patterns of LAI and greenness separately. This has to be done at a scale large enough to capture inter-species variability and to cover an area commensurate with the pixel size of low-spatial-resolution/high-temporal-frequency satellite imagery such as MODIS (Moderate Resolution Imaging Spectroradiometer) or the geostationary ABI (Advanced Baseline Imager). Such data may help resolve discrepancies between observed and modeled gas exchange at eddy covariance flux tower sites that arise when temporal variation in LAI and in the potential carboxylation rate per unit leaf area is neglected.
Material and methods: Year-round UAV acquisitions were conducted from October 2020 to November 2021 with RGB, multispectral and LiDAR sensors within the Guyaflux tower footprint at Paracou Field Station, French Guiana. LiDAR data were acquired using a Riegl Minivux UAV1 sensor encased within the Yellowscan VX20 system, along with an Applanix 20 IMU, using post-processed kinematic differential GNSS. The miniature LiDAR was calibrated against a more sensitive full-waveform LiDAR (Riegl LSQ780), and transmittance estimates were validated against independent light sensor measurements.
Two overlapping regions of interest (ROI) were defined and scanned every two to three weeks (21 sampling dates). A 7-ha core area, contributing an estimated 25% to the gas exchange measured at the flux tower, was covered at high density (>200 points.m-2, median density 340 points.m-2). A larger area (32 ha), contributing an estimated 60%, was covered at lower density (>50 points.m-2, median 70 points.m-2).
We used the open-source AMAPvox software (http://www.amapvox.org) to analyse laser signal extinction in the canopy. The scene was voxelized at 1 m resolution. In each voxel, the local extinction rate was computed from the incoming and outgoing laser pulses. Plant Area Density (PAD) was then derived from the extinction rate using the Beer-Lambert turbid-medium approximation.
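A minimal sketch of the turbid-medium step, assuming per-voxel energies of entering and exiting pulses and a projection function G of 0.5 (spherical leaf angle distribution, an assumption); AMAPvox's actual estimator is more refined:

```python
import numpy as np

def pad_from_transmittance(entering, exiting, dz=1.0, G=0.5):
    """Beer-Lambert sketch: PAD from the fraction of pulse energy crossing a voxel.

    entering/exiting: arrays of laser pulse energy entering and leaving each voxel.
    dz: voxel path length in metres. G: projection function (assumed 0.5).
    """
    T = np.clip(exiting / np.maximum(entering, 1e-12), 1e-6, 1.0)  # transmittance
    extinction = -np.log(T) / dz    # extinction rate per metre
    return extinction / G           # Plant Area Density (m^2 per m^3)
```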
Time series of 3D maps of PAD were produced for both ROIs (7 and 32 ha).
Fifty light sensors (two above the canopy and 48 at ground level) were also deployed, measuring light transmittance every half hour over the same one-year period.
Results
The first year of data collection has just been completed and data analysis is ongoing. We will show results that give insight into the following topics:
Seasonal variation in LAI: We are generating 1-m² Plant Area Index (PAI) maps by summing PAD vertically at each date over both ROIs. The timing and amplitude of the seasonal variation in PAI will be evaluated at various resolutions: entire ROIs, per ha, per 50x50 m quadrat, and per individual tree crown. Seasonal patterns of LAI at the ground sensor locations will be compared to the light measurements to check the consistency between data sets.
Spatial variability in LAI dynamics: PAI dynamics at fine spatial resolution will be correlated with maps of water availability derived from the Topographic Wetness Index.
Asynchrony between upper and lower canopy: We shall use the dense data set (7 ha ROI), which provides a better sampling rate of the lower canopy, to compute PAI per stratum (above and below 20 m height above ground level). We shall test whether variation in PAI is negatively correlated across the upper and lower strata (as previously noted in some areas of the Amazon) and, if so, at what spatial resolution.
The Infra-Red Sounder (IRS) instrument is the primary payload of the Meteosat Third Generation Sounder satellite (MTG-S). The main objective of the MTG sounding mission is to enhance Numerical Weather Prediction (NWP) capabilities at regional and global scales, through the provision of Atmospheric Motion Vectors (AMV) with higher vertical resolution and frequent information on temperature and water vapour profiles. Additionally, layer-by-layer analysis of the atmosphere will offer greater insight into its complex chemical composition and support atmospheric gas tracing applications, such as air quality and pollution monitoring. The MTG-S satellite and the IRS instrument are being developed under the responsibility of OHB System (Germany), with TAS (France) as MTG’s mission prime contractor and ESA/EUMETSAT as end customers.
The IRS will be the first European hyperspectral sounding instrument in geostationary orbit and will be capable of scanning the full Earth disc (over Europe and Africa) every hour with a spatial resolution of 4 x 4 km² at nadir, covering roughly 640 x 640 km² per stare every 10 seconds. The design of the instrument is based on an imaging Fourier Transform Infrared Spectrometer (FTIR), and it will deliver hyperspectral sounding information in two infrared bands: LWIR (700 - 1210 cm-1) and MWIR (1600 - 2175 cm-1), with a spectral channel interval of 0.625 cm-1. The instrument incorporates two IR detectors of 160 x 160 pixels each, which leads to a total of 51,200 delivered interferograms per stare. The recorded interferograms undergo pre-correction and data compression on board before being sent to ground. Once the data are received on ground, the Level 1 processing algorithms convert the measured interferograms into radiometrically and spectrally corrected spectra.
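To illustrate the core of this Level 1 step: an FTIR instrument records an interferogram versus optical path difference (OPD), and its Fourier transform yields the spectrum. The sketch below is a minimal illustration only, under the assumption of a uniformly sampled interferogram; the operational IRS processing adds apodization, non-linearity corrections and radiometric/spectral calibration, none of which are shown:

```python
import numpy as np

def interferogram_to_spectrum(ifg, opd_step_cm):
    """Uncalibrated magnitude spectrum from an interferogram.

    ifg: 1-D interferogram sampled at a constant OPD step (opd_step_cm, in cm).
    Returns the wavenumber axis (cm^-1) and the spectrum magnitude.
    """
    spectrum = np.fft.rfft(ifg)                              # complex raw spectrum
    wavenumbers = np.fft.rfftfreq(len(ifg), d=opd_step_cm)   # cycles per cm = cm^-1
    return wavenumbers, np.abs(spectrum)
```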
Applications benefiting from the Level 1 science data provided by the IRS instrument are manifold and have yet to be fully explored. For example, by delivering frequent four-dimensional information on humidity, temperature, and wind profiles, the IRS will significantly enhance regional and global NWP, thus improving the early detection (and warning) of rapidly developing atmospheric instability such as severe convective storms. The spectral range of the IRS will also allow the concentration of atmospheric trace gases like ozone and carbon monoxide to be estimated and monitored, leading to enhanced information for air pollution forecasting. Moreover, through information on the composition and density of volcanic ash clouds, ash fallout prediction models will be refined.
The first flight model of the instrument (IRS PFM) is currently in its AIT phase, which will culminate in an optical performance test in vacuum starting in Q2 2022. The instrument verification follows a proto-flight approach, including tests successfully carried out on several development models (i.e. STM, Flat-EM and Core Spectrometer) and the achievement of final qualification on the PFM. The launch of the first MTG-S satellite is planned for end 2023/early 2024.
This paper will provide a detailed overview of:
• the objectives and capabilities of the MTG sounding mission
• the design and development status of the IRS instrument
• the Level 1 science data provided by the IRS instrument and its expected performance
The Copernicus missions Sentinel-4 (S4) and Sentinel-5 (S5) will carry out atmospheric composition observations on an operational long-term basis to serve the needs of the Copernicus Atmosphere Monitoring Service (CAMS) and the Copernicus Climate Change Service (C3S).
Building on the heritage from instruments such as GOME, SCIAMACHY, GOME-2, and OMI, S4 and S5 are imaging spectrometer instruments covering wide spectral bands in the UV, visible, near infrared, and (S5 only) the short wave infrared domain. S4 will monitor key air quality parameters with a pronounced temporal variability by observing NO2, O3, SO2, HCHO, CHOCHO, and aerosols over Europe with an hourly revisit time. In addition to the S4 target species, S5 will observe CO, CH4, and stratospheric O3. S5 will provide composition data with global coverage on a daily basis serving climate, air quality, and ozone/surface UV applications.
A series of two S4 instruments will be embarked on the geostationary Meteosat Third Generation-Sounder (MTG-S) satellites. S4 establishes the European component of a constellation of geostationary instruments with a strong air quality focus, together with the NASA mission TEMPO and the Korean mission GEMS.
The presentation will provide an overview of the S4 and S5 missions and the related science and applications.
EUMETSAT has provided the user community with more than three decades worth of satellite data, starting with the geostationary missions of the Meteosat First Generation, and since 2002 with the Meteosat Second Generation (MSG) series satellites.
EUMETSAT is currently developing the future geostationary program, the Meteosat Third Generation (MTG). The MTG system will host a more advanced 16-channel VIS/IR Flexible Combined Imager (FCI) as well as a Lightning Imager (LI) on its geostationary imaging platform (MTG-I), whereas the sounding platform (MTG-S) will host the MTG InfraRed Sounder (IRS) and the Copernicus Sentinel-4 ultraviolet/near-infrared (UVN) sounding missions. The launch of the first two satellites MTG-I1 and MTG-S1 hosting the imaging and sounding instruments is foreseen in late 2022 and in early 2024, respectively.
The presentation will give an overview of the MTG system design, the system architecture, its observation missions, and some of the main improvements over Meteosat Second Generation (MSG) in terms of new missions and expected product performance.
More specifically, a brief status update will be provided on the Telemetry, Tracking & Commanding Facility (TTCF) and the Mission Operations Facility (MOF). Regarding data processing, the presentation will discuss the status, up to launch and commissioning, of the Instrument Data Processing Facilities for MTG-I and MTG-S (IDPF-I and IDPF-S, respectively), which process the mission data from Level 0 to Level 1, as well as the Level-2 Processing Facility (L2PF). The L2PF is likewise split into MTG-I and MTG-S parts in terms of functionalities and deliveries.
Regarding the overall MTG system activities, the operations preparations are well underway and the so-called system freeze is expected in June 2022. The stability of the LEOP Ground Segment has already been demonstrated, backed by the solid operational knowledge of the LEOP contractor.
System Integration, Verification and Validation (IVV) activities are well underway, with System Version V1.0 completed. Operational Scenario Validation started in December 2021. As a highlight of the system validation activities, a successful full-system end-to-end test (4 runs of 41 hours) involving the Satellite Application Facilities (SAFs) was already completed in July 2021.
In this study, the assessment of the combined wind and wave energy potential is presented for locations in the North Atlantic, which is characterized by high-energy swells generated by remote westerly wind systems associated with extratropical cyclones (Ponce de León and Bettencourt 2021), and in the Mediterranean Sea, where extreme waves are common. The objective is to investigate the feasibility of satellite altimetry-based assessments of the combined wind/wave renewable energy potential on the European shelf, taking advantage of the increased temporal and spatial coverage of the satellite altimetry constellation, composed of 10 past and present altimetry missions compiled in the ESA Sea State Climate Change Initiative database (Dodet et al., 2020).
The method consists of using the homogenized multi-mission altimeter data to estimate site wind and wave power densities. We use the empirical model of Gommenginger et al. (2003) to estimate the energy period, required for the computation of the wave power density, from the altimeter Ku-band significant wave height and radar backscatter coefficient. Wave buoys were used to validate the method.
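For illustration, once the energy period Te has been estimated, the wave power density follows from the standard deep-water wave energy flux per unit crest length, P = ρ g² Hs² Te / (64π); this is the textbook formulation, shown as a sketch, and the study's exact implementation may differ in detail:

```python
import numpy as np

RHO = 1025.0  # sea water density, kg m^-3 (assumed value)
G = 9.81      # gravitational acceleration, m s^-2

def wave_power_density(hs, te):
    """Deep-water wave energy flux per unit crest length (W/m).

    hs: significant wave height (m), e.g. altimeter Ku-band Hs.
    te: energy period (s), e.g. from the Gommenginger et al. (2003) model.
    """
    return RHO * G**2 * hs**2 * te / (64.0 * np.pi)
```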
Using Atlantic and Mediterranean sites as comparators for wind/wave correlation (Fusco et al., 2010), we show that wind and wave energy are relatively correlated in the Mediterranean but not in the North Atlantic, which has implications for the efficient combination of renewable energy sources to make the renewable energy supply more resilient. In particular, the co-location of wind and wave farms only has a strong rationale in locations where wind and wave resources are relatively uncorrelated. To date, while significant effort has gone into individually mapping wind and wave resources, little attention has focused on their temporal correlation. With a drive towards 100% renewable energy, it is important that the complementarity between individual renewable (including marine) energy resources is characterized, so that the most resilient forms of renewable energy, and combinations of renewable sources, are developed.
References
Dodet, G., Piolle, J.-F., Quilfen, Y., Abdalla, S., Accensi, M., Ardhuin, F., Ash, E., Bidlot, J.-R., Gommenginger, C., Marechal, G., Passaro, M., Quartly, G., Stopa, J., Timmermans, B., Young, I., Cipollini, P., Donlon, C., 2020. The Sea State CCI dataset v1: towards a sea state climate data record based on satellite observations. Earth System Science Data 12, 1929–1951, https://doi.org/10.5194/essd-12-1929-2020
Fusco F., Nolan G., Ringwood J., 2010. Variability reduction through optimal combination of wind/waves resources – An Irish case study. Energy 35, 310-325, https://doi.org/10.1016/j.energy.2009.09.023
Gommenginger, C.P., Srokosz, M.A., Challenor, P.G., Cotton P.D., 2003. Measuring ocean wave period with satellite altimeters: a simple empirical model. Geophysical Research Letters VOL. 30, NO. 22, 2150, https://doi.org/10.1029/2003GL017743
Ponce de León S., Bettencourt J.H., 2021. Composite analysis of North Atlantic extra-tropical cyclone waves from satellite altimetry observations, Advances in Space Research 68 762-772, https://doi.org/10.1016/j.asr.2019.07.021
Abdalmenem Owda, Merete Badger
1-0 Introduction
Synthetic Aperture Radar (SAR) has been increasingly used in a wide range of applications, from climate change, monitoring of natural phenomena and hazards, change detection and pollution up to maritime applications. A huge number of daily observations has become available to users around the world on a free, full, and open basis thanks to the Copernicus services. SAR systems are unique remote sensing instruments; they provide high-spatial-resolution data and operate day and night, regardless of cloud coverage and weather conditions.
With the advent of SAR data from the current SAR constellations, a paradigm shift occurred in many maritime applications, especially in offshore wind energy. SAR data have been used to study wind wakes in the far regions of offshore wind farms (OWFs), for wind energy and resource assessment, and for the development and planning of new OWFs. This study aims to characterize the physical structure of the far wake as a function of OWF capacity.
2-0 Wind speed retrieval from SAR data and wind wake analysis
SAR sensors overpass and illuminate the ocean surface with their own illumination system. The recorded echoes of the backscattered signals, expressed as the normalized radar cross section (NRCS), can be related to the wind speed using a geophysical model function (GMF). The GMF relates the NRCS to the sea surface roughness, the radar incidence angle and the relative wind direction; the wind speed is retrieved by inverting the model. In this study, the CMOD5.N function is used to retrieve the wind speed. The function is valid for neutral atmospheric stability conditions.
The CMOD5.N GMF has the form

σ⁰(v, θ, φ) = B₀(v, θ) · [1 + B₁(v, θ) cos(φ) + B₂(v, θ) cos(2φ)]^1.6

where σ⁰ is the NRCS and φ is the angle between the wind direction and the radar azimuth look angle (both measured from the North). The coefficient functions B₀, B₁ and B₂ shape the terms, v is the sea surface wind (SSW) speed at 10 m above sea level, and θ is the incidence angle [1].
The far wake refers to the velocity deficit on the downstream side of an OWF caused by wind turbine operation. The velocity deficit is computed as the difference between the mean wind velocity on the upstream side and the mean velocity on the downstream side.
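A minimal sketch of this computation, assuming arrays of SAR-retrieved wind speeds extracted over upstream and downstream polygons (variable names are illustrative):

```python
import numpy as np

def velocity_deficit_percent(upstream_winds, downstream_winds):
    """Relative velocity deficit (%) between upstream and downstream means.

    upstream_winds / downstream_winds: arrays of SAR-retrieved 10-m wind
    speeds sampled inside the respective polygons; NaNs are ignored.
    """
    u_up = np.nanmean(upstream_winds)
    u_down = np.nanmean(downstream_winds)
    return 100.0 * (u_up - u_down) / u_up
```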
3-0 Study Area and selection criteria
The northern European seas are densely populated with offshore wind farms. They host about 5,566 wind turbines with a combined capacity of about 26 GW across 5 European countries (UK, Germany, Netherlands, Denmark, Belgium), according to the mid-year report 2021 [2]. This study considers OWFs with capacities ranging between 200 and 1000 MW. Table 1 lists the 4 selected OWFs and their characteristics.
Table 1: the selected OWFs for the study with their characteristics.
* We take the commissioning date of the last installed OWF in the cluster, which refers to Meerwind Süd/Ost.
We set up the following criteria before processing the selected OWFs:
1) Only winds blowing from the land towards the sea are considered in this study.
2) The wind direction sector angle is 90°.
3) Only SAR scenes with wind speeds between the cut-in and cut-out speeds were used, since the wind wakes appear most clearly within this speed range.
4) All scenes acquired after the commissioning date of the selected OWFs were considered in the processing.
4-0 Preliminary and Expected results
We investigate the ability of SAR to monitor the impact of OWF capacity on the characteristics of the wind wakes (area, length, maximum wind deficit, wind recovery length, etc.) and the loss of power production in the far-wake regions, and to provide useful information for OWF developers. It is expected that the physical characteristics of the wind wakes will be proportional to the capacity of the OWFs. Figure 1a shows the deficit areas in the far-wake region of the Nordsee cluster; the area immediately behind the cluster clearly experienced the highest wind deficit (more than 5%) over a distance of about 20 km. Additionally, a deficit of about 2% was observed in the areas adjacent to the main deficit; further away, the wind deficit percentage decreased, meaning the wind speed started to recover beyond 20 km from the outer edge of the cluster. For the Butendiek OWF (Figure 1b), the smallest capacity in the study, the situation was completely different in terms of the magnitudes of the physical wake characteristics: smaller deficit areas and shorter wind wakes (less than 10 km) were measured. Lastly, East Anglia ONE's capacity is slightly smaller than that of the Nordsee cluster; it shows a considerable deficit, but in terms of magnitude its deficit area is smaller than what we observed for the Nordsee cluster.
Figure 1: Mean velocity deficit for all Sentinel scenes after the assigned commissioning date of the cluster in Table 1. a, b, and c refer to the Nordsee cluster, Butendiek and East Anglia ONE, respectively. The percentages refer to the average velocity deficit inside the polygon. The black arrows refer to the wind direction sector angle.
Any new OWF installed close to these processed OWFs and clusters will face the severe consequences of the wind wakes of the neighbouring OWFs. In this study, we will quantify these relationships and the magnitude of the power loss for any future OWF located close to the selected OWFs.
5-0 References
[1] H. Hersbach, A. Stoffelen, and S. De Haan, “An improved C-band scatterometer ocean geophysical model function: CMOD5,” J. Geophys. Res. Ocean., vol. 112, no. 3, pp. 1–18, 2007, doi: 10.1029/2006JC003743.
[2] Offshore wind energy 2021 mid-year statistics, accessed online on 10/12/2021 at 14:20, https://windeurope.org/intelligence-platform/product/offshore-wind-energy-2021-mid-year-statistics/
Atmospheric aerosols have the highest impact on surface solar irradiance under cloudless conditions. Aerosols scatter and absorb solar irradiance, resulting in an attenuation of the direct normal irradiance (DNI) and an increase in the diffuse horizontal irradiance (DHI). Ultimately this causes the total global horizontal irradiance (GHI) at the ground surface to decrease.
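The three components are linked by the standard closure relation GHI = DNI·cos(θz) + DHI, where θz is the solar zenith angle; a minimal sketch:

```python
import numpy as np

def ghi(dni, dhi, zenith_deg):
    """Global horizontal irradiance (W/m^2) from its direct and diffuse components.

    dni: direct normal irradiance, dhi: diffuse horizontal irradiance,
    zenith_deg: solar zenith angle in degrees (clamped below the horizon).
    """
    cos_z = np.cos(np.radians(zenith_deg))
    return dni * np.maximum(cos_z, 0.0) + dhi
```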
The pre-monsoon period in the arid and semi-arid regions of the Indian subcontinent is conducive to dust storm events due to the intense surface heating and steep atmospheric pressure gradients in summer. This study analyzes the degradation in accuracy of satellite-retrieved GHI under conditions of high atmospheric dust aerosol content.
The Heliosat method [1] is used to derive cloud index (CI) maps from Meteosat-8 visible channel images for a period in June 2018 during which heavy dust storms were reported in Northern India. Two sources of clear-sky data are utilized for transforming CI to GHI: the Dumortier model [2] with climatological turbidity values, and the McClear method [3] within CAMS with measured or modelled aerosol optical depth (AOD) values. The satellite estimates are validated against ground-measured GHI from a BSRN station. Measurements of particulate matter from a nearby air quality monitoring station are used to analyze the dust AOD modelled by CAMS. The results show a large under-estimation of the satellite-retrieved GHI derived using the McClear model, while the GHI derived with the Dumortier model shows an over-estimation.
References:
1. Rigollier, C., Lefèvre, M., Wald, L., 2004. The method Heliosat-2 for deriving shortwave solar radiation from satellite images. Solar Energy 77(2) 159-169.
2. Dumortier, D., 1995. Modelling global and diffuse horizontal irradiances under cloudless skies with different turbidities. Daylight II, JOU2-CT92-0144, final report vol. 2.
3. Lefèvre, M., Oumbe, A., Blanc, P., Espinar, B., Gschwind, B., Qu, Z., Wald, L., Schroedter-Homscheidt, M., Hoyer-Klick, C., Arola, A., Benedetti, A., Kaiser, J.W., Morcrette, J.-J., 2013. McClear: a new model estimating downwelling solar radiation at ground level in clear-sky conditions. Atmospheric Measurement Techniques 6 2403-2418
The modern mining industry is of considerable importance to the global economy as it delivers a great range of mineral products to industry and household consumers. A consequence of the considerable significance of the mining and mineral processing industry is not only a great diversity of provided mineral resources but also a massive amount of waste generated. In fact, the mining sector is one of the largest, if not the largest, waste producers on Earth (~25-50 Gt per year). The wastes of the mineral industry are generally useless at the time of production, yet they can still be rich in resource ingredients. Unfavourable economics, inefficient processing, technological limitations or mineralogical factors may not have permitted the complete extraction of resource ingredients at the time of mining and mineral processing. In the past, inefficient mineral processing techniques and poor metal recoveries produced wastes with relatively high metal concentrations. Thus, old tailings and waste rock piles that were considered worthless years ago are now "re-mined", feeding modern mining operations (e.g. gold tailings piles in South Africa). Feasibility studies on the possible re-processing of such discarded materials require detailed site information. To date, such feasibility studies commonly rely on time-consuming and costly ground surveys.
The AuBeSa project ("Automated identification, measurement and mineralogic classification of mining heaps and tailings ponds via satellite remote sensing") aims to create a database containing the location, material volume and mineralogical composition of mining heaps and tailings ponds using satellite remote sensing data and artificial intelligence (AI) computing systems. During the project, AI algorithms will be programmed and trained based on satellite images to identify mining locations and collect aforementioned wastes properties. To achieve this, the following activities are pursued:
1. Training of a machine learning (ML) model that extracts mining tailings and heaps from Sentinel-2 satellite images and classifies them automatically (see the sketch after this list);
2. Volume estimation of the identified objects via topographic satellite data;
3. Mineralogical and chemical analysis of the waste material based on hyperspectral PRISMA-satellite data.
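A minimal sketch of the pixel-based classification in activity 1, under stated assumptions (a feature matrix of Sentinel-2 band reflectances per pixel and integer class labels); the project's actual model, features and training data will differ:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_waste_classifier(X, y):
    """Fit a random forest to labelled Sentinel-2 pixels.

    X: (n_pixels, n_bands) reflectance matrix.
    y: integer labels, e.g. 0 = background, 1 = heap, 2 = tailings pond
       (hypothetical class scheme for illustration).
    """
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, y)
    return clf

# Usage: predicted = train_waste_classifier(X_train, y_train).predict(X_scene)
```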
The methodology has initially been developed for mine sites in arid and semi-arid areas of Chile and Peru, because these areas allow an easier analysis of the land surface and have existing ground-truth data sets. In the future, mine sites in areas with different climate and vegetation zones as well as unknown waste characteristics will be investigated. The resulting database is intended to provide information on waste dumps and tailings heaps to the mining sector so that well-informed decisions can be made on the possible extraction of remaining resources from these materials. It is expected that the automated detection, surveying and classification of mine wastes using satellite remote sensing and AI systems will (a) allow faster decisions on the likely resource potential of individual waste repositories, and (b) reduce the dependence on ground surveys.
Acknowledgements:
This work was supported by the German Federal Ministry of Education and Research and is part of the AuBeSa project (grant number 01|S20083B).
Net-zero commitments require a transition to cleaner transport and renewable energy storage, but this poses many challenges for energy and mineral supply chains. Low-carbon technology is mineral-intensive. If our planet is to remain within the COP21 Paris Agreement commitment of a global average temperature increase of well below 2°C, the World Bank estimates that more than three billion tons of minerals and metals will be needed to deploy wind, solar and geothermal power, as well as energy storage [1]. Against this background, the demand for battery metals, such as lithium and cobalt, is set to reach 500% of current production levels by 2050 [1]. Mineral supply will be critical in determining the speed and scale at which green energy technologies can lower greenhouse gas emissions and enable climate-resilient development. At the forefront of green technologies are electric vehicles, for which Li-ion rechargeable batteries are a fundamental component. Lithium is also used for the production of other batteries, for example in cell phones and laptops. The battery market currently accounts for 71% of global lithium production [2]. Worldwide lithium production more than tripled from 25,300 tons in 2010 to 82,000 tons in 2020 [2,3].
Within this context, the search for and extraction of lithium is becoming an important revenue source for many world economies. In particular, the U.S. Geological Survey [2] estimates that the three largest lithium resources lie within the South American 'Lithium Triangle': Bolivia, with 21 million tons (Mt) of resources, Argentina, with 19.3 Mt, and Chile, with 9.6 Mt. Security of supply of lithium to the global markets and increasing expectations by consumers for responsibly sourced raw materials result in a growing global interest in lithium resources and their extraction.
This study uses the largest salt flat in the world, the Salar de Uyuni in Bolivia, as a test site to develop a repeatable and seamless workflow to track lithium from its source in the watershed to the salar nucleus, where it reaches its highest concentration. It provides a systems-based understanding of aspects of lithium-brine deposit genesis that can contribute to broader considerations on the reporting of Li resources, such as assigning uncertainty bounds to resource estimates. For this study, open-source Earth observation (EO) data are analysed to support geological and hydrological research. We explore the potential of EO data for several research aspects: (1) Jointing: it may influence fracture flow of groundwater and also be significant in terms of surface area for water-rock interaction, i.e. potentially increasing the "leaching" rates of Li from the bedrock into the water; (2) Weathering: the degree and style of weathering may influence the liberation of Li from rocks into the water; (3) Distribution of clays: clays may restrict the liberation of Li from weathered rock, or may scavenge Li from passing water; (4) Water and moisture: the distribution of water bodies and sources, including active streams, springs etc.; we are building a groundwater recharge model with soil moisture content as input; (5) Geological structure: the presence of neotectonic faults that may disrupt the salar, as well as structures that may provide pathways for the flow of fluids; (6) Lithological mapping and classification: possible refinement of existing geological maps.
In conclusion, we found that by constructing a flexible and repeatable workflow, the question of how lithium reaches the Salar de Uyuni can be addressed. This workflow will support the sustainable management of lithium in the region. Moreover, the provision of "fit for purpose" systems for tracking Li helps fill gaps in existing methods, enabling Li brine resources to be correctly reported.
[1] Kirsten Hund, Daniele La Porta, Thao P. Fabregas, Tim Laing, John Drexhage. Minerals for Climate Action: The Mineral Intensity of the Clean Energy Transition. World Bank Group Report; 2020.
[2] USGS. Lithium: Mineral Commodity Summary. 2021.
[3] USGS. Lithium: Mineral Commodity Summary. 2011.
The modern world is built upon a network of complex, globe-spanning supply chains, where critical natural resources are extracted, processed, and transported with a bare minimum of oversight. Mining of metal ores, mineral and fuel sources is fundamental to both current and future green economies but requires vast earthworks as well as intensive processing and refinement operations. A major agreement from the COP26 summit, the Glasgow Climate Pact, stipulates a “phasing down” of coal rather than a complete phase out. The question follows: how does this translate to real action to mitigate the worst polluting hydrocarbon energy resource? How many countries will continue to use coal as part of their energy mix without the technology or monitoring infrastructure to verify its continued environmental impact? Terrabotics provides easy to understand metrics, enhanced by AI, to collect, organise and make sense of the gamut of geospatial data on coal mine operation, allowing decision makers, regulators and operators to assess the environmental impact of coal mine sites objectively and reliably. Observing vast coal mining operations is highly suited to satellite earth observation owing to its wide spatial coverage, weekly-to-daily revisit times and much lower relative cost compared to ground, drone or aerial surveys. Using an array of optical, thermal and radar sensors aboard satellites, we have blended multi-sensor data analytics from this rich, ever improving data source to create a catalogue of key leading indicators pertaining to site operations, production capacity and ESG metrics.
We present a study of two major coal mines in the United States and Kazakhstan using Terrabotics' Mine Monitoring platform (Minotor™) and data streams from the Energy SCOUT™ product portfolio. Using time series of optical satellite data, we provide an overview of a site's operation, with change detection algorithms highlighting areas of increased activity or new construction. Thermal and emissions data help identify step changes in mining operations, the intensity of ground vehicle movements, the activation of processing facilities and outgassing from extraction. Radar imagery time series detect the location of large vehicles, mining equipment and any other changes in infrastructure across the entire mine site.
Together, we have converted the numerous data feeds from independent satellite sources into valuable integrated metrics and actionable intelligence that give operators, stakeholders and industry observers access to critical ESG performance and production data. Our goal at Terrabotics is to shine a light on critical but opaque natural resource supply chains, with the Minotor™ EO analytics platform providing real-time intelligence and ESG/production forecasting. As we transition to economies free from fossil fuels, we must also meet the demands of a greener economy based on battery technologies that will inevitably require intensive mineral processing and extraction. Without comprehensive, cost-effective routine monitoring solutions for the world's largest and most important mining facilities, we will fail to build a sustainable economy or succeed in a phased transition away from fossil fuels while managing their lasting impact on the environment.
The German sustainability strategy uses three indicators to measure the loss of land resources due to urbanization: the daily rate of urban expansion, the loss of open space, and the density of settlements. The quantitative objective is to reduce land take from a rate of 52 hectares per day in 2019 to less than 30 hectares per day by 2030, and further to zero by 2050. The data source used to measure the indicators is land use statistics aggregated from the cadastral survey. However, changes in the nomenclature of this dataset have negatively affected the reliability of the time series since the introduction of the ALKIS cadastral system in Germany from 2015 onwards. Alternatives at the European level such as the Urban Atlas, Imperviousness and CLC land cover maps are frequently used for monitoring urban land changes instead. But these datasets also come with limitations, including a low update frequency (every 3 or 6 years), coarse mapping units, and the inability to capture land use characteristics that match the intentions of policy objectives for urban land use change. They are therefore not fully suitable for monitoring annual land take in line with sustainable land use objectives.
Against this background, the incora project [in German: Inwertsetzung von Copernicus Daten für die Raumbeobachtung / Adding value to Copernicus data for spatial monitoring], conducted by ILS - Research Institute for Regional and Urban Development gGmbH in cooperation with the BBSR - Federal Institute for Research on Building, Urban Affairs and Spatial Development and mundialis GmbH, explores the potential of Earth observation techniques to support land use monitoring. Based on Sentinel-2 satellite images, our project applied machine learning, automatic training data generation and sub-pixel analysis techniques to produce annual Germany-wide land cover and imperviousness maps. A dedicated companion poster covers the Sentinel-2 data processing steps in detail, while this contribution presents the overall project and its results.
The mapping products and the subsequent change detection results were used to quantitatively measure urban land, open space, and their changes. By integrating auxiliary data, our project further calculated the annual urban land take, the annual loss of open space, the annual change in settlement density, structural characteristics, and other indicators that are of interest to stakeholders. The project results demonstrate the added value of Copernicus data for supporting sustainable land use: firstly, the derived products are spatially continuous and highly consistent as a time series, providing a solid base to study urban land dynamics at the national, regional and municipal levels. Secondly, they provide not only quantitative measurements but also the spatial distribution of urban land changes, which can be used for the analysis of urban sprawl and land fragmentation.
Expert feedback emphasizes that the monitoring of sustainable land use targets requires high validity of the mapping results. We thus highlight our imperviousness mapping, an innovative subpixel analysis method which improves the measurement accuracy of annual land take (see attached figure). The key advantages of this method are that, firstly, it deals with the mixed-pixel problem, which often causes misclassification in urban areas. Secondly, the change layer contains less noise caused by random changes in atmospheric conditions, sun angle, soil moisture and vegetation phenology. One disadvantage is that imperviousness is restricted to a single land cover type. This is compensated by the land cover classification map: image classification provides specific land use classes and is therefore suitable to detect and characterize land use flows from previously open spaces to urban land in more detail. The conference presentation will show how satellite image analysis helps to validate and improve the information sourced from land use statistics, and how the incora project results contribute to more up-to-date and consistent information on land take for the monitoring of sustainable development targets.
The mitigation of human micro-nutrient deficiency (MND) is a major aim of United Nations Sustainable Development Goal 2 to "End hunger, achieve food security and improved nutrition and promote sustainable agriculture" by 2030. MND has been coined the "hidden hunger" because the food supply may be sufficient in quantity but lack quality in terms of vitamins and minerals. Hidden hunger can lead to serious health problems including impaired physical and mental development, susceptibility to various diseases, and reduced learning capacity. The prevalence of MND is assessed at the national level with Food Consumption Tables (FCTs), which report the nutrient content of grain or other crop yield. FCTs are seldom updated, and many countries that lack resources employ FCTs from other countries. In doing so, it is assumed that the nutrient content is relatively stable over space and time. This assumption is a major source of uncertainty in nutrition analysis, because in reality nutrient content varies with crop growth conditions.
Satellite image data may be able to improve estimates of nutrient content, because they can monitor crop growth and development frequently over large areas. The bulk of these data, however, consist of a few broad (> 10 nm) spectral bands. These bands are too coarse to distinguish many of the biophysical properties and biochemical processes related to nutrient content. Hyperspectral image data consist of many narrow bands that are able to make this distinction. Until recently, such data were collected mainly with spectroradiometers in the laboratory or field, or mounted on drones and aircraft. The PRecursore IperSpettrale della Missione Applicativa (PRISMA) platform was launched on March 22, 2019. It is the first spaceborne hyperspectral mission in almost 20 years and ushers in a new era in such missions.
In our study, we used PRISMA hyperspectral narrowbands to predict the concentration of eight important nutrients (Ca, N, S, P, K, Fe, Mg, Zn) in four important global staple grains (corn, rice, soybean, wheat). We compared the performance of PRISMA to Sentinel-2. Sentinel-2 is a multispectral broadband platform, but it includes five experimental red-edge and near-infrared narrowbands. The images were collected at the three main stages of crop development (vegetative, reproductive, maturity) in the 2020 growing season. A field campaign was conducted concurrently to sample grains over 60 × 60 m survey frames. The nutrient content of the grains was determined in a laboratory using inductively coupled plasma optical emission spectroscopy and a carbon, hydrogen, and nitrogen analyzer. Nutrient concentrations were predicted using Random Forest regression.
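As an illustration of this prediction step, the sketch below trains a Random Forest regressor on synthetic narrowband spectra; the band count, hyperparameters and data are assumptions, not the study's actual configuration:

```python
# Illustrative sketch: Random Forest regression from hyperspectral
# narrowband reflectances to grain nutrient concentration (mg/kg).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
n_samples, n_bands = 120, 230          # ~230 narrowbands, a PRISMA-like count
X = rng.uniform(0.0, 0.6, size=(n_samples, n_bands))        # reflectance spectra
y = 50 + 400 * X[:, 40] - 250 * X[:, 180] + rng.normal(0, 10, n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)

rmse = np.sqrt(mean_squared_error(y_te, pred))
print(f"R2 = {r2_score(y_te, pred):.2f}, RMSE = {rmse:.2f} mg/kg")

# Band importances hint at the spectral regions sensitive to the nutrient.
top = np.argsort(rf.feature_importances_)[::-1][:5]
print("Most informative band indices:", top)
```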
The accuracy of the PRISMA-based predictions (mg kg-1) of the nutrient composition of wheat ranged from a minimum of R2 = 0.49 (RMSE = 0.25) for N to a maximum of R2 = 0.74 (RMSE = 9.20) for Zn. For rice, the prediction accuracies ranged from R2 = 0.54 (RMSE = 448.01) for P to R2 = 0.73 (RMSE = 80.33) for S, and for corn from R2 = 0.51 (RMSE = 279.17) for P to R2 = 0.73 (RMSE = 99.92) for S. For soybean, the highest prediction accuracy was obtained for Ca (R2 = 0.88, RMSE = 241.99). Nutrient composition predictions for wheat using Sentinel-2 ranged from a minimum of R2 = 0.39 (RMSE = 12.17) for Fe and R2 = 0.4 (RMSE = 0.29) for N to a maximum of R2 = 0.76 (RMSE = 373.5) for K. Good prediction results were also obtained for rice, especially for Ca (R2 = 0.55, RMSE = 90.38), P (R2 = 0.51, RMSE = 235.4) and K (R2 = 0.50, RMSE = 288.7). The nutrient composition of soybean was predicted more accurately than that of the other crops, with R2 > 0.78 for all target nutrients. The sensitive spectral regions varied across the four investigated crops but were mainly concentrated in the visible and short-wave infrared (SWIR) regions for PRISMA data, and in the NIR, red-edge and SWIR regions for Sentinel-2.
Our study represents a first important step towards using remote sensing imagery for predicting the nutrient concentrations of crops. Future work will focus on testing the robustness of our predictions across larger geographic areas and several growing seasons.
In the framework of the United Nations (UN) 2030 Agenda for Sustainable Development and the New Urban Agenda (Habitat III), local and regional authorities require indicators at the intra-urban scale to design adequate policies in support of Sustainable Development Goal (SDG) 11: Make cities and human settlements inclusive, safe, resilient and sustainable. Nevertheless, the current literature mainly provides national, regional and city scale indicators. Earth Observation (EO) data have recently been recognized as an essential source of information for achieving the SDG 11 targets and measuring progress with respect to the SDG 11 indicators. However, the complexity of EO data handling and processing in SDG monitoring and reporting mechanisms makes direct integration into evidence-based decision-making processes difficult.
In order to fill such gaps, this work presents the development and implementation of a set of workflows aimed at the automatic computation of selected SDG 11 indicators at the intra-urban scale. A workflow is a process for generating knowledge from observation/simulation data and scientific models. The Virtual Earth Laboratory (VLab) framework (Santoro et al., 2020) was used as a cloud-based platform for sharing such scientific workflows and facilitating their invocation by urban planners or technical staff of public administrations without extensive expertise in the EO domain. VLab implements all the orchestration functionalities needed to automate the technical tasks required to execute a model on different computing infrastructures, minimizing interoperability requirements for both model developers and users.
A first workflow has been designed to extract essential variables for the study of urban ecosystems, namely the settlement (built-up) map and the population density map. The former can be obtained either from a semi-automatic Sentinel-2 data classification procedure or directly by downloading the available European Settlement Map from the Copernicus Land Monitoring Service; both maps are available at 10 m spatial resolution. For the second variable, a specific workflow was developed to generate a population density map on a fine grid (100 m × 100 m) from ancillary population data per census area (Aquilino et al., 2020).
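The dasymetric redistribution at the core of this workflow can be sketched as follows (toy arrays standing in for the census zones and the built-up mask; this is an illustration, not the actual implementation of Aquilino et al., 2020):

```python
# Minimal dasymetric sketch: redistribute census-zone population onto
# grid cells classified as built-up.
import numpy as np

zone_id  = np.array([[1, 1, 2, 2],
                     [1, 1, 2, 2]])              # census area per cell
built_up = np.array([[1, 0, 1, 1],
                     [1, 1, 0, 1]], dtype=bool)  # settlement mask (e.g. ESM)
population = {1: 900, 2: 600}                    # census population per zone

density = np.zeros(zone_id.shape, dtype=float)
for zid, pop in population.items():
    cells = (zone_id == zid) & built_up
    if cells.any():
        density[cells] = pop / cells.sum()       # people per built-up cell

print(density)
# Zone 1 population is spread over its 3 built-up cells (300 each),
# zone 2 over its 3 built-up cells (200 each).
```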
Additional workflows have been implemented for the computation of SDG 11.2.1, "Proportion of population that has convenient access to public transport", and SDG 11.3.1, "Ratio of land consumption rate to population growth rate." The output maps are generated on a regular grid at 100 m spatial resolution.
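For SDG 11.3.1, the standard UN-Habitat formulation divides the annualized land consumption rate by the annualized population growth rate, both computed as logarithmic rates; the numbers in this small worked example are illustrative:

```python
# Worked sketch of SDG 11.3.1: LCRPGR = land consumption rate /
# population growth rate, both as annualized logarithmic rates.
import math

def lcrpgr(urb_t1, urb_t2, pop_t1, pop_t2, years):
    lcr = math.log(urb_t2 / urb_t1) / years   # land consumption rate
    pgr = math.log(pop_t2 / pop_t1) / years   # population growth rate
    return lcr / pgr

# Example: built-up area grows from 120 to 132 km2 while population
# grows from 300k to 318k over 5 years.
print(f"LCRPGR = {lcrpgr(120, 132, 300_000, 318_000, 5):.2f}")  # ~1.64
```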
Ancillary inputs, such as population census data, the local public transport map and additional auxiliary data, need to be obtained from local authorities. The spread of open data web portals is promising for acquiring such data for many cities.
Building height information (e.g., from LiDAR data) is an optional input that, if available, allows a population density map to be generated with the improved approach suggested by Aquilino et al. (2021).
The workflows were validated on the three Italian cities of Bari, Bologna and Reggio Calabria.
To support reproducibility in other cities, the workflows accept a wide variety of input formats and geographical reference systems; GDAL/OGR standard formats are supported. An advanced version of the workflows is available for expert users who wish to customize the models' configuration parameters. In addition, the VLab framework provides a set of Web APIs that enable application developers to create dedicated Web applications based on models already available in VLab. By exploiting these functionalities, a dedicated web application will be developed as a tool for urban planners and policy-makers to make the integration of EO data into SDG measuring and monitoring operational for countries.
Aquilino, M.; Adamo, M.; Blonda, P.; Barbanente, A.; Tarantino, C. (2021). Improvement of a Dasymetric Method for Implementing Sustainable Development Goal 11 Indicators at an Intra-Urban Scale, Remote Sensing, Special Issue “Earth Observations for Sustainable Development Goals”, 13, 2835, https://doi.org/10.3390/rs13142835
Aquilino, M.; Tarantino, C.; Adamo, M.; Barbanente, A.; Blonda, P. (2020). Earth Observation for the Implementation of Sustainable Development Goal 11 Indicators at Local Scale: Monitoring of the Migrant Population Distribution, Remote Sensing, Special Issue "EO Solutions to Support Countries Implementing the SDGs", 12(6), 950, https://doi.org/10.3390/rs12060950
Santoro, M., Mazzetti, P., & Nativi, S. (2020). The VLab Framework: An Orchestrator Component to Support Data to Knowledge Transition. Remote Sensing, 12(11), 1795. https://doi.org/10.3390/rs12111795
The SDGs are a universal agenda to address the world's most pressing societal, environmental, and economic challenges. Robust monitoring mechanisms and timely, accurate and comprehensive data are essential to guide policies and decisions for successful implementation of the SDGs. Yet official statistics alone cannot provide all of the data needed to populate the SDG indicator framework. Along with EO data, citizen science, briefly defined as public participation in scientific research and knowledge production, offers a new solution and an untapped opportunity to complement traditional sources of data for monitoring progress towards the SDGs. The complementarity of citizen science and EO approaches for SDG monitoring has also been acknowledged in the recent literature. For example, Fraisl et al. (2020) carried out a systematic review of all SDG indicators and citizen science initiatives, demonstrating that citizen science data are already contributing, and could further contribute, to the monitoring of 33 per cent of the SDG indicators. As part of this review, Fraisl et al. also identified the overlap between contributions from citizen science and EO for SDG monitoring based on the mapping exercise undertaken by GEO (2017). The GEO analysis demonstrated that EO data can contribute to the monitoring of 29 SDG indicators, and Fraisl et al. showed that citizen science could support 24 of these 29 indicators, which demonstrates the complementarity of the two data sources. One specific tool integrating citizen science and EO approaches that could complement and enhance official statistics for monitoring several SDGs and targets is Picture Pile. Designed to be a generic and flexible tool, Picture Pile is a web-based and mobile application for ingesting imagery from satellites, orthophotos, unmanned aerial vehicles or geotagged photographs that can then be rapidly classified by volunteers. Picture Pile has the potential to contribute to the monitoring of fifteen SDG indicators covering areas such as deforestation, post-disaster damage assessment and identification of slums, among others, and can provide reference data for the training and validation of products derived from remote sensing.
This talk presents the potential offered by Picture Pile and other citizen science tools and initiatives to complement and enhance official statistics to monitor several SDGs and targets. Another example of the use of citizen science for SDG monitoring that will be highlighted in this talk is the Citizen Science for the SDGs project implemented in Ghana for monitoring SDG indicator 14.1.1b on plastic debris density. This project is a partnership between IIASA, the Ghana Statistical Service, the Ghana Environmental Protection Agency, UNEP, Ocean Conservancy, Earth Challenge, and others. The main achievement of the project is that citizen science data for monitoring beach litter have been integrated into the official SDG monitoring and reporting mechanisms of Ghana, which makes Ghana the first country to report on SDG indicator 14.1.1b and the first country using citizen science data for that purpose. Additionally, these data will serve as inputs to Ghana’s Ocean Plan, currently under development, as well as other relevant policies to address the marine litter problem. At the end of the talk, recommendations will be provided for how to enable partnerships and collaborations across data communities and ecosystems in order to bring citizen science and EO data into official statistics for SDG monitoring and reporting.
Coastal eutrophication is a global challenge that can result in harmful algal blooms, hypoxia, fish kills and other negative environmental impacts. To track the status of coastal eutrophication, Sustainable Development Goal 14 (as set by the United Nations General Assembly) includes an indicator, 14.1.1a, the Index of Coastal Eutrophication Potential. Indicator 14.1.1a monitors changes in eutrophication directly, by analysing nutrients, or indirectly, by analysing processes that are caused by or related to nutrient inputs, such as algal growth. One of the challenges for tracking global eutrophication is the lack of globally available and comparable nitrogen and phosphorus measurements for coastal areas. To address this gap, GEO Blue Planet, Esri and the UN Environment Programme have developed sub-indicators for the SDG 14.1.1a reporting methodology that use global satellite-derived chlorophyll-a products, as outlined in the United Nations Environment Programme's Global Manual on Measuring SDG 14.1.1, SDG 14.2.1 and SDG 14.5.1. These global indicators are part of a progressive monitoring approach that seeks to build a foundation of available data that countries can build upon as they develop capacity for reporting on regional and national satellite data and in situ data. In this talk, we will present these global indicators along with efforts to work with member countries to utilize the resulting data for decision making, and additional dashboards and visualizations being produced to assist with the identification of potential eutrophication hot spots and inform further analysis. The indicators include one derived from the Ocean Colour Climate Change Initiative project's global, 4 km spatial resolution monthly mean product, computed for each pixel within a country's EEZ, and one derived from the National Oceanic and Atmospheric Administration's VIIRS chlorophyll-a ratio anomaly product, produced globally at 2 km spatial resolution. The indicators will be compared to in situ data from member countries. Challenges to data access, product validation and other issues will be addressed.
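A simplified sketch of how such a chlorophyll-a anomaly sub-indicator could be computed is shown below; the 50 % anomaly threshold and the synthetic arrays are assumptions for illustration, not the operational GEO Blue Planet methodology:

```python
# Simplified sketch: compare a monthly mean chl-a field against a
# multi-year monthly climatology and report the share of a country's
# EEZ flagged as anomalously high.
import numpy as np

rng = np.random.default_rng(7)
climatology = rng.uniform(0.1, 2.0, size=(500, 500))      # mg m-3, same month
monthly     = climatology * rng.lognormal(0.0, 0.3, climatology.shape)
eez_mask    = np.ones(climatology.shape, dtype=bool)       # pixels inside EEZ

ratio = monthly / climatology
hotspot = (ratio > 1.5) & eez_mask         # >50 % above climatology (assumed)

pct = 100.0 * hotspot.sum() / eez_mask.sum()
print(f"{pct:.1f} % of EEZ pixels flagged as potential eutrophication hot spots")
```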
ADVANCED EARTH OBSERVATION SATELLITE TECHNOLOGY: AN INTEGRATED SYSTEM TO SUPPORT THE ACHIEVEMENT OF SUSTAINABLE DEVELOPMENT GOALS IN THE COLOMBIAN AMAZON
Quiñones, M.J.; Vissers, M.; Hoekman, D.; Kooij, B.; Luiken, R.; VanRooij, W.; Pratihast, A.K.; Murcia, U.; Gómez, L.A.; Cuchía, A.; Acosta, H.H.; Carvajal, H.E.; Gil, C.; Rojas, A.; Erazo, R.
In the frame of the ESA-supported "EOSAT 4 Sustainable Amazon" project, advanced Earth observation (EO) radar satellite technology was demonstrated to support the achievement of the Sustainable Development Goals (SDGs) for the Colombian Amazon. In close collaboration with Colombian governmental and non-governmental stakeholders working for the sustainable development of the Colombian Amazon, EO radar-based products were defined in terms of required thematic information and spatial and temporal resolution.
Standardized radar pre-processing follows automatic routines using in-house developed algorithms and commercial software such as Gamma and IDL. Data download, interferometric registration, multitemporal filtering and orthorectification precede thematic processing with in-house time series analysis algorithms such as SarSentry and SarFlood and baseline mapping algorithms such as SarEcomap and SarSoil. Thematic products include: 1) deforestation historical change time series 2007-2017, based on ALOS PALSAR data; 2) deforestation and degradation historical change time series 2017-2020, based on Sentinel-1 time series; 3) baseline ecosystem map 2017 with 40 vegetation structural classes, based on the classification of radar and optical data; 4) flooding dynamics maps and a flood frequency map, based on ALOS PALSAR ScanSAR data; 5) fire dynamics time series 2007-2020, based on data from the MODIS and VIIRS systems; 6) soil degradation map for the area of Florencia 2017-2020, based on a combined classification of ALOS PALSAR and Sentinel-1; 7) high resolution ecosystem mapping; 8) river border and island dynamics 2017-2020, based on Sentinel-1 time series analysis; 9) location of rock formations (tepuis), based on SRTM analysis; and 10) near real-time monitoring of deforestation and degradation in 2021. Products were presented to the users through an in-house developed web-based GIS platform, created especially for efficient exploration and integration of the data, including some analysis tools.
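As a generic illustration of the kind of backscatter time-series test underlying near real-time deforestation flagging (this is not the in-house SarSentry algorithm; the thresholds and window lengths are assumptions):

```python
# Generic SAR time-series change detection sketch: a pixel is flagged
# when its recent mean backscatter drops well below its stable baseline.
import numpy as np

def flag_deforestation(ts_db, n_baseline=20, n_recent=3, drop_db=3.0):
    """ts_db: 1-D array of gamma0 backscatter (dB) for one pixel, time-ordered."""
    baseline = ts_db[:n_baseline].mean()
    recent = ts_db[-n_recent:].mean()
    return (baseline - recent) > drop_db

rng = np.random.default_rng(1)
forest = rng.normal(-7.5, 0.4, 30)                  # stable forest backscatter
cleared = np.concatenate([forest[:25], rng.normal(-12.0, 0.6, 5)])

print(flag_deforestation(forest))    # False: no change
print(flag_deforestation(cleared))   # True: abrupt backscatter drop
```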
Product evaluation and validation were done by both producers and users. Validation included the use of georeferenced aerial photographs acquired by the users and comparison with existing data sets, such as thematic maps and deforestation information from the Biomass, GLAD-2 and RADD systems. In addition, the usefulness of the data was evaluated using a dedicated questionnaire. In general, all products were considered to be of excellent quality in terms of content and spatial and temporal resolution for monitoring the environmental conditions of the Colombian Amazon. Some unique thematic information, such as forest degradation, flood frequency, soil degradation and high-resolution ecosystem mapping, exceeded the partners' expectations and was considered unique and necessary for achieving the SDGs. In general, the EO products created in the frame of this project can support monitoring of the implementation of the Sustainable Development Goals, specifically: 1) creation of new protected areas; 2) improved management of protected areas; 3) support for land use plans and life plans of indigenous communities; 4) climate change monitoring; 5) sustainable forest management; 6) land and forest restoration projects; 7) support for productive systems and value chains for green business; and, last but not least, communication and sensitization on the environmental crisis to Colombian civil society.
An integrated monitoring system was proposed and evaluated by the users in order to integrate the developed EO-based products into the different activities supporting the achievement of the SDGs. The framework consists of three main components: 1) data production; 2) data integration; and 3) data communication.
ESA and JAXA have developed a cooperation in the context of observation, monitoring and study of the Earth's surface and atmosphere from space, with a view to working together on the use of synthetic aperture radar (SAR) satellites in the fields of Earth science and Earth observation applications.
ESA and JAXA have agreed on a cooperation for SAR. Since both agencies have developed new-generation L-band SAR missions, they recognize the value of sharing their substantial experience in the operational use of L-band SAR and intend to increase the benefits of synergies in the use of C- and L-band spaceborne assets. To advance this cooperation, ESA and JAXA agreed to share the existing available SAR data, from Sentinel-1 in the Copernicus programme and from JAXA's ALOS-2, to validate the value of C-band and L-band data over areas of mutual interest.
At present, both agencies jointly work on polar area monitoring, forest and wetland mapping, ocean monitoring, snow water equivalent, soil moisture, agriculture and GHG monitoring, urban monitoring, natural and urban forest monitoring, monitoring of geohazards, and joint validation and algorithm development for SAR.
In this session, invited speakers are expected to report on ongoing and planned SAR satellite missions, including ALOS-2, ALOS-4, Sentinel-1 and ROSE-L. Invited speakers are also expected to report early joint science and application results using Sentinel-1 and ALOS-2 together with ground-based observation data.
During the yearlong MOSAiC expedition (2019-2020), a significant number of synthetic aperture radar (SAR) images were collected from different sensors and in different modes. Here, we investigate the change in polarimetric features over sea ice from freeze-up to the advanced melt season using fully polarimetric L-band images from the ALOS-2 PALSAR-2 and C-band images from the RADARSAT-2 satellite SAR sensors. The sea ice is separated into four different types: (1) leads, (2) young ice (YI), (3) level ice (second-year ice (SYI) and first-year ice (FYI)), and (4) deformed sea ice.
Data and Method
R/V Polarstern drifted with two different floes; here we focus on the first drift, which took place between 1 October 2019 and 31 July 2020. Areas of all four ice types are observed in the vicinity of R/V Polarstern and are included whenever possible in the yearlong time series of sea ice types. To densify the time series, images not containing the ship are also included.
The SAR images were analyzed for seasonal changes in backscatter intensity, and the scattering mechanisms were used to further investigate the separability of the different ice types, with a particular focus on separating the high-backscatter older sea ice, typically SYI, from YI and FYI. Both the L- and C-band images were radiometrically calibrated using the included metadata, and a 9 × 9-pixel median filter was applied to reduce noise before extraction of the polarimetric features. The normalized radar backscatter for the HH, HV and VV channels was extracted together with the polarization difference (PD, VV-HH on a linear scale), the co-polarized and cross-polarized ratios (VV/HH and HH/HV) and the circular correlation coefficient. The images were incidence-angle corrected to 35° using the method and slope values outlined in Mahmud et al. (2018), and the different sea ice types were identified using manually drawn regions of interest (ROIs). Data from helicopter-borne instruments as well as in-situ data were used to evaluate the different sea ice types. The results are also compared to images collected during the Norwegian Young Ice (N-ICE) campaign in January-June 2015 (Johansson et al., 2017, 2018).
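A minimal sketch of the per-pixel feature extraction is given below; the linear dB-per-degree slope is a placeholder (the study takes its slope values from Mahmud et al., 2018), and the polarization difference follows the linear-scale definition above:

```python
# Sketch: incidence-angle normalization to a 35 deg reference and
# polarization difference (PD) on a linear (power) scale.
import numpy as np

SLOPE_DB_PER_DEG = -0.2   # placeholder slope; ice-type dependent in practice
REF_ANGLE = 35.0

def correct_to_reference(sigma0_db, inc_deg):
    """Project sigma0 (dB) observed at inc_deg to the 35 deg reference."""
    return sigma0_db - SLOPE_DB_PER_DEG * (inc_deg - REF_ANGLE)

def polarization_difference(vv_db, hh_db):
    """PD = VV - HH on a linear scale."""
    return 10 ** (vv_db / 10) - 10 ** (hh_db / 10)

vv, hh, theta = -14.0, -17.5, 41.0   # illustrative pixel values
vv_c = correct_to_reference(vv, theta)
hh_c = correct_to_reference(hh, theta)
print(f"PD = {polarization_difference(vv_c, hh_c):.4f} (linear units)")
```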
Results and discussion
Analyzing the backscatter values, several observations can be made: (i) as expected, there is a larger difference in the co-polarization channels between smooth and deformed ice at L-band compared to C-band during the freezing season, though (ii) this separation is significantly reduced during the early melt season. Moreover, we observe (iii) larger differences between young ice and deformed ice backscatter values in the L-band data compared to the C-band data, and (iv) linear kinematic features (LKFs) are easier to detect in the L-band images. Throughout the year, the HV backscatter values show larger differences between level and deformed sea ice at L-band than at C-band. The L-band data variability is significantly smaller for level sea ice than for deformed sea ice, and this variability was also smaller than that observed for the overlapping C-band data. Thus, L-band data could be more suitable for reliably separating deformed from level sea ice areas, as well as for investigating LKFs.
Within the L-band images, a noticeable shift towards higher backscatter values in the early melt season compared to the freezing season is observed for all polarimetric channels, whereas no such strong trend is found in the C-band data. The change in backscatter values is first noticeable in the C-band images and only later in the L-band images, probably because of the bands' different penetration depths and volume scattering sensitivities. This change also results in a smaller backscatter variability for all polarimetric channels.
PD shows a seasonal dependency for smooth and deformed sea ice in the L-band data. For L-band, the PD variability for all ice classes was reasonably small during the freezing season, with a significant shift towards larger variability during the early melt season, though during this period the mean PD values remained similar. However, once temperatures rose above 0°C, both the variability and the mean values increased significantly. For the C-band data, no such trend is observed. However, at C-band the absolute PD values show significantly higher mean values for the thinner sea ice areas, regardless of whether these areas have low or high backscatter, and these areas also have low standard deviations. The high-backscatter areas of SYI differ significantly, with PD values that have a high standard deviation but a low mean value. Using PD, we can therefore separate the young ice types from the surrounding sea ice and the SYI. PD is also suitable for separating level from deformed sea ice areas during freeze-up, as the variability is much higher for deformed than for level ice areas. To confirm the roughness level of the different ice types, the circular correlation coefficient (CCC) was calculated and compared to airborne laser scanner (ALS) data, showing good separability between deformed and level sea ice types. However, CCC is sensitive to the signal-to-noise level, and care must be taken when analyzing the results. PD, on the other hand, has a small incidence angle dependency and a low sensitivity to the signal-to-noise ratio (SNR).
We observe that fully polarimetric C- and L-band data are complementary to one another and that through their slightly different dependencies on season and sea ice types, a combination of the two frequencies can aid improved sea ice classification. The availability of a high spatial and temporal resolution dataset combined with in-situ information offered during the MOSAiC expedition ensures that seasonal changes can be fully explored.
References
Johansson A.M., C. Brekke, G. Spreen, J. King, 2018, X-, C-, and L-band SAR signatures of newly formed sea ice in Arctic leads during winter and spring, Remote Sensing of Environment, 204: 162-180
Johansson A.M., King J.A., Doulgeris A.P., Gerland S., Singha S., Spreen G., Busche T., 2017, Combined observations of Arctic sea ice with near-coincident co-located X-band, C-band and L-band SAR satellite remote sensing and helicopter-borne measurements, JGR-Oceans, 122: 669-691
M. S. Mahmud, T. Geldsetzer, S. E. L. Howell, J. J. Yackel, V. Nandan and R. K. Scharien, 2018, Incidence Angle Dependence of HH-Polarized C- and L-Band Wintertime Backscatter Over Arctic Sea Ice, IEEE Transactions on Geoscience and Remote Sensing, 56(11): 6686-6698
Strong winds induced by typhoons and hurricanes cause disasters and have a great impact on social activities; there is therefore an increasing demand for their monitoring and prediction. Synthetic Aperture Radar (SAR) is the only satellite sensor capable of measuring sea surface winds at high spatial resolution, O(100 m). Joint research between the Japan Aerospace Exploration Agency (JAXA) and the Meteorological Research Institute (MRI) has been launched on wind speed retrieval by the Japanese L-band SAR, the Phased Array type L-band Synthetic Aperture Radar-2 (PALSAR-2), and its use for operational weather forecasting under typhoon conditions. The purpose of the research is to verify the effect of assimilating the SAR-retrieved winds on typhoon forecasting. Typhoon/hurricane observations are being carried out under the joint research by programming PALSAR-2 observations based on the predicted courses of typhoons and hurricanes. So far, simultaneous observations with the National Oceanic and Atmospheric Administration (NOAA)'s airborne Stepped Frequency Microwave Radiometer (SFMR) have been made for five hurricane cases. Based on these data, a method for estimating the wind structure of hurricanes from PALSAR-2 was developed.
The 3 km average PALSAR-2 normalized radar cross section (σ0) and the incidence angle were collocated with the SFMR-measured ocean surface wind speed and rain rate. It was confirmed that the incidence angle dependence was small for the cross-polarized (HV) σ0, so we developed a model function for strong winds for the HV polarization. In order to investigate the dependency of σ0 on wind speed and incidence angle, the match-ups were classified into bins of 2 m/s wind speed and 5° incidence angle. Any data whose deviation exceeded 2σ within a bin were excluded.
The relationship between the PALSAR-2 HV σ0 and the ocean surface wind speeds measured by SFMR showed that σ0 increased with wind speed up to about 55 m/s. Based on the method proposed by Hwang et al. (2015), a geophysical model function (GMF) was constructed as a function of wind speed and incidence angle. The wind speed was then inversely estimated from the matchup data (HV σ0) and compared with the SFMR wind speed. The bias and RMSE are -0.2 m/s and 4.1 m/s, respectively, indicating that wind speeds up to about 50 m/s or more can be retrieved independently of the incidence angle.
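The retrieval step can be illustrated with the following sketch; the GMF form below is a hypothetical stand-in (the study's GMF follows Hwang et al., 2015), used only to show how wind speed is obtained from HV σ0 by numerical inversion:

```python
# Illustrative wind retrieval: invert a (hypothetical) GMF relating HV
# sigma0 (dB) to wind speed and incidence angle.
import numpy as np
from scipy.optimize import brentq

def gmf_hv_db(wind_ms, inc_deg):
    """Hypothetical stand-in GMF, monotonic in wind over 0-80 m/s."""
    return -36.0 + 0.35 * wind_ms - 0.002 * wind_ms**2 - 0.05 * (inc_deg - 30.0)

def retrieve_wind(sigma0_db, inc_deg, lo=0.0, hi=80.0):
    """Solve gmf(w, theta) = sigma0 for the wind speed w (m/s)."""
    f = lambda w: gmf_hv_db(w, inc_deg) - sigma0_db
    return brentq(f, lo, hi)

obs_db, theta = -22.0, 35.0     # illustrative observation
print(f"Retrieved wind: {retrieve_wind(obs_db, theta):.1f} m/s")
```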
The derived GMF was applied to the PALSAR-2 HV image of Hurricane Laura to calculate the ocean surface wind speed, and the comparison was performed along the SFMR observation tracks. Although there are some biased differences, fluctuation trends, including a maximum wind speed of about 60 m/s and sudden changes in wind speed near the eyewall, are captured. The derived wind speed structure of the hurricane was compared with the best track data. Omnidirectional surface wind profiles as a function of distance from the hurricane center for the four geographical quadrants (NW, SW, SE, and NE) were calculated from the PALSAR-2-derived wind speeds and compared with the wind speed radii at three levels (34, 50 and 64 knots) obtained from the best track data. The wind speed radius is smaller in the NW and SW than in the NE and SE quadrants, indicating the same spatial asymmetry as the best track. In addition, the absolute values of the wind speed radii and their decrease with distance are approximately the same.
A comparison of hurricane sea surface winds calculated from L-band PALSAR-2 and C-band Sentinel-1 was performed. The Sentinel-1 wind product was obtained from the Earth Observation Data Access (EODA) (https://eoda.cls.fr/client/oceano/) operated by Collecte Localisation Satellites (CLS). Among the typhoons and hurricanes observed by PALSAR-2, there were two cases where wind speed calculation was also carried out by Sentinel-1.
The first case is Hurricane Douglas, observed by Sentinel-1 at 3:59 UTC on July 25, 2020, and by PALSAR-2 at 9:41 on July 26, approximately 1 day and 6 hours later. According to the best track data, the hurricane weakened during this period, with the maximum sustained wind speed dropping from about 49 m/s to about 40 m/s. The maximum wind speeds from SAR were 52.8 m/s and 42.9 m/s, respectively, which is consistent with the best track data in terms of both the decreasing tendency and its magnitude.
Another example is Hurricane Laura, which was observed by PALSAR-2 at 17:49 on August 26, 2020, and approximately 6 hours later by Sentinel-1. According to the best track data, the hurricane strengthened from about 60 m/s to 69 m/s in those 6 hours. The maximum wind speeds from SAR are 65 m/s and 76 m/s, which show the same strengthening tendency as the best track data, although the absolute values are larger.
It was confirmed that the L-band HV σ0 is related to wind speed up to about 55 m/s in the data used in the present study, so that the wind speed can be estimated. On the other hand, the stronger the wind, the lower the rate of increase of σ0 with wind speed, so the radiometric accuracy of the SAR product has a strong impact on the wind speed estimation, especially under extreme wind conditions. The comparison with the C-band Sentinel-1 product showed that increasing the observation opportunities could enable detailed detection of the temporal evolution of hurricanes at high spatial resolution.
As a next step, the effect of these data on typhoon forecasting is expected to be verified through data assimilation experiments. In addition, it is necessary to improve the detection accuracy by accumulating data and to clarify band-dependent characteristics such as the maximum detectable wind speed and rain contamination.
The complementarity between C- and L-band Synthetic Aperture Radar (SAR) images for the separation of sea ice types and the identification of ice structures such as ridges or ice floe edges was first systematically analyzed when the first airborne multi-frequency data over sea ice were acquired in the late 1980s. Since then, differences in scattering characteristics and the advantages of each band for ice classification and ice parameter retrieval have been analyzed and presented in various studies. Different supervised and unsupervised classification methods have also been applied to combinations of C- and L-band multi-polarization images, showing improvements in classification accuracy. However, in contrast to data from the various C-band satellite missions, L-band images have not been available for operational ice mapping on a regular basis, which means that the gain from adding L-band to the operational interpretation of ice conditions has not yet been evaluated in detail. For the identification of icebergs in open water and in sea ice, comparisons of C- and L-band images were lacking.
In a recent project of the European Space Agency (ESA), supported by the Japan Aerospace Exploration Agency (JAXA), the synergies between C- and L-band SAR missions for the retrieval of ice conditions and iceberg occurrences have been explored. The project aims to better define the benefits of future SAR missions working together at C- and L-band, e.g. as part of a multi-agency international constellation of radar missions in the post-2026 time frame. The project team involves researchers from different universities and research institutes, and analysts from the operational ice services in Canada (CIS), Denmark (DMI), and Norway (Met.No.). Input is also provided by the International Ice Patrol (IIP) and by a task team of the International Ice Charting Working Group (IICWG). The work comprises a literature study (e.g. [1]) and comprehensive analyses of Sentinel-1 and PALSAR-2 images acquired over different test sites in the Arctic: the Labrador Sea, Baffin Bay, Lincoln Sea, Fram Strait, Belgica Bank and the Cape Farewell region. All in all, more than 1000 PALSAR-2 images have been acquired in WBD and FBD mode, many of which were interpreted and analysed stand-alone. About 200 images have sufficient spatial overlap with, and a sufficiently short acquisition time difference from, Sentinel-1 images in EWS or IWS mode over the Lincoln Sea, Fram Strait and Belgica Bank. For the Labrador Sea and Baffin Bay, 59 PALSAR-2 WBD images could be compared to Radarsat-2 SCWA images. In our presentation, we give an overview of the results achieved during the project. It has to be noted that the pros and cons determined for each band also depend on the respective image properties (spatial resolution, number of looks, noise level, local incidence angle). The experts from the operational centres found that L-band is superior for earlier detection of fractures and fast ice breakup, for easier FY/MY discrimination during the melt season, for recognizing more structures in the ice, and for better discrimination of ridges. The latter, however, requires Stripmap mode and is not possible in the coarser ScanSAR mode. Because of the very low backscattering level of young thin ice at L-band, it can be better distinguished from thicker ice. The weaknesses of L-band are the low separability of thin ice types from one another and from open water; the latter affects the determination of ice concentration. At L-band, multi-year ice floes appear less prominent relative to first-year ice under freezing conditions. L-band is less sensitive to wind and sea state than C- or X-band, which is important, e.g., for the Cape Farewell and Labrador Sea regions. Specifically for icebergs located in open water, it was observed that the detection rate depends on sea state and decreases in melting conditions. L-band seems to be better than C-band at detecting icebergs in rough seas. Icebergs inside sea ice are easier to identify in L-band HV images than in Sentinel-1 HV images. Another topic in the project was to test classification and detection algorithms, which also requires aligning PALSAR-2 and Sentinel-1 images that correspond to each other but were acquired with a temporal gap. This was mainly carried out by the university partners. For moderately dynamic ice conditions with distinct ice structures, the alignment of C- and L-band image pairs was possible for time separations of hours up to, in a very stable ice cover, even a few days, although not always over the full overlap area between the C- and L-band images.
For some ice regimes (e.g. South Greenland, the Labrador Sea) and during the melting season, alignment is extremely difficult or impossible. Experiments with supervised classification in a decision tree reveal the specific ice conditions for which the additional use of L-band is beneficial. Investigations of icebergs captured in fast ice show that they appear brighter relative to the background (the ice) at L-band than at C-band, and that icebergs and sea ice are in general difficult to distinguish at C-band. At L-band, internal reflections in the interior of an iceberg increase the backscattering and may cause ghost reflections next to icebergs. The conclusion is that L-band SAR imagery clearly provides an advantage for ice charting and iceberg detection. Requirements are regular acquisitions with sufficient coverage of the sea ice regions monitored by the operational centres and timely availability of the images. Acquisitions in a tandem mode, i.e., with a C- and L-band SAR satellite pair collecting data with the smallest possible time gap between them, would particularly benefit automated dual-frequency classification and detection.
The following members of the project team contributed to the investigations that will be reported: Melanie Lacelle, Tom Zagon, and Benjamin Deschamps from CIS, Keld Qvistgaard from DMI, Nick Hughes from Met.No., Mike Hicks from IIP, Leif Eriksson, Anders Hildeman from Chalmers University of Technology, Denis Demchev from Nansen Environmental and Remote Sensing Center, and Johannes Lohse, Laust Færch, and Anthony P. Doulgeris from the Arctic University of Norway / Tromsø.
[1] Dierking, W., Synergistic Use of L- and C-Band SAR Satellites for Sea Ice Monitoring, Proceedings 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, DOI: 10.1109/IGARSS47720.2021.9554359
In this paper, we present recent results of volcano monitoring using the Advanced Land Observing Satellite-2 (ALOS-2) and Sentinel-1 satellites. ALOS-2 carries an L-band synthetic aperture radar (SAR) named PALSAR-2 and enables quick disaster response at high resolution (1 × 3 m in Spotlight mode and 3 to 10 m in Stripmap mode). The Sentinel-1A/B satellites carry a C-band SAR and focus on periodic observations (every 6 or 12 days) at medium spatial resolution (20 m in Interferometric Wide mode).
Mount Nyiragongo, located in the eastern Democratic Republic of the Congo, erupted in May 2021. During the eruption, we estimated quasi-vertical and quasi-east-west displacements around the volcano by multi-angle interferometric SAR (InSAR) analysis using data from ascending and descending orbits. The results from both ALOS-2 and Sentinel-1 data implied the occurrence of a dyke intrusion by underground magma, but the two results had different characteristics: Sentinel-1 (C-band) showed more fringes, corresponding to its shorter wavelength and higher sensitivity to small displacements, while ALOS-2 (L-band) showed higher coherence in largely displaced and highly vegetated areas, as the longer wavelength has a higher penetration capability.
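The multi-angle decomposition can be sketched as a small linear inversion; the incidence angles and displacement values below are illustrative, not the actual ALOS-2/Sentinel-1 acquisition geometry:

```python
# Sketch: combine ascending and descending LOS displacements to solve
# for quasi-east-west and quasi-vertical motion, neglecting the north
# component to which near-polar SAR geometries are largely insensitive.
import numpy as np

def decompose(d_asc, d_dsc, inc_asc_deg, inc_dsc_deg):
    """Solve [dE, dU] from two LOS observations (positive towards satellite)."""
    ia, id_ = np.radians(inc_asc_deg), np.radians(inc_dsc_deg)
    # LOS unit vectors projected onto (east, up); the east sign flips
    # between right-looking ascending and descending passes.
    A = np.array([[-np.sin(ia), np.cos(ia)],
                  [ np.sin(id_), np.cos(id_)]])
    return np.linalg.solve(A, np.array([d_asc, d_dsc]))

dE, dU = decompose(d_asc=0.12, d_dsc=-0.05, inc_asc_deg=39.0, inc_dsc_deg=36.0)
print(f"quasi-east-west: {dE:+.3f} m, quasi-vertical: {dU:+.3f} m")
```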
Kilauea Volcano on the Big Island of Hawaii began erupting in May 2018 and caused fissures in residential areas. We performed InSAR, multiple aperture interferometry (MAI), and polarimetric analysis using a series of ALOS-2 data. The displacements derived from InSAR and MAI of Stripmap and ScanSAR data revealed a dyke intrusion in the East Rift Zone and subsidence in Kilauea Caldera. Using polarimetric HH and HV data, we also created a map of the lava flow in Leilani Estates. Moreover, a series of Spotlight mode data captured the detailed process of the collapse of Halemaumau Crater in Kilauea Caldera.
In conclusion, multi-frequency (L-/C-band), multi-angle (ascending/descending), multi-mode, and multi-polarization SAR data provide varied information on tectonic and surface changes to better understand volcanic activities.
Land use change and land management account for approximately 14% of total global anthropogenic CO2 emissions, predominantly due to forest fluxes, but with an uncertainty of around ±50% this is the most uncertain term in the carbon budget [1,2]. The Land Use, Land Use Change and Forestry (LULUCF) sector is unique because it not only includes anthropogenic greenhouse gas (GHG) emissions but also considers anthropogenic sinks, such as removals from afforestation/reforestation and, to different extents, forest conservation on lands considered to be "managed". The numerous approaches to estimating LULUCF each have their own scope, intended use, input datasets and methods. A recent analysis found a difference of ~5.5 Gt CO2 yr-1 between national GHG inventories (NGHGI) and global models [3]. This difference can largely be explained by the extent to which each approach considers the forest sink to be "managed" and thus anthropogenic [3].
It has been hoped that Earth Observation-based estimates might reduce the uncertainty in the LULUCF flux. They can support climate policy by providing local, spatial detail to aid countries' reporting and verification of their NGHGI, as well as potentially providing a benchmark to evaluate the land sink from global models in a globally consistent manner. However, as noted in the (draft) IPCC 6th Assessment Report, there are large differences between CO2 flux estimates based on activity data of forest cover loss and gain from global Earth Observation data [4] and other methods [1-3]. The comparison in the draft IPCC 6th Assessment Report showed that, globally, Earth Observation-based estimates [4] find a considerable global forest sink of -6.7 Gt CO2 yr-1 during the period 2001-2019 in non-intact (i.e. managed) forests. In contrast, the global LULUCF flux is estimated to be a small net sink of -1.1 ± 1.0 Gt CO2 yr-1 in NGHGIs [3] and a considerable net source of 5.7 ± 2.6 Gt CO2 yr-1 in global bookkeeping models [2] for the period 2010-2019. This Earth Observation result therefore increases the uncertainty in the LULUCF net flux, as noted in the (draft) IPCC 6th Assessment Report. Understanding the reasons for the differences is crucial if Earth Observation is to be used to support countries' inventories and national/global policy.
Here we conduct a detailed comparison of the methods and datasets used to produce inventories and the Earth Observation-based estimates of forest-related carbon fluxes [4]. We aim to (i) quantify the differences between estimates of forest-related GHG fluxes based on Earth Observation data and NGHGI; (ii) provide an understanding of the potential reasons for the differences on a (sub)country-level scale; and (iii) provide recommendations for policy makers and inventory compilers on the best approach to using global Earth Observation within reporting and verification.
We use different land cover masks to compare fluxes from the Earth Observation study [4] across equivalent areas of forest that could be considered managed according to national inventories. Results show the Earth Observation-based estimates of the non-intact forest flux to be a large net carbon sink, whilst all other methods suggest the LULUCF sector to be a small net source on a global scale, with similar findings regionally. We have carried out an initial detailed analysis focusing on Brazil and the Amazonian biome, where in-country remote sensing data are available at high spatial and temporal resolution and are used during the compilation of the NGHGI. We have identified several reasons for the differences and discuss implications for improving methods as well as comparability between Earth Observation approaches and inventories.
References:
[1] Jia et al., Chapter 2: Land-Climate Interactions. In: Climate Change and Land: an IPCC special report on climate change, desertification, land degradation, sustainable land management, food security and greenhouse gas fluxes in terrestrial ecosystems. Editors: Shukla et al., 2019.
[2] Friedlingstein et al. Global Carbon Budget 2021. Earth Syst. Sci. Data Discuss. (preprint), https://doi.org/10.5194/essd-2021-386, in review, 2021.
[3] Grassi, G. et al. Critical adjustment of land mitigation pathways for assessing countries' climate progress. Nature Climate Change 11, 425-434 (2021). https://doi.org/10.1038/s41558-021-01033-6
[4] Harris, N.L. et al. Global maps of twenty-first century forest carbon fluxes. Nat. Clim. Chang. 11, 234-240 (2021). https://doi.org/10.1038/s41558-020-00976-6
Biomass, and especially forest biomass, plays a crucial role in sequestering and storing carbon and is a key component of both national Greenhouse Gas (GHG) inventories and the global carbon budget. Accordingly, mapping aboveground biomass is a priority of space agencies such as NASA, ESA and JAXA, as highlighted by several new and upcoming satellite missions, including GEDI, ICESat-2, BIOMASS, ALOS-2, ALOS-4 and NISAR. Using satellite biomass products seems a natural progression for country Parties when preparing their reports of GHG emissions and removals from the forest sector to the United Nations Framework Convention on Climate Change (UNFCCC), particularly considering the new 2019 Refinement to the 2006 IPCC Guidelines, which includes generic guidance on the use of biomass density maps for GHG inventories. However, the availability of multiple satellite products can be confusing to policy users and national technical teams. The Earth Observation (EO) global biomass monitoring community recognizes that widespread uptake of these biomass products requires their differences to be addressed, their accuracies known, and associated metadata provided to derive estimates compliant with the IPCC guidelines on bias and uncertainty. Furthermore, products need to be flexible enough to adhere to the land representation according to national definitions. A dedicated international effort coordinated through the Committee on Earth Observation Satellites (CEOS) and the Global Forest Observations Initiative (GFOI) R&D component is addressing some of the existing limitations that jeopardize the successful uptake of biomass products by the UNFCCC, including by national GHG inventory teams. The objectives of the CEOS Agriculture, Forestry and Other Land Use (AFOLU) group, and specifically its Biomass Harmonization Team, are to develop i) "best available" harmonized global forest biomass estimates from the next generation of biomass maps as input to the Global Stocktake (abstract by Duncanson et al.) and ii) examples of the practical implementation of the 2019 Refinement to the 2006 IPCC Guidelines to enhance the uptake of these maps by countries in their reporting to the UNFCCC.
Here we demonstrate that these two objectives can only be achieved if the disconnect between producers and users is addressed and an interface for collaboration between EO global biomass monitoring, GHG inventory, and National Forest Inventory (NFI) experts is created. For the first objective, NFI plot data together with national expertise are required to understand and quantify the accuracy and usefulness of the maps. To achieve both objectives, we show how a collaborative interface was created between CEOS agencies and national teams from several countries (e.g., Japan, Paraguay, Peru, the Solomon Islands, Wales UK) and emphasise the preparatory work required for an effective dialogue between the different groups. First, it is necessary to recognize that only individual countries, given their national circumstances, can determine whether satellite-based data and derived products developed over large areas are suitable for use in national GHG inventories. To determine needs and requirements regarding the potential use of space-based biomass data, we analyzed the information available in national submissions to the UNFCCC and in the reports from the technical review process, in particular the sections covering areas for future technical improvement. The second step is to understand the IPCC Guidelines: which IPCC variables biomass information is needed for, what methods are used and assumptions made by national teams, and possible procedures to derive those estimates with associated bias and uncertainty according to the Guidelines. Finally, a mapping exercise between variables, areas for improvement of the estimates for those variables, and the characteristics of the available products supports the discussion of how the data need to be handled and presented following IPCC guidance and principles. We conclude with a demonstration of how this strategy and the interface between CEOS agencies and national technical teams, led by SilvaCarbon, provided the first examples of the successful uptake of maps in reporting to the UNFCCC and of the practical implementation of the 2019 IPCC Refinement.
Numerous global and regional biomass maps have been produced in recent years by multiple international programs using a combination of Earth observation, ground, and airborne data. Among these efforts, ESA's Climate Change Initiative (CCI) Biomass project has released global biomass maps for four epochs. Recent and upcoming space missions even have biomass maps as dedicated target products; examples include the ESA Biomass mission (P-band SAR) and the NASA Global Ecosystem Dynamics Investigation (GEDI) mission (waveform lidar). The potential usefulness of such maps for greenhouse gas (GHG) reporting has been acknowledged by the Intergovernmental Panel on Climate Change (IPCC) in the form of new guidance on their use for national GHG inventories.
A common denominator of all the maps published to date, possibly with the exception of some maps produced by the GEDI mission, is that they lack the metadata required to estimate the uncertainty of map-based estimates of biomass or carbon stock for arbitrary geographical regions. As a consequence, estimates obtained directly from these maps fail to comply with the IPCC good practice guidelines, a serious shortcoming.
Biomass maps may be used in their existing forms to enhance estimates for any region of interest only if there already exists a ground-based inventory, such as a national forest inventory program, that has adopted a probabilistic field sampling design. In this case, the value of the maps as auxiliary data can be realized in the form of increased precision of the estimates, which can be substantial. However, many countries that find global biomass maps attractive for their efforts to estimate biomass do not have extensive ground sampling programs.
In this study, part of the ESA "Sentinel-1 for Science Amazonas" project, we demonstrate our inability to profit from an existing global biomass map, the ESA CCI 2017 biomass map, when estimating carbon stock across a 4.1 million square kilometer study region in the Brazilian Amazon biome. To validate the map, we independently estimated the mean carbon stock per unit area using an existing random sample of 501 airborne lidar transects, each approximately 12 km long, collected by the Brazilian EBA project ("Estimativa de Biomassa na Amazônia"), a coincident sample of 224 field plots, and hybrid estimators. A standard error and a confidence interval were also estimated for the mean. Following the terminology of the Global Forest Observations Initiative's "Methods and Guidance Document", we considered this a "greater quality" estimate and, therefore, useful for validation purposes. We compared the lidar-based estimate to a mean estimate obtained from the CCI 2017 biomass map via "pixel counting", i.e., averaging the biomass across all map units and applying a conversion factor to obtain a mean carbon stock estimate. Despite a substantial difference between the lidar-based and map-based estimates, we were unable to conclude that this difference was statistically significant, because the map-based estimate is just a point estimate without any accompanying information on its uncertainty.
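The contrast between the two kinds of estimate can be sketched as follows (synthetic numbers only): the sample of lidar transects supports a standard error and confidence interval, whereas pixel counting on the map yields a point estimate with no uncertainty, which is why no significance test is possible:

```python
# Sketch: design-based estimate from a random sample vs. "pixel counting"
# on a biomass map. All values are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(3)

# Sample-based ("greater quality") estimate from n random lidar transects.
transect_c = rng.normal(95.0, 20.0, size=501)    # Mg C/ha per transect
n = transect_c.size
mean_c = transect_c.mean()
se_c = transect_c.std(ddof=1) / np.sqrt(n)       # SE for a random sample
print(f"Lidar estimate: {mean_c:.1f} +/- {1.96 * se_c:.1f} Mg C/ha (95% CI)")

# Map-based estimate: average all map pixels, apply a carbon fraction.
map_agb = rng.normal(210.0, 60.0, size=1_000_000)   # stand-in AGB pixels
CARBON_FRACTION = 0.47                               # assumed conversion
map_c = map_agb.mean() * CARBON_FRACTION
print(f"Map 'pixel counting' estimate: {map_c:.1f} Mg C/ha (no uncertainty)")
```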
Further, we discuss the nature of the metadata that map producers must deliver with their map products to facilitate rigorous statistical inference based solely on the maps. We also illustrate the limitations of the uncertainty information that some current maps provide and argue that this information is insufficient to such a degree that uncertainty will be greatly underestimated.
National forest inventories (NFI) provide forest-related biomass and carbon information for country forest monitoring and greenhouse gas (GHG) accounting systems. While many tropical countries are actively working on their NFIs, many still struggle to establish a complete national inventory or to guarantee frequent updates, owing to budget constraints, inaccessibility of areas, institutional circumstances, developing capacities, internal conflicts, lack of initial information on forest resources, and the like. This has limited the usefulness of their national aboveground biomass (AGB) estimates and ultimately hinders the accuracy, completeness and consistency of reporting of country GHG emissions and removals under the international climate frameworks, as well as the formulation of domestic mitigation targets.
At the same time, there has been significant progress in developing large-area biomass density maps that characterize the distribution of forest carbon stocks around the globe. These efforts are further developed through dedicated space-based biomass missions that increasingly provide open access and continuous coarse-scale biomass information. The extent to which global space-based estimates of aboveground biomass (AGB) can support national forest biomass monitoring and GHG accounting is still under investigation. In the presentation we cover the approaches and assess whether the use of a global biomass map as a source of auxiliary information can produce a gain in precision of sub-national AGB estimates. For that purpose, we made use of model-assisted estimators that best accommodated the country NFI sampling designs, and we explored hybrid inferential techniques to account for additional sources of uncertainty associated with the integration of the remote sensing products (in our case the ESA CCI biomass map) and the NFI plot data. We will present results from two case studies with rather different NFI designs and forest characteristics: Peru and Tanzania.
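As an illustration of the model-assisted idea, the sketch below applies a simple difference estimator, correcting the map mean by the mean plot-versus-map residual at NFI locations; the numbers are synthetic, and the actual estimators were matched to each country's NFI sampling design:

```python
# Minimal model-assisted difference-estimator sketch: the map mean is
# corrected by the mean plot-vs-map residual observed at NFI plots.
import numpy as np

rng = np.random.default_rng(5)

map_mean = 142.0                             # mean map AGB over stratum, Mg/ha
map_at_plots = rng.normal(142.0, 35.0, 250)  # map values at NFI plot locations
plot_agb = map_at_plots * 0.85 + rng.normal(0.0, 15.0, 250)  # field AGB

residuals = plot_agb - map_at_plots          # map biased high in this example
agb_ma = map_mean + residuals.mean()         # model-assisted estimate
se = residuals.std(ddof=1) / np.sqrt(residuals.size)

print(f"Map-only mean:  {map_mean:.1f} Mg/ha")
print(f"Model-assisted: {agb_ma:.1f} +/- {1.96 * se:.1f} Mg/ha (95% CI)")
```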
For the case of the Peruvian Amazon, we found that the CCI biomass map tends to overestimate AGB. The most striking result was that, after calibrating the map using NFI data, the precision of our model-assisted stratum-wise AGB estimates increased substantially, by as much as 150% at the stratum level and 90% at the Amazon level, even though the initial relationship between the plot-based and space-based biomass estimates varied between strata and was not strong for some of them. Our results show that more precise AGB estimates can be achieved by integrating NFI and space-based biomass data (i.e. through a locally calibrated remote sensing product) in the tropics. In the hybrid inferential analysis, we found that the very different spatial support of the NFI plots compared to the remote sensing-based units contributed 86% of the total uncertainty in the map-NFI integration, while uncertainties in the plot-level reference data owing to measurement error and allometric model prediction uncertainty contributed 13%. When accounting for these uncertainties, the precision of our AGB estimates was still increased by as much as 90% and 60% at the stratum and Amazon levels, respectively. Our results show that Peru could benefit from the application of global biomass maps, which contribute to more precise and complete AGB estimates for GHG monitoring and reporting, particularly while the NFI is still incomplete.
The analysis for the case of Tanzania follows a similar framework, but the estimation procedures have been modified because of the different NFI plot and sampling designs. The quantitative results were not final at the time of abstract submission but will be included in the presentation to compare the two countries' conditions.
Terrestrial CO2 fluxes from land use, land use change, and forestry (LULUCF) accounted for about 12% of total anthropogenic CO2 emissions over the last 20 years, while land simultaneously provided a natural sink for about 29% of all CO2 emissions (Friedlingstein et al., 2021). Comparisons of anthropogenic LULUCF emissions in global models and in country reports to the United Nations Framework Convention on Climate Change (UNFCCC) revealed a substantial gap between the two estimates, globally amounting to about 5.5 Gt CO2/year (Grassi et al., 2021). This gap was mainly attributed to discrepancies in the areas considered as managed land in models and country reports, and to the (partial) inclusion of natural CO2 fluxes (e.g. from carbon removals by CO2 fertilization, forest fires, insect outbreaks) on managed land in many of the country reports (Grassi et al., 2021).
Building on this proposed explanation, we disaggregate country-reported CO2 emissions from LULUCF into contributions from anthropogenic and natural CO2 fluxes at the country level, considering eight countries with high emissions from LULUCF. We focus on natural fluxes in managed forests, since the majority of natural CO2 removals occurs in forested areas. For each country we use process-based models to estimate the natural CO2 fluxes on managed forests (identified through a mask of non-intact forest, since the country reports lack information on the spatial distribution of managed forests) and add them to the model-based LULUCF emission estimates. This approach is in line with the methodologies used by almost all investigated countries to estimate CO2 fluxes from LULUCF, which imply that natural fluxes on managed land are included in their CO2 flux estimates.
In the majority of the eight countries investigated, including natural fluxes on managed forests substantially reduces the gap between model estimates and country reports of LULUCF emissions, highlighting that the methodology suggested by Grassi et al. (2021) provides a feasible approach for making estimates more comparable at the country level as well. Countries include about half of the domestic natural CO2 fluxes in their LULUCF emission reports, which shifts the reported CO2 fluxes downward, i.e. towards lower emissions or, accordingly, towards larger sinks. The large gaps in Russia and the USA can be closed almost completely by adding natural fluxes on managed forests to model-based LULUCF emissions, revealing that the CO2 sinks from LULUCF reported by these countries are predominantly due to natural fluxes on managed forests. The gap is also considerably reduced in the EU, Indonesia and China, the country with both the largest gap and the largest reported CO2 sink among those investigated. These results highlight that the methodological discrepancies between country reports and model estimates of LULUCF emissions are primarily due to accounting definitions and need to be reconsidered for a proper assessment of country contributions to the global climate mitigation targets, as planned in the Global Stocktake in 2023.
While the presented approach provides an important step forward in bringing together model estimates and country reports, it does not close the gap completely. For some countries, estimates from models and country reports still differ substantially. For China, differences might be due to overoptimistic estimates of the actual effects of afforestation on CO2 fluxes in the country report, underestimation of the afforested area in the input datasets used by models to calculate LULUCF fluxes, and/or limitations in the capability of models to fully capture the large-scale afforestation in China. There are several potential reasons for the remaining gaps in general, including incomplete reporting by countries, uncertainties in historical land use dynamics, and model limitations. Moreover, most countries report the areas considered as managed without explicit information on their location, which prevents the precise spatial identification necessary for correctly quantifying natural fluxes on managed forests in models. For some of these factors, remote-sensing products might provide independent and spatially explicit estimates through satellite-derived classifications of land use and land cover change, quantifications of changes in biomass, or identification of managed forest areas. Additionally, the near-real-time availability of satellite data might be useful for providing a temporal extension of country reports, which are usually published with a lag of several years. Remote-sensing products might thus constitute an additional, strong pillar in establishing a sound and viable methodology for translating between model estimates and country reports of anthropogenic CO2 emissions from LULUCF.
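The bookkeeping behind the gap adjustment is simple, but sign conventions are easy to get wrong; the minimal sketch below spells it out, with positive values denoting emissions and negative values sinks. It is our own illustration with invented numbers, not the study's code.

```python
def lulucf_gap(country_report, model_estimate, natural_flux_managed):
    """Gap between a country-reported LULUCF flux and a model estimate,
    before and after adding the model-estimated natural flux on managed
    forests to the model total. All values in Gt CO2/year; positive =
    emission to the atmosphere, negative = sink.
    """
    raw_gap = model_estimate - country_report
    adjusted_gap = (model_estimate + natural_flux_managed) - country_report
    return raw_gap, adjusted_gap

# Illustrative (invented) numbers: a model sees net emissions of 0.3,
# the country reports a sink of -0.6, and natural fluxes on managed
# forests remove -0.8; adding the natural sink closes most of the gap.
print(lulucf_gap(country_report=-0.6, model_estimate=0.3,
                 natural_flux_managed=-0.8))   # -> (0.9, 0.1)
```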
References:
Friedlingstein, P., Jones, M. W., O'Sullivan, M., Andrew, R. M., Bakker, D. C. E., Hauck, J., Le Quéré, C., Peters, G. P., Peters, W., Pongratz, J., Sitch, S., Canadell, J. G., Ciais, P., Jackson, R. B., Alin, S. R., Anthoni, P., Bates, N. R., Becker, M., Bellouin, N., . . . Zeng, J. (2021). Global Carbon Budget 2021. Earth Syst. Sci. Data Discuss., 2021, 1-191. https://doi.org/10.5194/essd-2021-386
Grassi, G., Stehfest, E., Rogelj, J., van Vuuren, D., Cescatti, A., House, J., Nabuurs, G.-J., Rossi, S., Alkama, R., Viñas, R. A., Calvin, K., Ceccherini, G., Federici, S., Fujimori, S., Gusti, M., Hasegawa, T., Havlik, P., Humpenöder, F., Korosuo, A., . . . Popp, A. (2021). Critical adjustment of land mitigation pathways for assessing countries’ climate progress. Nature Climate Change, 11(5), 425-434. https://doi.org/10.1038/s41558-021-01033-6
The need for accurate information to characterize the dynamics of forest cover at the tropical scale is widely recognized, particularly to assess carbon losses from deforestation and forest degradation (Achard & House 2015). In particular, the contribution of degradation is a key element for REDD+ activities and is still missing from most national reports owing to the lack of reliable information at that scale. The main scientific gaps in the estimation of carbon losses at the pantropical scale from Earth Observation data concern (i) integrating the temporal dynamics of various types of forest disturbance; and (ii) assessing and combining uncertainties from ‘activity data’ (deforestation and degradation) and from emission factors (in particular in relation to biomass maps and degradation processes).
To address these gaps, we use a wall-to-wall tropical moist forest change product (Vancutsem et al. 2021, hereafter called TMF) at 30 m resolution, which depicts both deforestation and degradation over the past three decades and allows a better understanding of the interlinkage between the two processes. We combine the TMF annual changes with a pan-tropical map of aboveground biomass density (AGB) at 30 m resolution for the year 2000 (Harris et al. 2021) to quantify the annual losses in above-ground carbon stock associated with degradation and deforestation in tropical moist forests over the period 2001-2020. The carbon loss due to direct deforestation is accounted as a full loss of the carbon stock, while for degradation we consider a partial loss of the initial carbon stock, ranging from 20 to 75% depending on the intensity of the degradation processes (Andrade et al. 2017). We also account for induced carbon losses from forest fragmentation and edge effects as an indirect consequence of deforestation (Silva Junior et al. 2020) by considering that forest areas within a 120 m edge lose 50% of their biomass linearly over a 50-year period (Brinck et al. 2017). Carbon losses due to deforestation happening after a prior degradation event or edge effect are accounted as the carbon stock remaining after the degradation or fragmentation effect. Finally, we assess the sensitivity of our approach by replacing the AGB map for the year 2000 from Harris et al. (2021) with the ESA CCI Biomass map for the year 2010 at 100 m resolution (Santoro et al. 2021).
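The accounting rules above reduce to a small per-pixel decision. The sketch below encodes them directly, with the 20-75% degradation range, the 120 m / 50% / 50-year edge rule and the remaining-stock rule for deforestation after degradation taken from the text; it is an illustrative reading of the rules, not the authors' code, and the function name and event labels are ours.

```python
def carbon_loss(agb, event, degradation_fraction=0.5, years_since_edge=0):
    """Above-ground carbon loss for one pixel, per the accounting rules
    described above. `agb` is the initial carbon stock of the pixel;
    `event` is one of 'deforestation', 'degradation', 'edge' or
    'deforestation_after_degradation'.
    """
    if event == "deforestation":
        return agb                                  # full loss of the stock
    if event == "degradation":
        # partial loss, 20-75% depending on degradation intensity
        assert 0.20 <= degradation_fraction <= 0.75
        return agb * degradation_fraction
    if event == "edge":
        # within 120 m of an edge: lose 50% of biomass, linearly over 50 years
        return agb * 0.5 * min(years_since_edge, 50) / 50.0
    if event == "deforestation_after_degradation":
        # only the stock remaining after the prior degradation is lost
        return agb * (1.0 - degradation_fraction)
    raise ValueError(f"unknown event type: {event}")
```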
This approach allows us to produce estimates of the annual loss in above-ground carbon stock associated with deforestation and degradation in tropical moist forests. Our initial results show that deforestation and forest degradation led to losses of 580 TgC/year and 365 TgC/year, respectively, for the period 2011-2020 when using the Harris et al. map, or 395 TgC/year and 267 TgC/year, respectively, when using the ESA CCI map. These estimates show a lower contribution of degradation to the total carbon loss than recently reported from a coarser-resolution study (Baccini et al. 2017): 38% (Harris et al.) or 44% (ESA CCI) in our study versus 69% for the period 2003-2014 in that study. Further analysis will be performed to better assess the sensitivity of the estimates to the AGB maps and to the scenarios used for the degradation and fragmentation processes.
We intend to quantify the uncertainties of the area estimates from the TMF product following GFOI guidelines (GFOI, 2020). To carry out this accuracy assessment, a stratified random sampling scheme is used to create a reference dataset of 6,000 sample plots (of Landsat pixel size). The sample is optimized to target omission errors of disturbances in stable forest areas (by using a buffer zone around changed strata) and commission errors in disturbed forest areas. For each sample plot, the most recent high-resolution image available from the Google Earth platform, or from the Planet and Sentinel-2 archives, is visually interpreted, as well as the time series of available Landsat images from 1990 to 2020. The reference sample will be used to produce unbiased estimates of activity data with uncertainty ranges and, through combination of the sample with biomass maps, related estimates of carbon losses.
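For readers unfamiliar with the GFOI-style estimators, the sketch below shows the standard stratified estimator of class areas and their standard errors from a sample confusion matrix (cf. GFOI, 2020). It is a generic illustration, not the exact estimator the study will apply, and all names and numbers are ours.

```python
import numpy as np

def stratified_area_estimate(confusion, strata_areas):
    """Unbiased class-area estimates from a stratified random sample.

    confusion    : (k, k) sample counts; rows = map stratum,
                   columns = reference class
    strata_areas : area of each map stratum, same row order

    Returns estimated areas per reference class and their standard
    errors, using the standard stratified proportion estimator.
    """
    confusion = np.asarray(confusion, dtype=float)
    total = strata_areas.sum()
    n_h = confusion.sum(axis=1)                 # sample size per stratum
    w_h = strata_areas / total                  # stratum area weights
    p_hj = confusion / n_h[:, None]             # class proportions per stratum
    class_prop = (w_h[:, None] * p_hj).sum(axis=0)
    var = ((w_h[:, None] ** 2) * p_hj * (1 - p_hj)
           / (n_h[:, None] - 1)).sum(axis=0)
    return class_prop * total, np.sqrt(var) * total

# Tiny invented example: 2 strata (stable forest / disturbance), in ha.
areas, se = stratified_area_estimate(
    confusion=np.array([[480, 20], [30, 70]]),
    strata_areas=np.array([900_000.0, 100_000.0]))
```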
References
Achard F, House JI (2015) Reporting carbon losses from tropical deforestation with Pantropical biomass maps. Environ. Res. Lett. 10, 101002
de Andrade RB et al. (2017) Scenarios in tropical forest degradation: carbon stock trajectories for REDD+. Carbon Balance Manage 12, 6
Baccini A et al. (2017) Tropical forests are a net carbon source based on aboveground measurements of gain and loss. Science 358, 230–234
Brinck K et al. (2017) High resolution analysis of tropical forest fragmentation and its impact on the global carbon cycle. Nat Commun 8, 14855
GFOI (2020) Methods and Guidance from the Global Forest Observations Initiative, Edition 3.0
Hansen MC et al. (2013) High-resolution global maps of 21st-century forest cover change. Science 342, 850–853
Harris NL et al. (2021) Global maps of twenty-first century forest carbon fluxes. Nature Climate Change 11, 234–240
Santoro M, Cartus O (2021) ESA Biomass Climate Change Initiative (Biomass_cci): Global datasets of forest above-ground biomass for the years 2010, 2017 and 2018, v2. Centre for Environmental Data Analysis, 17 March 2021. doi:10.5285/84403d09cef3485883158f4df2989b0c
Silva Junior CH et al. (2020) Persistent collapse of biomass in Amazonian forest edges following deforestation leads to unaccounted carbon losses. Science Advances
Vancutsem C et al. (2021) Long-term (1990–2019) monitoring of forest cover changes in the humid tropics. Science Advances
The International Charter ‘Space and Major Disasters’ (“Charter”) was founded in 1999 by three initially participating space agencies: ESA, CNES and CSA (from Europe, France and Canada). It started operations in November 2000 with the goal of providing quick access to satellite-based information in cases of major disasters, free of any cost for the user. In 21 years, the Charter has been activated more than 730 times worldwide and has covered disasters in 129 countries (as of November 2021). The Charter is now composed of 17 agencies worldwide (ABAE, CNES, CNSA, CONAE, CSA, DLR, ESA, EUMETSAT, ISRO, INPE, JAXA, KARI, NOAA, ROSCOSMOS, USGS, UAESA, UKSA).
The Charter covers a large variety of disasters, both natural and man-made. About 50% of Charter activations are for flood events, including tsunamis. In addition, tropical storms, earthquakes, fires, landslides, volcanic eruptions, ice jams, oil spills and large industrial accidents are covered. The Charter functions on a best-effort basis.
The Charter illustrates that Earth-observing satellites can deliver key information that benefits the definition, planning, implementation and monitoring of disaster relief operations when human life and infrastructure are at stake. Its service is user-driven, i.e. the Charter becomes active after being triggered by an “Authorized User” (AU). Typically, Charter AUs are national disaster management agencies or emergency operations units. Besides these AUs, the Charter can also be triggered by Cooperating Bodies from the UN family in order to make Charter support available to user organisations within the UN, the Red Cross/Red Crescent, as well as countries that do not yet have a Charter AU.
The “Universal Access” policy allows national disaster management agencies which do not yet have Charter access to register with the Charter and become an AU after training. Countries struck by a disaster are also encouraged to bring in their own EO expertise in case of an activation, i.e. Charter activations can be managed by, and/or maps generated by, in-country capacities. Charter support is free of charge and is based on international collaboration among the Charter member agencies and their humanitarian motivation.
This paper will give an overview of the Charter’s work with a focus on user aspects. Various examples of information products resulting from Charter activations will be shown, with a special focus on products based on synthetic aperture radar (SAR) satellite data.
The idea for the International Charter ‘Space and Major Disasters’ came into being at the third UN space conference UNISPACE III held in Vienna in July 1999. In the face of increasing destruction and damage to life and property caused by natural disasters and conscious of the benefits that space technologies can bring to rescue and relief efforts, the European Space Agency (ESA) and the Centre national d'études spatiales (CNES) set out to establish the text of the Charter, which they themselves signed on 20 June 2000, while inviting other space agencies to do the same. The Canadian Space Agency (CSA) was the first to come onboard and sign the Charter on 19 October 2000. These three founding space agencies then went on to establish the architecture essential for implementing the Charter.
As one of the three founding members of the Charter, CSA will provide in this paper an overview of the Charter’s history, the role played by CSA in its development, and highlight the notable Charter milestones in its evolution. Examples of Charter images that have provided valuable assistance to disaster relief efforts will be presented. The benefits and strengths of the Charter from a Canadian perspective will be examined. The paper will conclude by discussing some of the Charter’s future challenges.
The Sendai Framework for Disaster Risk Reduction (SFDRR) provides an internationally agreed agenda for evidence-based policy with the overall aim of achieving progress in disaster risk reduction (DRR). The main components of the framework are four priorities for action and seven Global Targets to monitor progress in DRR based on thirty-eight indicators. However, monitoring progress towards the set targets is very often obstructed by a lack of available, accessible and validated data on disaster-related loss and damage, especially in developing countries. This weakens the accuracy, timeliness and quality of the SFDRR monitoring process. In the case of floods, which account for the highest number of people affected by hazards, there is a strong need for innovative and appropriate tools for monitoring and reporting impacts, for which Earth Observation (EO) can provide solutions. Previous attempts to address this gap via a geospatial model approach could not be validated due to a lack of in-situ measured loss and damage reference data. The research presented here addresses this gap by developing a geospatial model approach that was validated against reference data provided by partner national institutes from Ecuador. The methodology combined EO-based information products with additional geospatial data to derive quantitative measures for indicator B-5a of the SFDRR and validate them against the reference data. A semi-automated derivation of flood event characteristics from a full year of Sentinel-1 synthetic aperture radar data was applied to three Ecuadorian focal provinces that best represented the ecological diversity of the country in order to assess flood hazard. An automated thresholding algorithm was applied to delineate flooded areas, yielding flood statistics for the entire reference year of Sentinel-1 data. This assessment was complemented by census and agricultural in-situ data to spatially model SFDRR indicator B-5a for the year 2017. The validation procedure involved various steps to produce different models that combined elements of flood exposure and vulnerability, cross-validated against the reference data. A statistical analysis was used to assess the agreement of the models with the reference data and their ability to reproduce them. The validation procedure resulted in a geospatial model which also integrates flood vulnerability and shows high agreement with the reference data. However, the models were sensitive to the different ecological regions.
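The abstract does not specify which automated thresholding algorithm was used; as one common choice for delineating open water in SAR backscatter, the sketch below applies Otsu's method to a despeckled Sentinel-1 VV band in dB. It is an illustrative stand-in, not the authors' implementation.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's automatic threshold on a 1-D array of backscatter values (dB):
    pick the cut that maximises the between-class variance."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                  # weight of the 'dark' class
    mu = np.cumsum(p * centers)        # cumulative mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(sigma_b)]

def flood_mask(vv_db):
    """Open water is a dark, specular target in SAR imagery: flag pixels
    below the automatically selected backscatter threshold."""
    return vv_db < otsu_threshold(vv_db.ravel())
```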
This validated geospatial model approach is, to the best of the authors' knowledge, the first attempt to validate geospatially measured Sendai indicators against reference data. The derivation of open-source information products was conducted in close collaboration with the National Service for Risk and Emergency Management of the Government of Ecuador, the Ecuadorian Ministry of Agriculture, the Ecuadorian National Institute for Statistics and Census, the Ecuadorian country office of the United Nations Development Programme, and the United Nations Office for Disaster Risk Reduction. This ensures that the development and validation of the methodology are in line with the Ecuadorian national approach and the international approach to implementing the SFDRR. The information products were shared with partner institutions through a “training-of-trainers” capacity building workshop, enabling the sustainable transfer of results and learning outcomes for Sendai monitoring.
Consequently, this validated geospatial model approach provides an opportunity to support countries without information on disaster-related loss and damage in monitoring indicator B-5a of the SFDRR. Thanks to its retrospective ability to assess loss and damage, a baseline measure of the indicator can be derived as a reference for monitoring progress. The approach also successfully integrates characteristics of vulnerability into the UNDRR methodology to better capture the heterogeneous nature of flood impacts. Future research should seek the application and modification of the developed and validated model for additional Sendai indicators and targets, and above all explore solutions to overcome the sensitivity of the models to different ecological regions.
Satellite imagery has great potential for monitoring and understanding the behaviour of volcanoes, especially at the 45% of potentially active volcanoes without ground-based monitoring. Here, we present results from the Committee on Earth Observation Satellites (CEOS) Volcano Demonstrator. Our primary goal is to increase the uptake of satellite imagery, and especially Interferometric Synthetic Aperture Radar (InSAR), for volcano research and monitoring. We aim to support the use of satellite data for disaster risk reduction by providing otherwise restricted civilian data to volcano observatories and by supporting capacity building. The Volcano Demonstrator developed from a four-year Pilot (2013-2017) that demonstrated the usefulness of satellite data for monitoring large numbers of volcanoes. This Pilot focussed on Latin American volcanoes and detected unrest not observed by ground-based networks, as well as feeding into volcano observatory decisions about alert levels and sensor deployments. Satellite measurements of deformation contribute both to our knowledge of sub-volcanic magmatic zones and to volcano monitoring, especially when used in combination with satellite measurements of thermal anomalies and volcanic gases.
Since 2019, the Volcano Demonstrator has targeted volcanoes that present the greatest risks and potential for useful remote sensing measurements in Latin America, SE Asia and Africa. In some cases, we have provided a route for scientists at volcano observatories to access COSMO-SkyMed, TerraSAR-X and Pléiades imagery, while in others we have contributed to data processing and interpretation. We have responded to unrest and eruptions at volcanoes in St Vincent, Guatemala, Peru, Chile, Indonesia, DRC and the Canary Islands. Research based on CEOS demonstrator data has included novel retrievals of topographic change during eruptions (Fuego 2018, St Vincent 2021, Nevados de Chillán) and forensic analysis of recent eruptions with high spatial resolution imagery (Agung 2017, Anak Krakatoa 2018). In addition, we coordinate the acquisition, and where possible the analysis, of ‘baseline’ high-resolution radar images at active volcanoes to complement open datasets such as Sentinel-1. From our experience in the CEOS Pilot and Demonstrator programmes, we believe that integrating multiple SAR platforms is critical for volcano monitoring, because it maximises temporal coverage and because some volcanic events can only be observed at specific radar wavelengths, geometries or repeat times. Integration of these observations with satellite measurements of gas and thermal anomalies and with ground-based monitoring networks is critical for their interpretation.
The CEOS volcano demonstrator links volcano observatories around the world with experts in satellite remote sensing and space agencies. Our long-term aim is to make this project sustainable and to demonstrate the necessity of inclusive, international coordination of satellite imagery acquisition for volcanology.
On August 14th, 2021, a magnitude 7.2 earthquake hit the southern peninsula of Haiti. Two days later, the tropical storm Grace struck the three departments of Nippes, South and Grand’Anse. These combined events and the aftershocks caused many landslides, especially along the Enriquillo-Plantain Garden fault. Following these disasters, emergency mapping services were triggered, such as the International Charter Space and Major Disasters and the Copernicus Emergency Management Service Rapid Mapping.
The third CEOS Recovery Observatory (RO) demonstrator was activated on September 6th, 2021 on behalf of the European Union, the United Nations Development Programme (UNDP) and the World Bank, which were in charge of the Post Disaster Needs Assessment (PDNA). The priority was to estimate the number of landslides and their location throughout the peninsula. Given the size of the affected area and the difficulty of accessing the region, it was decided that the best way to obtain and provide this information was through satellite imagery. Earth Observation provides an overall and factual view of the situation.
Many contributors assembled to provide a rich package of information rapidly and on a best-effort basis. Automatic landslide extraction layers produced by NASA and BGC Engineering Inc. were provided. ICube-SERTIT, as RO liaison officer but also as a value-added producer and expert in Earth Observation, aggregated, validated and completed these data over the whole southern peninsula. Landslides were extracted from optical satellite imagery. The International Charter “Space and Major Disasters” provided a Pléiades-HR image over the national reserve of Macaya Park, located in one of the most affected areas. In order to cover the whole peninsula, Sentinel-2 and Landsat-8 data were used.
The Haitian National Center of Geospatial Information (CNIGS) shared several reference datasets (land use / land cover maps, a Digital Elevation Model…) which enabled the elaboration of statistical information concerning the landslides. Nearly 7,000 ha of landslides were mapped within the southern peninsula, especially affecting wooded and bushy areas in mountainous agro-ecological zones.
This landslide dataset and its related statistics helped the PDNA team to complete and refine their report. It also represents the principal input to the products of the second RO phase, which ends at the end of February 2022. To support the reconstruction phase, the RO will provide products addressing user needs concerning land cover and the hydrological changes induced by these disasters. Furthermore, the computation of a soil erosion susceptibility index is planned to identify new risks in the areas that need to recover.
To address the needs of the Haitian community in the south-west of the country involved in recovery and rehabilitation after the impact of Hurricane Matthew in October 2016, the Committee on Earth Observation Satellites (CEOS) triggered the four-year Recovery Observatory (RO) pilot project, led by the National Center for Geo-spatial Information (CNIGS) with technical support from the French National Centre for Space Studies (CNES) [www.recovery-observatory.org].
During the RO pilot, the Italian Space Agency (ASI) contributed scientific research aiming to develop, jointly with the RO management and technical team from CNES and CNIGS, a workflow which encompassed: (i) tasking of high-resolution satellite Synthetic Aperture Radar (SAR) data for regular observations of the priority areas defined by the Haitian users; (ii) image processing, also testing computing and algorithm resources that could ensure future sustainability; (iii) generation of value-added geohazard products providing information on terrain motion and change detection; and (iv) in-situ validation. In addition, a basic SAR imagery course was given to the URGEO master's programme at the Université d'État d'Haïti (UEH). The motivation to focus on SAR data (complementing more consolidated techniques based on optical satellite imagery) came from the Haitian partners' expressed need to approach the domain of SAR remote sensing and interferometry (InSAR), which, on the one hand, brings well-known advantages for land applications and disaster risk reduction in tropical regions but, on the other, requires computing facilities, training and capacity building to be feasibly used as a source of geospatial information.
The RO pilot was successfully completed at the beginning of 2021. A final workshop was held involving all the space agencies, national champions and users who had collaborated to demonstrate the benefits of better integrating information from satellite imagery and derived products into the post-Matthew reconstruction and recovery process. That event provided the opportunity to reflect collectively on the lessons learnt and the legacy of the RO pilot experience.
The present paper aims to contribute to the key objectives of ESA Living Planet Symposium session D1.05 “International Collaboration to better understand risks using satellite EO (GEO, CEOS, etc.)”, by:
- sharing the technical achievements and challenges in the use of repeated SAR data from high revisit sensors (e.g. Sentinel-1) and on-demand acquisitions from high resolution sensors (e.g. COSMO-SkyMed) for terrain motion and land surface change applications;
- highlighting the role that the collaboration with users and stakeholders can play to add value to SAR-based scientific products.
For the purposes of a wide-area regional analysis, Sentinel-1 data were processed and interferometric products were generated using ESA’s Geohazards Exploitation Platform (GEP) [Cigna et al. 2020]. In particular, we will showcase the value of accessing such infrastructure and its hosted processing routines, discuss the impact of possible external constraints that may limit the exploitation by users (e.g. skill gap, limited internet connectivity), and outline possible actions towards an effective use of this resource (e.g. dedicated training).
In parallel, a bespoke campaign to monitor three priority areas defined by the Haitian users, i.e. Jérémie, Camp Perrin and Carrière Arniquet, was undertaken with ASI's COSMO-SkyMed constellation using the Enhanced SpotLight mode, at 1 m spatial resolution and 16-day site revisit, in both ascending and descending modes, from December 2017 until the RO completion in December 2020. These data were used to generate maps that allowed the identification of different categories of surface changes, including:
(a) environmental, located along the estuarine section of the Grand’Anse River south of Jérémie and mixed with anthropogenic activities mostly related to quarrying and unregulated waste disposal [De Giorgi et al., 2021];
(b) geological, along the rock cliffs north-west of Jérémie, where the susceptibility of local lithologies to fracturing, toppling and lateral spreading may be worsened by the impact of hurricanes and storms, thus posing potential risks to small villages and isolated dwellings;
(c) urban, within the outskirts of Jérémie due to reconstruction, as well as new construction, in areas where the Sentinel-1 InSAR analyses highlighted ground motions;
(d) rural, due to landslides, to be distinguished from similar signals associated with agricultural practices along the slopes in the Camp Perrin area.
Each of the above categories was validated based on ground-truth data collected during a technical mission jointly carried out by ASI, CNIGS, CNES, ICube-SERTIT and the Bureau des Mines et de l'Energie d'Haïti (BME). Lessons learnt will be discussed in detail, in order to outline some recommendations on how to effectively integrate a range of SAR observations and products and pave the way for embedding them into the decision-making process for recovery and resilience building.
In this regard, the discussion will also encompass the analysis of the feedback received from Haitian stakeholders (e.g. the Civil Protection, mayors of the municipalities affected by hurricane Matthew, the Comité Interministériel d'Aménagement du Territoire) during the dedicated workshops held in Jérémie and Port-au-Prince to present these SAR- and InSAR-derived products, as well as from the group discussion at the final workshop. Among the feedback:
- the integration between the Sentinel-1 InSAR ground motion products and the COSMO-SkyMed-based urbanization map was positively assessed for purposes of urban planning, as a satellite evidence-base to highlight areas where cascading hazards may be triggered and thus new urbanization is not recommended;
- in light of the proven benefits of InSAR and SAR change detection techniques, there is the need for capacity building not only to transfer knowledge, but also to create a technical capability (also through the involvement of the local university) to exploit the RO pilot technical legacy after the project completion.
References:
- Cigna, F.; Tapete, D.; Danzeglocke, J.; Bally, P.; Cuccu, R.; Papadopoulou, T.; Caumont, H.; Collet, A.; de Boissezon, H.; Eddy, A.; Piard, B.E. Supporting Recovery after 2016 Hurricane Matthew in Haiti With Big SAR Data Processing in the Geohazards Exploitation Platform (GEP). Proceedings of 2020 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Waikoloa, HI, USA, 26 September–2 October 2020; pp. 6867–6870. https://doi.org/10.1109/IGARSS39084.2020.9323231
- De Giorgi, A.; Solarna, D.; Moser, G.; Tapete, D.; Cigna, F.; Boni, G.; Rudari, R.; Serpico, S.B.; Pisani, A.R.; Montuori, A.; Zoffoli, S. Monitoring the Recovery after 2016 Hurricane Matthew in Haiti via Markovian Multitemporal Region-Based Modeling. Remote Sensing 2021, 13 (17), 3509. https://doi.org/10.3390/rs13173509
Description:
Due to the pandemic, according to UNESCO's World Heritage Committee, as of early April last year 71% of the 1,121 World Heritage sites had been closed, while 18% were only partially open. The unprecedented experience of the pandemic changed, and is still changing, the way we look at, understand and use our Heritage. What future, and which technologies, can reshape awareness, preservation and enjoyment? There is a need to start seriously considering business models for Heritage, which is becoming an emerging market for EO thanks also to the integration of multiple data sources (space and non-space) and innovative techniques (LiDAR, drones, local laser scanning, VR, XR, Artificial Intelligence, crowdsourcing, etc.). At the same time, the available downstream services still act independently, making their interoperability and potential synergies challenging.
To date, Copernicus and the Contributing Missions already provide support for the management of Heritage during emergencies (especially in the case of geo-hazards), as well as routine mapping, monitoring and preservation of cultural heritage. However, this is only the “tip of the iceberg”: there is an additional range of geo-applications, and therefore related geo-business, that can bring benefits in the area of Heritage. The engagement of multi- and inter-disciplinary communities to fill the gap between experts (remote sensing, Cultural Heritage managers, AI experts, social scientists, civil protection and actors from impact sectors) represents a key factor for strengthening communication and collaboration between EO experts and Heritage managers, as well as the connection between data providers and end-users/site managers.
Education and capacity building on the use of geo-applications to support Heritage is an additional emerging area that will also be included in the proposed Agora.
While Digital Twin Earth will without doubt be extremely beneficial for humankind, its implementation is complex due to the tremendous amount of data involved. A “Digital Twin Heritage Site” implies the same technologies but is much more affordable due to its smaller size and the possibility of in-situ verification. Selected heritage sites could therefore be used to strengthen the know-how behind Digital Twin Earth, and at the same time open a market related to the 3D and 4D modelling, visualization and presentation of selected heritage sites.
Therefore, is Heritage a candidate for Digital Twin Earth? How can interdisciplinarity support the generation of processes and practices for the use of technologies for Heritage? This session will focus on how to increase the awareness, the effectiveness of capacity building, and the understanding of how innovative technologies can efficiently meet the needs of Heritage authorities, tourist authorities, park rangers, as well as the overall research community.
Speakers:
• Günter Schreier, German Aerospace Center (DLR), German Remote Sensing Data Center Directorate
• Dr. Jyoti Hosagrahar, Deputy Director for the World Heritage Center at UNESCO
• Dr. Gerasopoulos Evangelos, Institute for Environmental Research and Sustainable Development, Greece; Research Director
• Dr. Sarah Parcak, Professor in the Dept of Anthropology at the University of Alabama at Birmingham, USA
• Prof. Elizabeth Brabec, Landscape Architecture and Regional Planning, University of Massachusetts Amherst, and Secretary General of the International Scientific Committee on Cultural Landscapes of ICOMOS/IFLA
Description:
As part of its efforts to further improve the reliability and timeliness of its reporting to member states, the Food and Agriculture Organization (FAO) has partnered with the European Space Agency (ESA) to improve the exchange of expertise, knowledge and relevant data for the joint development of Earth Observation applications responding to the mandate of FAO. The two organisations have signed a Memorandum of Understanding to facilitate synergies in R&D efforts and to scale up solutions, in particular FAO's capacity development work aimed at enabling countries to use Earth Observations for agricultural statistics and SDG monitoring, by standardizing methods and applications, increasing the accuracy of results and improving the sustainability of solutions.
The LPS22 Agora will discuss with FAO experts working in different thematic domains the following topics:
• Requirements and challenges for using satellite Earth Observation data;
• Exchange of data sets from integrated household/field surveys, essential for calibration and validation of Earth Observation models;
• Developing innovative Earth Observation algorithms, products and applications relevant for the mandate of FAO, making full use of the latest IT capabilities, such as cloud computing;
• Demonstrating and validating Earth Observation capabilities for data generation under FAO's mandate.
Speakers:
o Opening: Welcome and keynotes
Maurice Borgeaud, Head of the Science, Applications & Climate Department – European Space Agency (ESA)
Pietro Gennari, Chief Statistician – UN Food and Agriculture Organisation (FAO)
o Moderated panel discussion: Moderator Benjamin Koetz, Head of Sustainable Initiatives Office, ESA
Livia Peiser – FAO, Land and Water Division
Lorenzo DeSimone – FAO, Statistics Division
Erik Lindquist – FAO, Forestry Division
Pierre Defourney, Earth & Life Institute - Université Catholique Louvain
Radoslaw Guzinski, DHI-GRAS
Description:
The New EU Forest Strategy aims to protect forest ecosystems and provide a healthy future for people, planet and prosperity by ensuring healthy, biodiverse and resilient forests across Europe and the world.
European forests are under increasing strain, partly as a result of natural processes but also because of increased human activity and pressures. Climate change has also brought to light previously hidden vulnerabilities aggravating other destructive pressures such as pests, pollution and diseases.
The new EU Forest Strategy aims to overcome these challenges and unlock the potential of forests for our future, in natural and urban environments. It is anchored in the European Green Deal and the EU 2030 Biodiversity Strategy, and it recognises the central and multi-functional role of forests in achieving a sustainable and climate-neutral economy by 2050 while ensuring that all ecosystems are restored, resilient and adequately protected.
Spontaneous forest regrowth through natural succession is the main force driving the increase of forested areas in the EU, mostly associated with the abandonment of agriculture and rural areas. Additionally, there is potential for extending forest and tree coverage in the EU through active and sustainable re- and afforestation and tree planting.
This concerns mainly urban and peri-urban areas (including e.g. urban parks, trees on public and private property, greening buildings and infrastructure, and urban gardens). It is important to capitalise on this potential, as enhanced afforestation is among the most effective climate change and disaster risk mitigation strategies in the forest sector and can create substantial job opportunities, e.g. in relation to collecting and cultivating seeds, planting seedlings and ensuring their development, as well as providing socio-economic benefits to local communities. Moreover, exposure to green and forested areas can greatly benefit people's physical and mental health.
The EU Biodiversity Strategy for 2030 sets out a pledge to plant at least 3 billion additional trees by 2030, in full respect of the ecological principles of planting and growing the right tree in the right place and for the right purpose.
The roadmap sets out clear criteria for tree planting, counting and monitoring, which will be essential to track progress for meeting the target. This will build on the expertise of the Commission and the European Environment Agency to provide assessments of trends and the state of implementation.
This Agora will focus on the New EU Forest strategy and the urban green with various contributions from EFI, City of Bonn, UN/FAO, AlberItalia foundation, as well as ESA and GMATICS.
SPEAKERS:
-Robert Mavsar (European Forest Institute), Bonn City Representative
-Prof. Fabio Salbitano (University of Florence)
-Michela Conigliaro (UN/FAO)
-Klaus Scipal (ESA)
-Prof. Marco Marchetti (AlberItalia Foundation)
Company-Project:
eOdyn - Space4SafeSea
Description:
Accurate, high-resolution estimates of ocean surface currents are both a challenging issue and a growing end-user requirement. Yet the global circulation is only indirectly monitored through satellite remote sensing; to benefit the end-user community (science, shipping, fishing, trading, insurance, offshore energy, defence), current information must be accurately constructed and validated from all relevant available resources. Since 2015, eOdyn has been developing a transformative method to derive surface currents from ship motion and Automatic Identification System (AIS) data [1][2]. Currents derived from AIS data, a complementary and so far under-exploited in-situ observing system, have the potential to complete the surface current picture with the high-frequency part of ocean dynamics in areas with intensive marine traffic. The presentation will focus on recent results, using AIS data and ship behaviour analysis to produce reliable high-resolution ocean surface current measurements for different currents of interest (off the South African coastline, in the Indian Ocean and in the Mediterranean Sea). Comparisons between AIS-derived surface currents and independent data sets from altimetry satellites, HF radars and drifters will be presented. The use of this new technology to complement existing measurement systems will be demonstrated.
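eOdyn's actual ship-behaviour analysis is proprietary and not described here; purely to illustrate the underlying kinematic principle, the sketch below derives a surface current as the difference between a ship's velocity over ground (AIS SOG/COG) and its velocity through the water (speed log and heading). The names and the simplifications (no leeway, perfect sensors) are ours.

```python
import numpy as np

def surface_current(sog_kn, cog_deg, stw_kn, heading_deg):
    """Naive kinematic illustration: current = velocity over ground
    (from AIS SOG/COG) minus velocity through the water (speed log and
    heading). Angles in degrees clockwise from north, speeds in knots;
    returns eastward and northward current components in m/s.
    """
    kn = 0.514444                                  # knots to m/s
    cog, hdg = np.radians(cog_deg), np.radians(heading_deg)
    u = kn * (sog_kn * np.sin(cog) - stw_kn * np.sin(hdg))  # eastward
    v = kn * (sog_kn * np.cos(cog) - stw_kn * np.cos(hdg))  # northward
    return u, v

# A ship logging 12 kn through the water on heading 090 but tracking
# 12.5 kn over ground on course 085 implies a weak north-eastward drift.
print(surface_current(12.5, 85.0, 12.0, 90.0))
```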
Description:
Following the publication of the Intergovernmental Panel on Climate Change’s Sixth Assessment Report (AR6), leading researchers and policy makers discuss the best available bio-geophysical science underpinning past, present, and future climate change and the need for systematic observations from space.
Speakers:
• Simonetta Cheli (Director of Earth Observation Programmes and Head of ESRIN)
Chair/Moderator :
• Anna Pirani (IPCC WG1 Technical Support Unit)
Panel :
• Sonia Seneviratne (ETH Zurich)
• Richard Jones (UK Met Office)
• Marie-Fanny Racault (University of East Anglia)
• Inge Jonckheere (FAO)
• Han Dolman (Director of the Royal Netherlands Institute for Sea Research, NIOZ)
Company-Project:
VITO - openEO platform
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
This training will use a working Python code example to demonstrate how openEO can support deep learning and classical machine learning use cases. The use case will show how to extract agricultural parcel boundaries from Sentinel-2 input; a minimal sketch of what such a workflow looks like follows the prerequisites below.
Learning goals:
• Collecting training data with openEO
• Training and inference of a random forest model
• Integrating a TensorFlow-based model in a user-defined function for inference in openEO.
Prerequisites:
• Basic understanding of ML concepts
• Python programming
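As a flavour of the openEO Python client used in the training, the sketch below loads a Sentinel-2 cube and computes a median NDVI composite as a simple classification feature. The backend URL, extents and band names are illustrative assumptions (collection and process availability differ per provider), and the machine learning steps covered in the session are omitted.

```python
import openeo

# Connect to an openEO backend; the URL is illustrative and
# authentication flows differ per provider.
connection = openeo.connect("openeo.cloud").authenticate_oidc()

# Load a Sentinel-2 L2A cube over an area and season of interest.
cube = connection.load_collection(
    "SENTINEL2_L2A",
    spatial_extent={"west": 5.0, "south": 51.0, "east": 5.2, "north": 51.2},
    temporal_extent=["2021-04-01", "2021-09-30"],
    bands=["B04", "B08"],
)

# Per-pixel NDVI, then a median composite over time: a typical
# feature for parcel delineation or classification models.
ndvi = cube.ndvi(red="B04", nir="B08")
composite = ndvi.reduce_dimension(dimension="t", reducer="median")

# Fetch the result for local model training or inference.
composite.download("ndvi_median.tiff")
```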
Description:
Purpose
The network event aims to bring together the space and agriculture sectors. This applies to various actors, including scientists, start-ups, small and medium-sized enterprises (SMEs) and large companies, universities and research institutions, as well as associations, ministries and public authorities.
The main purpose is to support policy making by illustrating the enormous potential of Earth Observation for climate reporting in the agricultural sector of Germany and the EU.
Context:
The INNOspace network Space2Agriculture (www.space2agriculture.de) provides a communication platform between the space sector and agriculture/forestry. The objective is to establish cross-industry networking and consolidate synergies. New commercialization potential is to be identified, technology cooperation initiated and joint projects created. Exchange with other industries opens the view to new ideas and enables product and process innovations through actively pursued technology transfer.
The network offers the opportunity to exchange ideas and initiate projects with respect to:
• Earth Observation, Satellite Communication, and Satellite Navigation;
• Technology transfer between space and agriculture (spin-offs and spin-ins);
• Space-based services in support of biodiversity protection and sustainable agriculture as well as climate change, food security and policy-making.
Scope
The open one-hour session consists of two modules:
a) three talks (e.g. EO/agriculture scientist, EO/agriculture company, representative of the German Federal Ministry of Food and Agriculture),
b) a panel discussion, or an interactive workshop.
Relevance to LPS22
The event is particularly related to these main topics of the LPS:
- Nurture public & private partnerships (-> bring these sectors together)
- Empower the green transition (-> policy making)
- Advance future technology for EO missions (-> user consultation)
Programme:
-09:00 Space2Agriculture: Welcome & Introduction: Dr. Robin Ghosh (Project Leader Space2Agriculture, Department Innovation & New Markets, German Space Agency at DLR)
-09:10 ICT-AGRI-FOOD network: digitally enabled, sustainable and transparent agri-food value chains: Dr. Johannes Pfeifer (Coordinator ICT-AGRI-FOOD, European Research Affairs, Federal Office for Agriculture and Food (BLE))
-09:20 Intelligent agricultural systems and the increased necessity for environmental information: Dr. Thilo Steckel (Advanced Engineering, CLAAS E-Systems GmbH)
-09:30 Spatial value for resilient agricultural production: contributions from the space industry sector: Dr. Axel Relin (Head of Agriculture, GAF AG)
-09:40 Linking the digital transition and the transition towards sustainable food systems: Dr. Bettina Baruth (Deputy Head of Food Security Unit, Joint Research Centre, European Commission)
-09:55 Panel discussion of the speakers + Q&A with the audience
-10:30 World Café
1) Digital Transformation in Agriculture (moderated by BLE)
2) Climate Change and Food Security (moderated by German Space Agency)
3) Sustainable and Biodiversity-friendly Agriculture (moderated by ESA)
-11:00 End
Company-Project:
i-Sea - ESA Coastal Erosion
Description:
During this demo session, we propose to showcase the Space for Shore geoportal, which promotes and distributes erosion monitoring products over more than 3,000 km of coastline across Europe. Hosted by Store4EO and built by Deimos, it includes free browsing, downloading and simple analysis facilities, and an extensive portfolio of dedicated products derived from all exploitable satellite images recorded since the 1980s. This ample archive results from a three-year collaborative project led by the i-Sea company, realised in the framework of the ESA Coastal Erosion project (spaceforshore.eu).
In this session we will demonstrate how to use the geoportal, exhibit the products and discuss its role in supporting decision making.
Company-Project:
Cloudferro
Description:
The demo will present EO4UA, a bottom-up initiative that aims to support Ukrainian and international authorities in assessing environmental losses by provisioning CREODIAS processing capabilities combined with a large repository of Earth Observation (EO) satellite data and higher-level products generated by end-users. The repository will contain “core” data sets (e.g. Sentinel imagery, crop classifications, boundaries of agricultural fields, etc.) which are indispensable for versatile environmental analyses. Results of analyses conducted by end-users, together with the generated products, will also be stored in the repository to facilitate subsequent studies. Current members of the EO4UA initiative are the Kyiv Polytechnic Institute, CloudFerro, Airbus and Cent UW, with scientific support from JRC and ESA support through the Network of Resources (NoR). More information about the EO4UA initiative can be found at: https://cloudferro.com/en/eo4ua/
Within the LPS 2022 demo it is planned to inform potential end-users about the data sets available through the EO4UA initiative and to present how they can be accessed and further analysed on the CREODIAS platform via JupyterLab (https://creodias.eu/creodias-jupyter-hub). It is also foreseen to show preliminary results on the monitoring of crop production and selected environmental components (to be determined). The EO4UA demo also intends to facilitate networking between researchers to enable new projects aimed at supporting Ukraine.
Duration : 30 Minutes
Description:
Meet the permafrost scientists and interact with ESA's animated globe.
Description :
The importance of data in development policies and processes is increasingly being emphasised. The 2030 Agenda on sustainable development represents a major milestone towards development policies that are data-driven and evidence-based. Mainstreaming technical innovation to fill data gaps and mobilizing the data revolution to overcome inequalities between data-poor and data-rich countries are high priorities. The integration of geospatial information and Earth Observations with traditional statistical data, combined with new emerging technologies such as big data processing and analytics, offers unprecedented opportunities to make a quantum leap in the capacity of countries to efficiently track all facets of sustainable development. With the recognition that data is at the heart of the SDGs comes the reality that the least developed countries will have the most difficulty with the related institutional and technical challenges.
Since the adoption of the 2030 Agenda on sustainable development in 2015, geospatial information and Earth Observations have been presented as game-changers for countries to fully achieve their sustainable development goals. With the SDG agenda at its midway point towards the 2030 milestone, the aim of the Agora Open Forum on SDGs is to invite senior representatives from key organisations (space agencies, UN agencies, national statistical offices, the geospatial community, data brokers) to review progress on the uptake of EO in SDG processes and discuss the opportunities and challenges still lying ahead for successfully integrating EO technology within the national monitoring and reporting systems on SDGs.
The participants will discuss EO achievements and challenges from their perspectives, which can be scientific, technical, programmatic or policy-based in nature. The objective of the open forum is to raise awareness of the results achieved so far and to further emphasise the importance of joining efforts to offer robust and cost-effective solutions that help countries better achieve their sustainable development goals, monitor progress towards their targets, inform development policies and ensure accountability and transparency.
Speakers:
Sara Minelli - UNCCD, Programme Officer
Stuart Crane - UN Environment, Programme Management Officer, Freshwater Unit (video)
Dennis Mwaniki - UN Habitat, Spatial Data Expert, Data and Analytics Unit (video)
Pietro Gennari - FAO, Chief Statistician
Sven Kaumanns - Federal Statistical Office of Germany (DESTATIS), Head of the environmental-economic accounting and SDGs
Argyro Kavvada - NASA , GEO EO4SDG Executive Director
Antje Hecheltjen - German Agency for International Cooperation (GIZ), GEO LDN co-chair
Emmanuel Pajot - European Association of Remote Sensing Companies (EARSC), Secretary General
The Forum will be moderated by:
Laurent Durieux - GEO Secretariat
Marc Paganini - ESA
Company-Project:
Cloudferro - DIAS
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
The “Introduction to CREODIAS” training is dedicated to everyone who is looking for a platform that provides easy access to Earth Observation data with an integrated processing environment.
CREODIAS is one of the European DIAS (Data and Information Access Services) platforms, aimed at facilitating access to satellite data and enabling its processing in the cloud, as well as the creation of users' own applications and services.
During the training, attendees will learn step by step how to access the CREODIAS platform, obtain satellite datasets and prepare a cloud environment for data processing.
The following topics will be presented:
• CREODIAS platform architecture
• Copernicus data collections and Very High Resolution data available on CREODIAS
• CREODIAS user tools for browsing, selecting and processing EO products
• Setting up Virtual Machine
• Hands-on: Vegetation analysis in SNAP, using Sentinel datasets
Attendees should be registered CREODIAS users. Please follow this link to register: https://portal.creodias.eu/register.php
Description:
Supporting national action towards Paris Goals – the evolving role of observations
This session will provide an overview of EO capabilities and opportunities in relation to:
• Monitoring greenhouse gas emissions, sources and sinks
• The “globally local” agenda for EO applications
• Supporting actions and decision making frameworks to build climate resilience
Chair: Susanne Mecklenburg
Keynote and moderator: Yana Gevorgyan (Director of the GEO secretariat)
Panel
• Joanna Post (UNFCCC): Latest updates on the UNFCCC Paris Agreement 1st Global Stocktake, how to involve Earth Observation data and ensure consistency in the reporting across all countries
• Michaela Hegglin (University of Reading): Earth Observation in support for the UNFCCC Paris Agreement
• Chris Merchant (University of Reading): The ‘globally local’ agenda - the major challenges for future climate activities
• Chris Rapley (University College London): Creating Agency to Act
Aeolus was launched over three and a half years ago and the L2B winds have been operationally assimilated at ECMWF for over two years.
The latest results from the assessment of the impact of L2B HLOS winds in ECMWF's global NWP system will be presented. In particular, we present the impact of the second reprocessed Aeolus dataset (July 2019 - October 2020), during the early part of which Aeolus winds had the largest signal-to-noise ratio and hence the largest positive impact. This early FM-B laser period gives an impression of what could at least be achieved with a potential operational ESA/EUMETSAT Doppler Wind Lidar follow-on mission, for which a significantly better SNR than Aeolus is sought.
In Observing System Experiments (OSEs), Aeolus provides sizeable, statistically significant improvements in short-range forecasts, as verified by observations sensitive to temperature, wind and humidity, peaking at ~200 hPa in the extratropics and ~150 hPa in the tropics. Longer-range verification shows the positive impact to be strongest at the 2-3 day forecast range, e.g. ~2% improvement in root mean square error for vector wind and temperature in the tropical upper troposphere and lower stratosphere and the polar troposphere. Positive impact out to ten days is found in tropical lower-stratospheric wind and temperature. This impact appears to be larger with the 2nd reprocessed dataset than with the 1st reprocessing or the NRT datasets; even some impact on the 500 hPa geopotential height in the Northern Hemisphere is found. Experiments with variational QC modifications and a bias correction of the Rayleigh-clear winds as a function of temperature will be presented if time permits.
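To make the verification metric concrete, the sketch below computes the normalized RMSE change between a control OSE and an Aeolus OSE with a paired significance test on matched forecasts. It is a generic illustration of the kind of comparison quoted above, not ECMWF's verification suite, and it ignores the autocorrelation corrections a full system applies.

```python
import numpy as np
from scipy import stats

def rmse_change(err_ctrl, err_aeolus):
    """Relative RMSE change between a control OSE and an experiment
    assimilating Aeolus winds. `err_ctrl` and `err_aeolus` are forecast
    errors matched pairwise by start date; negative output means the
    Aeolus experiment verifies better (e.g. -0.02 is a 2% improvement).
    A paired t-test on the squared errors gives a rough significance
    level (a full system would correct for temporal autocorrelation).
    """
    rmse_c = np.sqrt(np.mean(np.square(err_ctrl)))
    rmse_a = np.sqrt(np.mean(np.square(err_aeolus)))
    _, p = stats.ttest_rel(np.square(err_aeolus), np.square(err_ctrl))
    return (rmse_a - rmse_c) / rmse_c, p
```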
The operational Forecast Sensitivity Observation Impact (FSOI) metric shows that Aeolus makes a useful contribution to the global observing system, with the Rayleigh-clear and Mie-cloudy winds providing similar overall short-range forecast impact in 2020-2021. Relative FSOI for the 2019 reprocessed dataset shows that Aeolus is amongst the most important satellite instruments; a good result for a demonstration mission. The relative FSOI decreased to ~2% in late 2021, versus ~5% in July 2019 (offline testing) when the maximum atmospheric-path signal was available, owing to the decreasing SNR. Also, Aeolus winds were absent for large fractions of 2020 and 2021 because they were flagged invalid (blocklisted) during instrument testing (aimed at understanding and mitigating the signal loss).
The tropics are the region with the largest uncertainties in the initial states for numerical weather prediction (analyses). Analysis uncertainties are largest in the tropical upper troposphere and the lower stratosphere (UTLS). One of the reasons is a lack of wind profiles which are more useful than temperature profiles in the tropics. This classical dynamical effect was described by J. Smagorinsky as “Not all data are equal in their information-yielding capacity. Some are more equal than others.”
Despite their relatively small number and a relatively large random error in the Rayleigh channel, Aeolus wind profiles have a positive impact on the quality of global weather forecasts, with the maximum positive impact in the UTLS. Here we discuss one process contributing to the forecast improvements in the ECMWF model: vertically propagating Kelvin waves, which are a major contributor to tropical variability.
Previous work showed that short-range forecast errors project on Kelvin waves significantly more in the easterly phase of the quasi-biennial oscillation (QBO) than in the westerly phase. Furthermore, it was shown that missing variance associated with Kelvin waves explains a large part of underdispersiveness of the ECMWF ensemble prediction system in the medium range in the tropics. It is unclear how well Kelvin wave dynamics is represented in global climate models, as they still poorly simulate the QBO and Madden-Julian oscillation, and their connections.
By filtering Kelvin waves from ECMWF analyses produced with and without Aeolus winds, we demonstrate that Aeolus alters the vertical structure of Kelvin waves in the layers with the strongest shear within the UTLS. Changes in the Kelvin-wave zonal wind within the UTLS due to Aeolus data can be up to +/-4 m/s. Similar to the Kelvin wave dynamics themselves, the impact of assimilated Aeolus winds varies on periods of 1-3 weeks. The coupling between the phase of the QBO and the Aeolus winds is argued to happen through the Kelvin waves. The studied period, May-September 2020, was characterised by a weakening easterly phase of the QBO. We suggest that the greater improvement of ECMWF forecasts in the tropical tropopause layer by Aeolus winds in May 2020, compared with later in summer 2020, was associated with the state of the QBO.
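As a rough illustration of the kind of space-time filtering involved (a sketch of the generic wavenumber-frequency technique, not the authors' actual processing; the band limits are assumed), eastward-propagating components of a gridded zonal-wind field can be isolated as follows:

```python
import numpy as np

def kelvin_filter(u, dt_days=1.0, kmin=1, kmax=14, tmin_days=2.5, tmax_days=20.0):
    """Keep only eastward-propagating components of u(time, lon) within an
    assumed Kelvin-wave zonal-wavenumber/period box. Illustrative only."""
    nt, nx = u.shape
    U = np.fft.fft2(u)                            # 2-D FFT over (time, lon)
    freq = np.fft.fftfreq(nt, d=dt_days)          # temporal frequency, cycles/day
    wnum = np.fft.fftfreq(nx, d=1.0 / nx)         # integer zonal wavenumbers
    F, K = np.meshgrid(freq, wnum, indexing="ij")
    # A wave exp(i(kx - wt)) moves eastward; in the numpy FFT convention this
    # corresponds to frequency and wavenumber indices of opposite sign.
    eastward = (np.sign(F) * np.sign(K)) < 0
    in_band = ((np.abs(K) >= kmin) & (np.abs(K) <= kmax) &
               (np.abs(F) >= 1.0 / tmax_days) & (np.abs(F) <= 1.0 / tmin_days))
    return np.real(np.fft.ifft2(np.where(eastward & in_band, U, 0.0)))
```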
ESA’s Doppler Wind Lidar mission, Aeolus, is an important new satellite observing system. It provides global coverage of wind-profile information from the surface to 10 hPa. ECMWF was the first numerical weather prediction (NWP) centre to operationally assimilate the Aeolus horizontal line-of-sight (HLOS) wind information, on January 9, 2020. Impact studies have shown that the assimilation of Aeolus wind observations improves the quality of NWP forecasts of wind, temperature and humidity, particularly in the upper troposphere and lower stratosphere, with the main improvements seen in the tropics. This shows that Aeolus provides a valuable dataset that complements the existing Global Observing System (GOS).
The quality of the Aeolus HLOS wind information has evolved since launch. Reductions in instrument performance have degraded the data, which has been compensated by improvements in the Aeolus ground processing software and by reprocessing activities. To achieve a long time series of high-quality Aeolus wind measurements, the Aeolus DISC (Data, Innovation, and Science Cluster) carried out a second reprocessing campaign that provided a new, high-quality dataset covering the period from July 2019 to October 2020.
An ongoing ESA-funded project at ECMWF is investigating whether the assimilation of Aeolus L2B winds improves the predictability of high-impact and extreme weather events. The focus is on tropical cyclones and European extra-tropical storms. The impact of Aeolus wind assimilation on European forecast bust statistics is also under evaluation. For this purpose, NWP observing system experiments with and without Aeolus L2B winds are being performed using the second reprocessed dataset and the operational data. A set of European and tropical severe weather events from July 2019 until the end of 2021 has been identified using various databases. The impact of Aeolus wind assimilation on these case studies and on the occurrence of forecast busts will be evaluated.
Preliminary results will be presented and discussed.
The jet stream represents a meridional barrier for air masses, but also for energy fluxes. So-called "streamer events" in the upper troposphere / lower stratosphere are an example of how this barrier can be disrupted. During such events, large-scale air masses from lower latitudes are irreversibly mixed into the circulation at higher latitudes, with various consequences for atmospheric chemistry and the energy and momentum balance. Streamers are the consequence of poleward planetary wave breaking, which modulates the jet stream and thus affects the exchange of air masses and energy between the equator and the poles. Aeolus wind measurements allow the derivation of atmospheric wave structures on different temporal and spatial scales, in particular above the oceans, where wind measurements from ground-based instruments are sparse and streamer events most likely occur.
We use Aeolus L2B wind measurements to derive planetary wave activity in the form of a so-called dynamical activity index (DAI). The DAI represents the mean amplitude of planetary waves up to wavenumber 10 in the mid-latitudes. A comparison of the DAI based on Aeolus data and on ERA-5 reanalysis data is presented. First case studies are shown addressing the structure of streamers and their relation to planetary wave breaking. Due to planetary wave breaking, gravity waves might be excited at the flanks of streamers. The flanks of streamers are characterized by comparatively strong wind shear. To identify the flanks of a streamer, and thus possible gravity wave source regions, we calculate the wind gradients along the track.
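As an illustration of how such an index can be computed from winds sampled along a latitude circle (our reading of the definition sketched above; the authors' exact DAI formulation may differ):

```python
import numpy as np

def dynamical_activity_index(u_lon):
    """Illustrative DAI: mean amplitude of zonal wavenumbers 1-10 from winds
    on a regular latitude circle. Assumes evenly spaced longitudes."""
    n = u_lon.size
    spec = np.fft.rfft(u_lon - u_lon.mean())
    amps = 2.0 * np.abs(spec[1:11]) / n   # amplitude of wavenumbers 1..10
    return amps.mean()
```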
Since its launch in 2018, ESA’s Aeolus satellite has provided global height-resolved measurements of the horizontal wind in the troposphere and lower stratosphere. It carries the world’s first spaceborne high spectral resolution wind lidar, the Atmospheric Laser Doppler Instrument (ALADIN). With its high-power laser, which operates at a wavelength of 354.8 nm, ALADIN can acquire measurements from roughly 30 km altitude down to either the ground or the highest optically thick cloud layer. Besides the height-resolved wind profiles, Aeolus also provides information on the optical properties of clouds and aerosols.
The main objective of Aeolus is to improve numerical weather prediction (NWP). Multiple NWP centres have already shown the positive impact of Aeolus data and started its operational assimilation. However, detailed wind information is not only beneficial for NWP, but also for atmospheric dynamics research. Many dynamical features show characteristic wind patterns and/or are strongly influenced by the prevailing wind. Especially in the upper troposphere / lower stratosphere region, Aeolus measurements can provide valuable information for the investigation of dynamical features such as gravity waves.
In this study, we will highlight the use of Aeolus data to investigate the vertical change in gravity wave activity throughout the upper troposphere and lower stratosphere. This is of particular relevance for understanding the importance of oblique gravity wave propagation on a global scale. With its height resolved wind measurements, Aeolus provides a unique dataset for such investigations.
Additionally, we will show how the unique combination of wind measurements and optical properties provided by Aeolus can be used to gain a global picture of wave-induced polar stratospheric cloud formation. Polar stratospheric clouds play a crucial role in stratospheric ozone depletion above the poles. They form when temperatures in the polar winter stratosphere fall below a certain threshold. The negative temperature perturbations of gravity waves can lead to the formation of polar stratospheric clouds even if the synoptic-scale temperature is still well above the formation threshold. Most current global chemistry-climate models do not yet contain this formation mechanism of polar stratospheric clouds through gravity waves. However, several modelling groups have already started to look into this issue. We will provide these modelling groups with a climatology and test dataset of polar stratospheric clouds in the Arctic/Antarctic, with a special focus on wave-induced polar stratospheric clouds. The Aeolus dataset is well suited for creating such a climatology.
The European Space Agency's Aeolus satellite mission is designed to provide global information on wind speed from the ground up to 30 km, which is in high demand for weather forecasting. The Aeolus satellite was placed into orbit in August 2018 and its payload consists of the sophisticated ALADIN lidar instrument, which measures wind velocity by sensing the Doppler spectral shift of the laser echo scattered by different layers of the atmosphere.
Since the global atmospheric circulation is largely driven by middle-atmosphere dynamics, it is essential that climate models take proper account of the dynamical processes. Small-scale atmospheric waves, called internal gravity waves (IGWs), pose a particular challenge for models, and inaccurate parameterization of IGWs can dramatically bias predictions of future atmospheric circulation changes.
In this paper, we explore the capacity of Aeolus wind observations to capture and resolve dynamical processes in the upper troposphere and lower stratosphere (UTLS), such as IGWs, at various temporal and spatial scales. The perturbations in the vertical profiles of Rayleigh horizontal line-of-sight (HLOS) wind velocity, associated with IGW activity, are derived by subtracting Aeolus-derived “background” wind profiles from the individual measurements. Then, the global distribution of IGW kinetic energy in the UTLS and of the vertical wavelength is derived using Aeolus measurements over the entire mission lifespan. The derived evolution of IGW activity over the Aeolus mission lifetime is analyzed in consideration of the time-varying performance of the ALADIN instrument. The latter is evaluated using two French ground-based Doppler wind lidars operating at a mid-latitude site (Observatoire de Haute-Provence) and at a southern tropical site (Maïdo Observatory on La Réunion island), as well as collocated meteorological radiosoundings.
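The perturbation extraction can be illustrated with a simple sketch in which the background is a smoothed version of each profile (the smoothing choice here is an assumption; the study's actual background estimation may differ):

```python
import numpy as np
from scipy.signal import savgol_filter

def igw_perturbations(hlos_wind, window=15, polyorder=3):
    """Split an HLOS wind profile into a smooth 'background' and small-scale
    perturbations attributed to gravity waves. Illustrative sketch only."""
    background = savgol_filter(hlos_wind, window_length=window, polyorder=polyorder)
    perturbation = hlos_wind - background
    # HLOS-wind kinetic-energy proxy per unit mass (m^2/s^2)
    ke_proxy = 0.5 * np.mean(perturbation ** 2)
    return background, perturbation, ke_proxy
```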
The global spatiotemporal distribution of IGWs from Aeolus observations is compared with that derived from global high-resolution temperature profiling data provided by the GPS radio occultation (RO) GRAS (GPS Receiver for Atmospheric Sounding) instruments operating onboard the MetOp satellites. The comparison of the Aeolus and RO-derived global IGW distributions allows conclusions to be drawn on the capabilities and limitations of Aeolus wind profiling for studying UTLS dynamics.
Along with global warming, marine plastic pollution has been described as one of the most pressing matters that our oceans will face in the coming decades. In this context, as part of the Decade of Ocean Science for Sustainable Development (2021-2030), Sustainable Development Goal target 14.1 calls for the prevention and significant reduction of all kinds of marine pollution by 2025, particularly from land-based activities, including marine litter. However, accurate observations of the sources, composition and densities of floating marine litter in the world’s oceans are sparse and lacking. Remote sensing can play a significant role in the detection and monitoring of marine litter, and to this end adequate in situ calibration and validation data are essential.
Towards this goal, since 2018 we have run a series of experimental field campaigns, the Plastic Litter Projects, that aim to enrich the scientific community’s understanding of the spectral properties and behavior of floating marine litter (FML). By developing, constructing and deploying artificial floating targets containing various types of FML, we aim to produce a comprehensive remote sensing image database which can be used for the development, calibration and validation of FML detection algorithms. Throughout the years, we have used various types of marine litter items, such as PET bottles and HDPE bags, as well as natural floating materials such as reeds, in the construction of artificial floating targets. Our efforts have focused mostly on ESA’s multispectral Sentinel-2 satellites, mainly due to their frequent revisit intervals, relatively high spatial resolution and, very importantly, open-access data provision.
In the past two years, we have shifted our focus away from small-scale re-deployable artificial floating targets and started moving towards semi-permanent target infrastructure. We have also focused on answering the scientific community’s call for replicability in the experimental field campaigns, both with regard to the materials being used and to the experimental set-up. To that end, in 2020, under the scope of the ESA-funded OSIP project “Plastic Litter Project: Detection and monitoring of artificial plastic targets with satellite imagery and UAV”, we investigated the use of a single reference material that can act as a representative proxy to replace the use of different FML items in artificial floating targets. In addition, we developed and deployed two different types of prototype targets, in order to assess the possibility of long-term deployment.
In 2021, we constructed two large artificial floating targets for long-term deployment in the Gulf of Gera, on the island of Lesvos, Greece. The first target consisted of a circular 28 m diameter HDPE pipe frame, on which we fastened a series of white HDPE mesh sheets, creating an effective target area of about 600 m2. The second target was made of more than 350 wooden planks, each 3 m long, in order to represent natural floating marine debris, approximating the same effective target area. Deployment of such large surface-area targets guarantees the acquisition of at least one Sentinel-2 10x10 m pixel that contains solely the target material. However, neither the HDPE mesh nor the wooden-plank target completely covers the water surface, hence part of the signal response is attributed to water-leaving reflectance, resulting in a more realistic set-up.
The targets were deployed in the Gulf of Gera for a period of 4 months, from June to September 2021, resulting in a total of 22 cloud-free Sentinel-2 acquisitions. In addition to the Sentinel-2 data, we acquired very-high-resolution UAV images for the production of orthophoto maps, hyperspectral UAV measurements in the range of 400 to 900 nm, in situ spectrometer measurements in the same spectral range, as well as ancillary metadata including wind speed, water turbidity (using a Secchi disk) and incident light intensity. The experimental set-up and the large number of acquisitions provided data under different conditions and target configurations, offering the opportunity to examine the effect that parameters such as biofouling and degree of submersion have on the spectral response of the target materials. Combining the two targets for several Sentinel-2 overpasses also allowed for acquisitions with mixed FML and natural debris scenarios.
Biofouling accumulations can potentially affect the spectral signature of FML, both in the visible and in the infrared range. Deployment of the floating targets in productive waters such as those of the Gulf of Gera means that the target materials are susceptible to biofouling. The long-term deployment of the large surface-area HDPE target during the PLP2021 acquisition campaign has allowed us to produce images of the target with and without biofouling accumulations, thus presenting the possibility to assess the effect of biofouling on FML reflectance. Preliminary analysis results show that biofouling mostly affects the intensity of the FML signal, specifically in the near-infrared range, without major effects on the signal shape.
Together with the above-mentioned parameters, turbidity and wind velocity are also factors that can influence the water-leaving reflectance and the FML signal, affecting spectral classification methodologies. Turbidity is especially important for semi-enclosed basins with very productive and turbid waters, which are especially challenging in terms of spectral analysis. However, the somewhat varying turbidity of the Gulf of Gera waters presents the possibility to correlate detection accuracy with the degree of turbidity. This can be especially useful in operational detection applications in river outflow situations, in estuaries and at river mouths, from where a high percentage of FML originally reaches the ocean.
Initial results suggest that spectral classification methodologies such as partial unmixing algorithms can successfully be used for FML detection using the mean spectral signature acquired during the PLP2021 deployment period; a generic sketch of such a scheme is given below. Further analysis of the correlation between the above-mentioned parameters and the resultant FML signal can prove especially useful in operational scenarios of FML detection and monitoring.
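For illustration, one common form of partial unmixing is the classical matched filter, sketched here under a linear mixing assumption (the target spectrum and scene statistics are placeholders, not the PLP2021 processing chain):

```python
import numpy as np

def matched_filter_scores(cube, target):
    """Score each pixel by its projection onto the target spectrum, whitened
    by the scene background statistics. cube: (pixels, bands) reflectances;
    target: (bands,) mean FML signature. Generic sketch only."""
    mu = cube.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(cube, rowvar=False))
    d = target - mu
    w = cov_inv @ d / (d @ cov_inv @ d)   # normalised so a pure target scores 1
    return (cube - mu) @ w                # high scores flag candidate FML pixels
```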
Oceans receive solid waste from anthropogenic activities, a significant amount of which is made of plastics. The amount of plastic debris in the ocean and coastal areas is steadily increasing and is now a major global environmental problem. Accumulation of marine debris poses considerable threats to aquatic species, ecosystems and human beings too, as microplastics are eaten by fish and shellfish and consequently enter our food chain. At the global scale, the 2030 Agenda for Sustainable Development, adopted by the United Nations in 2015, calls for action to conserve and sustainably use the oceans, seas and marine resources through Sustainable Development Goal (SDG) No. 14. Among the SDG 14 targets, target 14.1 calls for the prevention and significant reduction of marine pollution of all kinds, in particular from land-based activities, including marine debris and nutrient pollution. From the European perspective, the Marine Strategy Framework Directive (MSFD) requires the EU Member States to ensure that "properties and quantities of marine litter do not cause harm to the coastal and marine environment".
For monitoring marine plastic litter, ground-based monitoring systems and field campaigns provide precise information on the quantity and quality of the litter. Nevertheless, ground-based monitoring campaigns for collecting marine litter data are limited, time-consuming, expensive, require great organisational effort, and provide very limited information on the spatial and temporal dynamics of marine debris.
Earth Observation by satellite has the potential to contribute significantly to marine plastic litter monitoring thanks to its global synoptic point of view. However, remote sensing of marine plastic litter is in its infancy, and it is a significant scientific and technological challenge.
In 2020, a consortium led by Planetek Italia (Italy), including the National Technical University of Athens (Greece) and the Environmental Prevention and Protection Agency of Puglia Region (Italy), participated in an ESA call for ideas, the Discovery Campaign on Remote Sensing of Plastic Marine Litter, with a novel idea for assessing the feasibility of marine plastic litter detection from space, which was selected.
The project was titled "Crowdsourcing, Copernicus and Hyperspectral Satellite Data for Marine Plastic Litter Detection, Quantification and Tracking" or REACT.
The REACT project focused on providing a proof-of-concept on remote sensing of marine plastic litter by developing the following methodology to detect plastic litter offshore and onshore. The methodology exploited data fusion of multispectral (MS) satellite data from Sentinel-2 and WorldView and hyperspectral (HS) satellite data from PRISMA, together with in situ data collection, and took advantage of two different approaches. The first was based on Spectral Signature Unmixing (SSU), and the second was based on Machine Learning (ML) methodologies.
The main objectives of the project were to:
• Assess how plastic litter can be detected and possibly quantified with current and future remote sensing technology;
• Develop adaptive indices insensitive to biases induced by sunglint on satellite radiometric products, and indicate the constraints of current satellite missions under various atmospheric and illumination conditions;
• Exploit data fusion methods combining remote sensing data with high spectral resolution (PRISMA hyperspectral, Sentinel-2) and high spatial resolution (PRISMA panchromatic, WorldView) to increase the detectability of marine plastic litter;
• Explore SSU methodology for sub-pixel detection of floating marine plastic debris;
• Explore ML techniques for detecting plastic litter;
• Conduct controlled experiments under real conditions to better understand the effect of the atmosphere and the illumination conditions on the spectral properties of marine plastics in visible and infrared wavelengths.
The key target user of the REACT project was the Environmental Prevention and Protection Agency of Puglia Region, Italy (ARPA Puglia), which is in charge of detecting and monitoring marine plastic litter in the framework of the European legislation (i.e., MSFD).
The critical target users' needs can be summarised as the capacity to:
• support field campaigns by environmental agencies to implement data collection plans by field operators;
• perform regular monitoring of marine litter over broad areas;
• provide spatial and temporal distribution of marine litter;
• forecast paths of floating litter;
• identify potential sources of plastic litter entering the marine environment and forecast likely locations of beached litter;
• achieve a cost-efficient, repeatable, and flexible methodology, from the local to the national level.
During the project, controlled experiments were performed to better understand the effect of the atmosphere and the illumination conditions on the spectral properties of marine plastics at visible and infrared wavelengths. Twelve floating plastic targets were constructed for the experiment. Their sizes were selected according to the spatial resolution of the data expected to be achieved by image fusion. Targets were realised with four types/compositions of plastic materials in various colours: 1) low-density polyethylene (tarps in white, yellow and green), 2) polyethylene terephthalate (transparent water bottles, green oil bottles), 3) polystyrene (sheets for building insulation in cyan), and 4) all the above materials in equal surface extent. The 12 targets were placed offshore and onshore during satellite passages. In-situ measurements using a spectroradiometer were also carried out during the controlled experiments. The controlled experiments were realised in Mytilini and Kolpos Geras, on the Greek island of Lesvos, with the support of the remote sensing group of the University of the Aegean.
The MS and HS satellite imagery collected during the controlled experiments were processed with image fusion techniques to obtain merged images with higher spatial resolution than the initial high-spectral-resolution images. SSU techniques and ML algorithms were then applied to the fused images.
SSU contributes to the extraction of information at the sub-pixel level. Its main purpose is to detect the distinct spectra in the fused MS and HS scenes, which can represent different materials, and to estimate their apparent quantity in a pixel in terms of a fraction. Endmembers correspond to the distinct signals, while abundances refer to the fractions of these endmembers within a mixed pixel. The project achieved marine plastic litter detection by separating the endmember spectra that best characterise plastic materials and water. The abundance maps of these plastic material spectra led to the detection of the plastic targets.
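A minimal sketch of this linear unmixing idea, assuming the endmember spectra are already known (not the project's actual implementation):

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, endmembers):
    """Estimate non-negative endmember abundances for one mixed pixel under a
    linear mixing model. pixel: (bands,); endmembers: (bands, n_endmembers).
    Illustrative sketch; the sum-to-one step is an approximation."""
    abundances, _ = nnls(endmembers, pixel)
    total = abundances.sum()
    return abundances / total if total > 0 else abundances
```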
Plastic targets were also detected by supervised and unsupervised ML algorithms. The output was a probability map representing the probability that a pixel contains plastic.
The main findings of the project are summarised below:
• Plastic and water spectral signatures are significantly correlated in the original images and in the fused HS data. Principal Component Analysis (PCA) and the Gram-Schmidt Adaptive (GSA) algorithm gave better results in terms of spectral discrimination between water and plastics;
• Sunglint correction provided no benefit in the spectral discrimination between water and plastics;
• Image fusion based on matrix factorisation performed better than the deep learning methods. The Coupled Nonnegative Matrix Factorisation (CNMF) method produced better results when plastic targets were placed offshore. On the other hand, the Hyperspectral Superresolution (HySure) method presented slightly better results with targets placed onshore;
• SSU was able to detect the plastic targets, although some manual tuning was necessary. Similar conclusions hold for the cases where plastic spectral indices were used;
• PS and LDPE were detectable, while PET was not;
• SSU on fused PRISMA data efficiently detected floating plastic accumulations as small as 2.40x2.40 m (about 1/2 of the panchromatic PRISMA band resolution);
• SSU on fused Sentinel-2 and WorldView data efficiently detected floating plastic accumulations as small as 0.60x0.60 m (about 1/4 of the WorldView resolution);
• ML algorithms provided promising results in plastic detection, albeit with a small training dataset;
• Both SSU and ML require land and shallow waters masking;
• No significant results were obtained for the onshore targets with either SSU or ML.
This work was supported in part by the Discovery Element of the European Space Agency's Basic Activities under ESA Contract 4000131235/20/NL/GLC (REACT Project: Crowdsourcing, Copernicus and Hyperspectral Satellite Data for Marine Plastic Litter Detection, Quantification and Tracking).
Over the past few years, the spectral signature of marine litter has been studied for both virgin and weathered plastic samples, and several index-based spectral techniques have been developed to detect them. In this research, we present artificial intelligence (AI) results based on RGB, multispectral and shortwave infrared (SWIR) data for the detection of floating marine litter, using different sensors ranging from fixed cameras and drones to airborne and high-resolution satellite-based multispectral data. All the data from the different cameras have been preprocessed considering the type of camera and the installation set-up. To this end, several modifications have been made to VITO’s cloud processing workflow for UAV data, named MAPEO.
Part of the study was carried out by creating a synthetic plastic accumulation zone in Belgium, while the other part was conducted at one of the hotspot sources of marine litter in Hanoi, Vietnam, as part of the ESA project “Artificial Intelligence and drones supporting the detection and mapping of floating aquatic plastic litter (AIDMAP)”. For fixed cameras, Detectron2 from Facebook AI Research, which provides state-of-the-art detection and segmentation, was used for detecting individual floating litter items. A Faster Region-based Convolutional Neural Network (Faster R-CNN) was applied, and an accuracy of 88.74% was obtained at an intersection-over-union threshold of 0.50.
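For orientation, a minimal Detectron2 inference setup with a Faster R-CNN backbone might look as follows (the weights file and score threshold are placeholders, not the AIDMAP configuration):

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

# Start from a standard Faster R-CNN config and point it at fine-tuned weights.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = "litter_model_final.pth"      # hypothetical fine-tuned weights
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5       # illustrative confidence cut-off

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("frame.jpg"))      # boxes/classes of floating litter
```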
For detecting litter accumulation zones from multispectral UAV images, Decision Tree, Random Forest and Support Vector Machine (SVM) classifiers were applied to discriminate litter from no-litter pixels using several widely used spectral indices, band ratios and normalized band ratios. Random Forest classification showed better performance than the other algorithms.
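A sketch of such a litter/no-litter Random Forest classification with scikit-learn, where the feature files are hypothetical placeholders for the per-pixel index table:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Per-pixel features: spectral indices, band ratios, normalized band ratios.
X_train = np.load("train_features.npy")   # placeholder feature table
y_train = np.load("train_labels.npy")     # 1 = litter, 0 = no litter

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Probability that each test pixel belongs to an accumulation zone.
proba = clf.predict_proba(np.load("test_features.npy"))[:, 1]
```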
Finally, first tests under outdoor conditions have been performed with a multispectral SWIR camera with six spectral bands tuned to be sensitive at specific wavelengths. These wavelengths correspond to spectral features of particular interest for marine plastic according to the literature. The SWIR camera was installed as a fixed camera on a bridge in Belgium and its performance in detecting plastic objects was tested through several campaigns; here we present the detection results to the community for the first time.
The pristine wilderness landscape of Patagonia is continuously threatened by anthropogenic marine debris. Here, we present a new approach to detect marine litter on beaches using UAV multispectral imagery. An eBee UAV with the Parrot SQ® sensor was used to detect marine litter on two beaches in the Patagonian fjords, one in the northern part (Playa Muerta) and one in the south, close to the San Rafael glacier (Playa Leopardo). The multispectral imagery was processed and calibrated in order to estimate surface reflectance. Then, support vector machine, random forest and automatic digital classification were used to train models and detect several litter types, such as styrofoam and buoys. Results show 3 tons of litter per 16 km2, with an accuracy of about 6%. Coloured buoys are classified better than styrofoam, which might be confused with bright pixels such as branches, shells and rocks. The combination of machine learning and the very high spatial resolution imagery provided by UAVs is shown to provide better information about marine litter in Patagonia, which is mainly produced by aquaculture systems.
In the past decade, plastic marine litter (PML) research has grown into a serious endeavour, exposing various aspects of plastic marine litter. For example, several studies have investigated and described the diversity and complexity of PML, as well as the need to address not only floating or semi-floating plastic marine litter, but also plastic debris distributed in the first metres of the water column. Although remote sensing techniques could offer unique opportunities for addressing PML from the local to the global scale, it is agreed by the scientific community that a single sensor cannot provide all the required information, since PML can vary considerably in type, size, shape, chemical composition, buoyancy, and the way it manifests in the environment. While several optical passive remote sensing techniques (e.g. panchromatic cameras, multi-/hyperspectral sensors, thermal infrared cameras) have already been applied to the remote sensing of plastics, either in controlled experiments or in real marine scenarios, the potential of LIDAR techniques to address the marine litter issue is still almost unexplored.
The fluorescence LIDAR is an active remote sensing technique that can be exploited to acquire information on the chemical-physical characteristics of a volume target by using a pulsed laser. Fluorescence is an inelastic process due to the spontaneous emission of photons after the absorption of the incident radiation by the material system, given that such radiation belongs to an absorption line or band of the target components. The excited level decays, after a characteristic time called the fluorescence lifetime, into lower energy levels and into the ground level. The signal acquired by the LIDAR system can be analysed either in the time domain, e.g. by using a streak camera, in order to perform lifetime spectroscopy, or in the frequency domain, e.g. by using a spectrometer coupled to a linear array detector, in order to perform fluorescence spectroscopy. In both cases, the fluorescence LIDAR system can provide information on the chemical-physical properties of the target.
In this paper, we present a set of experimental tests, carried out in the laboratory, using an in-house developed hyperspectral fluorescence LIDAR system. Different types of raw plastics and ocean-harvested objects have been investigated with respect to their fluorescence properties. The LIDAR sensor was operated from a distance of about 10 m, while the samples were measured both in dry conditions and while immersed in a simulated water column, down to a depth of about 1 m. Besides the results of the experiments on the detection and characterisation of plastic marine litter, several factors that can affect such detection and characterisation are discussed: these include photobleaching effects on plastics, fluorescence of dissolved organic matter, and Raman scattering due to the water molecules.
This study was funded by the Discovery Element of the ESA's Basic Activities, contract no. 4000132184/20/NL/GLC.
Since the 1950s, positively buoyant plastic objects have been accumulating at the surface of the oceans, transported by currents, wind and waves. Small millimeter-sized pieces (less than 4.75 mm), known as microplastics, number in the trillions at the global scale and pose an increasing risk to marine biota. Floating microplastics concentrate along large-scale convergence zones associated with Ekman dynamics in the five major ocean basins, but a comprehensive analysis of their spatial and temporal distributions is lacking, and the monitoring tools to assess global distributions are not well developed. Through our recently funded NASA project Spaceborne Quantification of Ocean MicrO-Plastics (SQOOP), we are conducting a feasibility study of remote detection of surface microplastics, in the context of different surface particle properties and uncertainties in atmospheric correction, with multiple advanced detection techniques including hyperspectral and polarimetric approaches.
Here we will present preliminary results evaluating: 1) Geospatial and temporal trends in existing ocean color products across hot spots that may be related to enhanced reflectance from plastics; 2) Simulations of ocean surface hyperspectral reflectance using simple mixed pixel models to the Top of the Atmosphere (TOA) under different microplastic concentrations and atmospheric conditions; and 3) Simulations of microplastics using robust vector radiative transfer models for coupled ocean-atmosphere systems that include polarization. Detectability of floating marine microplastics will be conducted using statistical information content assessment in terms of current and future instrument characteristics, microplastic quantity and nature, and external conditions, such as observation geometry and atmospheric state. The sensitivity analysis will compare simulated and ocean color data to historic data of microplastic concentrations measured in surface net tows across the world ocean. The influence of floating microplastics on atmospheric correction and on standard ocean color retrievals of aerosols and of other ocean properties will also be explored.
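As a starting point, the surface boundary condition for the kind of mixed-pixel simulation described in item 2 can be written as a fractional-cover-weighted combination of spectra (an illustrative sketch only; the SQOOP radiative transfer models are considerably more complete):

```python
import numpy as np

def mixed_pixel_reflectance(r_water, r_plastic, f_plastic):
    """Linear mixed-pixel surface reflectance for a pixel partially covered
    by floating microplastics. r_water, r_plastic: spectra (same grid);
    f_plastic: fractional cover in [0, 1]. Illustrative assumption."""
    return (1.0 - f_plastic) * np.asarray(r_water) + f_plastic * np.asarray(r_plastic)
```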
In collaboration with visual artist Oskar Landi, we are producing an unconventional and engaging art exhibit to engage the public further in this pressing environmental problem. This unique art and science collaboration also provides new insights and informs our scientific models of sea surface dynamics including sun glint, sea foam, and floating plastics.
In preparation for the FLEX satellite mission, we have conducted a series of field campaigns during which we aimed to collect real-world data on SIF in order to (i) develop a complete FLEX-like reference data set of field data that can be used to develop and test FLEX satellite data processing concepts and data products, (ii) quantitatively understand the dynamics within the SIF signal and their quantitative link to structural and functional vegetation traits, to support the development of vegetation stress indicators, and (iii) evaluate and test the components required for a FLEX Cal/Val concept.
We conducted two large FLEX campaign activities, namely the AtmoFLEX and FlexSense campaigns, in the years 2017, 2018 and 2019, which aimed to collect complete optical and SIF data from various ecosystems across five European countries. Additionally, we finalized the PhotoProxy activity (funded by EO4Science), during which we included further field data from the US cornbelt (in cooperation with colleagues from the University of Nebraska, USA), aiming to improve our knowledge of the option to include SIF in GPP predictions. Finally, we completed the AtmoFLEX campaigns, which focussed on the acquisition of atmospheric data and the development and testing of ground-based SIF reference systems (FloX system). These ‘FLEX driven’ campaign activities were integrated with other synergistic campaign activities, such as SARSense, SurfSense, or CHIME (Vila-Guerau de Arellan et al. 2020, Mengen et al. 2021). With this presentation we will give an overview of the main outcomes and the conclusions we could draw from these challenging campaign activities.
For a correct retrieval of the relatively weak solar-induced fluorescence signal from a satellite platform, a stringent atmospheric correction is essential. This challenge was already identified during the selection of the FLEX satellite mission, and the tandem concept between FLEX and Sentinel-3 is a direct answer to it. During the above-outlined campaigns, we could close a crucial gap by collecting a complete data set of synchronously recorded detailed atmospheric data, ground-based and airborne radiation and SIF retrievals, as well as top-of-atmosphere satellite data acquired from the tandem constellation of Sentinel-3A and Sentinel-3B during the commissioning phase. At five time slots, we successfully underflew the Sentinel tandem constellation with the airborne FLEX-like sensor HyPlant, while flying over dedicated atmospheric measurement stations. This dataset is complemented by a 14-month-long time series of FloX measurements, more than 700 flight lines from the high-performance imaging spectrometer HyPlant, and various associated measurements of biophysical plant traits, carbon and water fluxes, and dedicated functional measurements for monitoring the effects of environmental stresses on plant health. Using this large reference dataset, which is currently used for the development and testing of the FLEX satellite and FLEX data processing scheme and which delivered quantitative sensitivity parameters on the impact of atmospheric characterization for SIF retrieval, we could:
- show that solar-induced fluorescence is sensitive to early signs of vegetation drought and shows significant changes of its far-red peak already 3 days after the onset of drought, while reflectance indicators were only sensitive after 7 days, once drought effects had already left visible marks (Damm et al., submitted),
- detect the effect of summer heat waves, which significantly reduced photosynthetic carbon uptake, as clear changes in the solar-induced fluorescence signal predicted by previous model assumptions (Martini et al. 2021),
- establish a data downscaling approach, which can be used to bring top-of-canopy SIF measurements closer to leaf-level SIF, which is the relevant input parameter for many mechanistic vegetation flux models (Siegmann et al. 2021),
- deliver an overview of relative uncertainty and bias estimates for FloX time series and derive requirements towards a calibration and validation network for SIF satellite missions (e.g. FLEX, Buman et al., submitted).
Thus, executing this ambitious and integrated campaign concept, we could (i) lay the basis for the development of the FLEX Cal/Val scheme, (ii) confirm some hypotheses on SIF being a sensitive drought and heat stress indicator, and (iii) greatly extend the data basis on the natural variability of the SIF signal across different ecosystems, the diurnal and seasonal cycle, and as a reaction to environmental extremes. The campaign data has been delivered to ESA and is currently being made freely available via the ESA campaign webpage and data portals.
Selected publications that emerged from these campaign activities:
Siegmann B., Cendrero-Mateo M.P., Cogliati S., Damm A., Gamon J., Herrera D., Jedmowski C., Junker-Frohn L.V., Kraska T., Muller O., Rademske P., van der Tol C., Quiros-Vargas J., Yang P. & Rascher U. (2021) Downscaling of far-red solar-induced chlorophyll fluorescence of different crops from canopy to leaf level using a diurnal data set acquired by the airborne imaging spectrometer HyPlant. Remote Sensing of Environment, 264, article no. 112609, doi: 10.1016/j.rse.2021.112609.
Martini D., Sakowska K., Wohlfahrt G., Pacheco-Labrador J., van der Tol C., Porcar-Castell A., Magney T.S., Carrara A., Colombo R., El-Madanay T., González-Cascón R., Martin M.P., Julitta T., Moreno G., Rascher U., Reichstein M., Rossini M. & Migliavacca M. (2021) Heat-wave breaks down the linearity between sun-induced fluorescence and gross primary production. New Phytologist, accepted.
Vila-Guerau de Arellan J., Ney P., Hartogensis O., de Boer H., van Diepen K., Emin D., de Groot G., Klosterhalfen A., Langensiepen M., Matveeva M., Miranda G., Moene A., Rascher U., Röckmann T., Adnew G., Brüggemann N., Rothfuss Y. & Graf A. (2020) CloudRoots: integration of advanced instrumental techniques and process modelling of sub-hourly and sub-kilometre land-atmosphere interactions. Biogeosciences, 17, 4375-4404, doi: 10.5194/bg-17-4375-2020.
Mengen D., Montzka C., Jagdhuber T., Fluhrer A., Brogi C., Baum S., Schüttemeyer D., Bayat B., Bogena H., Coccia A., Masalias G., Trinkel V., Jakobi J., Jonard F., Ma Y., Mattia F., Palmisano D., Rascher U., Satalino G., Schumacher M., Koyama C., Schmidt M., Vereeken H. (2021) The SARSense campaign: air- and space-borne C- and L-band SAR for the analysis of soil and plant parameters in agriculture. Remote Sensing, 13, article no. 825, doi: 10.3390/rs13040825.
The FLuorescence EXplorer (FLEX) will be the first mission designed to monitor the photosynthetic activity of terrestrial vegetation through the retrieval of solar-induced chlorophyll fluorescence and the true vegetation reflectance (Level-2 products). To provide reliable estimates of photosynthesis efficiency on large spatial and temporal scales, the intermediate Level-2 products demand a specific uncertainty determination. The validation of the Level-2 products is based on the comparison between the retrieved FLEX products and ground truth measurements. However, similar to the satellite products, ground truth measurements also have associated uncertainties and variances, mainly related to 1) instrument performance and calibration, 2) retrieval error, and 3) site-dependent spatial/temporal variability, which need to be characterized to perform a fair comparison.
In this context, in July 2020 a ground-based and airborne campaign was carried out at the experimental agricultural site of Las Tiesas, Barrax, Spain, to evaluate the calibration/validation (Cal/Val) protocols developed between the Spanish National Institute of Aerospace Technology (INTA) and the University of Valencia for the future FLEX mission. During this field campaign, various surface types were investigated, including heterogeneous crop fields such as melon, pepper, alfalfa, onion and corn, as well as homogeneous surface types such as festuca and bare soil fields. In each surface type, five elementary sampling units (ESUs) were assigned, and during the airborne overpass, top-of-canopy and leaf-level measurements were performed simultaneously at each ESU. The airborne system was equipped with an Airborne Hyperspectral Scanner (AHS, Sensytech), a Compact Airborne Spectrographic Imager (CASI 1500i, ITRES), and a high-resolution Chlorophyll Fluorescence sensor (CFL, Headwall). Top-of-canopy measurements were performed with an ASD FieldSpec3 (Malvern Panalytical), and at leaf scale the FluoWat leaf clip, coupled with an ASD FieldSpec3, was used to characterize the optical properties and fluorescence emission at the different ESUs. Furthermore, with the objective of characterizing the true reflectance and solar-induced fluorescence spatial and temporal variability in a continuous and unsupervised mode, a CableCam system was tested for continuous measurements over the festuca and melon fields. The CableCam is a custom system consisting of a semi-autonomous trolley traveling along a 70-meter zip line, equipped with two high-resolution spectroradiometers (one covering the VIS/NIR spectral range and the other covering the spectral range of the O2-A and O2-B atmospheric absorption bands; a Piccolo system), a multispectral camera (MAIA-S2, SAL Engineering), and a thermal camera (ThermalCapture Fusion, TEAX). The CableCam was set 4 meters above the canopy. Moreover, continuous measurements in the festuca field were collected by a FloX system (Hyperspectral Devices) placed on a two-meter tower. Finally, a sun tracker attached to a second Piccolo system was used to monitor the direct and global solar irradiance during the campaign.
In this study, first the uncertainty of each measurement system was estimated by comparing the measured irradiance with the modelled irradiance generated using the radiative transfer model libRadtran. Secondly, the instrument uncertainty was propagated to estimate the true reflectance and fluorescence retrieval error. Additionally, the retrieved products were compared with the FluoWat leaf-level measurements. Thirdly, considering the absolute uncertainty (instrumental and retrieval error), the spatial and temporal variability of the cited Level-2 products was estimated for each studied site. Finally, the airborne imagery's absolute uncertainty and the spectral variability among pixels were used to estimate the number of sampling units needed to capture the spatial heterogeneity of a 300x300 m FLEX pixel. The strategies tested in this study aim to set the roadmap for the Cal/Val protocols of the future FLuorescence EXplorer mission.
The validation of the SIF products that will be provided by the FLEX mission is a challenging issue and has been, and will continue to be, the focus of past and future ESA campaigns and initiatives. In brief, a good validation strategy implies i) that a reliable mean fluorescence signal (full spectrum) is provided for an area of at least 3x3 FLEX pixels, and ii) that the uncertainty associated with the SIF estimation does not exceed the error requirement recommended for FLEX measurements.
Past experience acquired over a series of field campaigns and modelling studies has indicated that the first requisite can potentially be satisfied by different solutions: for example, by exploiting mobile platforms flying at relatively low elevation that can rapidly cover (or sample) the entire area required for FLEX validation, or by integrating a sufficient number of ground-based fixed or mobile measurements over that same area. Here, we will show comparisons of measurements at different scales, errors, caveats and potential improvements. The second requisite poses an additional challenge, considering the largely irreducible uncertainty associated with the use of airborne or ground-based spectrometers and the obvious dependency of such uncertainty on measurement height and on the optical features of the near-surface atmosphere.
We present different validation approaches, site and instrument requirements, and the overall sampling approach for performing the validation of the fluorescence metrics throughout the FLEX mission lifetime. This presentation will summarize all those aspects, bringing together theoretical considerations as well as a large amount of data acquired over the last 5 years using ground/tower-based, drone-based and airborne fluorescence sensors. However, some points still need to be better investigated, and our contribution will also highlight new directions and additional activities to be undertaken.
In the last decade, intensifying efforts on field campaigns to measure solar-induced fluorescence (SIF) at different scales have been made by distinct Universities, Research Institutions, and within the framework of space mission activities like the FLuorescence EXplorer (FLEX), 8th Earth Explorer from the European Space Agency (ESA). In this context, the SIF research community has gathered increasing interest in canopy-SIF monitoring at the tower and Unmanned Aerial System (UAS) scale both to continue advancing vegetation-orientated scientific studies and to provide accurate validation reference datasets for current and future fluorescence-related satellite-derived products.
Focusing on satellite validation purposes, the estimation of reliable canopy-level SIF spectra to be used as 'ground truth' is not straightforward. Enlarging the footprint size is usually achieved by placing the measurement instrument at a certain distance from the canopy, e.g., by using sensors mounted on towers or by tilting the observation angle. However, the larger the optical path between the target and the measurement instrument, the stronger the atmospheric impact, with oxygen absorption effects being the most critical ones to correct.
Oxygen absorption features, particularly the oxygen-A band around 760 nm, have traditionally been used in passive remote sensing experiments at the canopy level to measure the chlorophyll fluorescence signal emitted by plants. Absorption features are advantageous spectral regions in which to measure SIF, since the difference between canopy-reflected solar irradiance and emitted fluorescence is reduced, thereby increasing the sensitivity to detect SIF. However, monitoring SIF at discrete wavelengths may not be sufficient in the context of the FLEX mission, where satellite-derived spectrally-resolved SIF is planned to be estimated.
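For context, the simplest such discrete-wavelength approach is the classical Fraunhofer Line Discrimination (FLD) method, which assumes that reflectance and fluorescence are constant across two nearby bands inside (in) and outside (out) the O2-A absorption feature and estimates the fluorescence F from the corresponding irradiances E and radiances L as

$$ F = \frac{E_{\mathrm{out}}\, L_{\mathrm{in}} - E_{\mathrm{in}}\, L_{\mathrm{out}}}{E_{\mathrm{out}} - E_{\mathrm{in}}} $$

Spectrally-resolved retrievals, as targeted by FLEX, relax exactly these constant-reflectance and constant-fluorescence assumptions.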
In this work, we propose a 3-step physically-based processing chain mainly designed for fluorescence retrievals at the tower level. The proposed processing accounts for (1) the estimation of oxygen absorption through an in-house-developed toolbox, O2_TRANS, given the environmental and observation conditions, (2) the oxygen correction and calibration of the acquired spectra, and (3) the application of a SIF retrieval strategy.
In the proposed approach, only the main parameters describing the illumination and acquisition geometry are required to estimate oxygen absorption effects for a specific tower setup, i.e., the solar zenith angle, the downward-looking observation zenith angle, and the target-sensor distance. Meteorological parameters of air temperature and pressure (T, p) can also be accounted for in the O2_TRANS software package for a more detailed oxygen transmittance modeling. The O2_TRANS toolbox is based on the HITRAN database, https://hitran.org/, producing line-by-line oxygen transmittance modeling for the O16O16, O16O17 and O16O18 isotopologues. The toolbox (publicly available through a git repository) can be easily configured for a specific tower setup. O2_TRANS provides the set of oxygen transmittance spectra needed to correct the pairs of quasi-simultaneous downwelling and upwelling radiance spectra typically measured to estimate SIF.
In a second step, the set of acquired upward- and downward-looking radiance spectra are oxygen-corrected and calibrated. Here, the word calibration refers to the application of a spectral calibration factor required to compensate for errors associated with the algebraic manipulation at the sensor resolution during the oxygen absorption correction. This spectral calibration factor has been empirically formulated to depend merely on the O2_TRANS outputs, avoiding any more advanced approximation-error approach involving model-training dependencies.
Finally, in a third step, the preferred SIF retrieval strategy can be applied, i.e., a differential absorption technique and/or a spectral fitting method; a toy example of the latter is sketched below.
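To make step (3) concrete, a toy spectral fitting method on oxygen-corrected spectra might look as follows; the low-order polynomial bases for reflectance and SIF are illustrative assumptions, and this is not the O2_TRANS chain itself:

```python
import numpy as np

def sfm_retrieval(wl, E_down, L_up, n_poly=2):
    """Toy spectral fitting around the O2-A band: model the oxygen-corrected
    upwelling radiance as r(wl)*E_down/pi + F(wl), with reflectance r and
    fluorescence F as low-order polynomials, and solve the resulting linear
    least-squares problem. Illustrative sketch only."""
    x = (wl - wl.mean()) / (wl.max() - wl.min())   # normalised wavelength axis
    basis = np.vander(x, n_poly + 1)               # shared polynomial basis
    # Columns 0..n model the reflected term; columns n+1.. model SIF.
    A = np.hstack([basis * (E_down / np.pi)[:, None], basis])
    coeffs, *_ = np.linalg.lstsq(A, L_up, rcond=None)
    return basis @ coeffs[n_poly + 1:]             # retrieved SIF spectrum
```

The fit is well-posed because the reflected term inherits the sharp spectral structure of the measured downwelling irradiance, while the smooth SIF term does not.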
This work illustrates with simulated and real datasets a physically-based approach that can be used to correct measurements for oxygen atmospheric effects at the proximal sensing level and is flexible enough to be coupled with any SIF retrieval technique.
Earth observation by satellite is becoming more and more important for all environmental studies and for monitoring compliance with policy restrictions. Most satellite products are based on level 1 top-of-atmosphere radiance measurements. However, the quality of those data is difficult to estimate. Instruments can be characterized and calibrated on the ground, but changes during launch and through ageing must be characterized in space. Uncertainty estimates can be made by comparison with ground truth stations. For such a comparison, the atmosphere must be fully characterized, as level 1 data depend on interactions within the atmosphere.
A different method for estimating uncertainty is the comparison of two satellites with similar instrumentation. If the satellites view the same geographic targets with similar geometries within a short time frame, no atmospheric correction is needed. Differences in instrument characteristics, such as the central wavelength and the width of a channel, must however be considered.
The Sentinel-3 tandem mission was a unique opportunity to compare the same instruments on two different satellites, as Sentinel-3A and 3B were flying in the same orbit with a time shift of only 30 s [Clerc et al. 2020, Lamquin et al. 2020, Hunt et al. 2020]. During the tandem phase, OLCI-B was reprogrammed to mimic the future Earth Explorer 8 FLEX for a number of scenes over Europe. The reprogrammed OLCI (OLCI-FLEX) measured in 45 bands within the visible spectral range, with high resolution in the oxygen absorption bands.
This data set is used to establish and test a validation strategy for the future FLEX mission. One step of this validation strategy is the comparison of OLCI and FLEX level 1 radiances. It is based on a transfer function that modifies radiance measurements at OLCI-FLEX bands towards OLCI-A bands to make the measurements comparable. The transfer function requires the best possible characterization of the atmosphere and the surface, which is obtained by applying an optimal estimation algorithm to the high-resolution OLCI-FLEX measurements. The second step of the transfer function is a simulation of OLCI-A measurements based on the retrieved environmental characteristics using a forward model. The optimal estimation algorithm and the forward model are based on look-up tables calculated with the radiative transfer model MOMO [Hollstein et al. 2012].
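The band-transfer idea can be illustrated with a simple spectral convolution of the high-resolution spectrum against an assumed Gaussian spectral response (the operational forward model uses MOMO look-up tables instead, and real OLCI response functions are not Gaussian):

```python
import numpy as np

def simulate_band(wl_hires, L_hires, centre, fwhm):
    """Convolve a high-resolution (OLCI-FLEX-like) radiance spectrum with a
    Gaussian spectral response to simulate a broader (OLCI-A-like) band.
    wl_hires in nm; centre/fwhm define the assumed target band."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    srf = np.exp(-0.5 * ((wl_hires - centre) / sigma) ** 2)
    return np.trapz(srf * L_hires, wl_hires) / np.trapz(srf, wl_hires)
```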
The application of the transfer function to the OLCI-FLEX data showed a difference between OLCI-FLEX and OLCI-A of about 2%, which agrees well with the findings of Lamquin et al. 2020. Furthermore, possible processing issues could be identified at longer wavelengths. The results show that the transfer function method is sensitive to both instrument differences and processing uncertainties, so good quality control is possible.
Clerc, Sébastien, Craig Donlon, Franck Borde, Nicolas Lamquin, Samuel E. Hunt, Dave Smith, Malcolm McMillan, Jonathan Mittaz, Emma Woolliams, and Matthew Hammond. ‘Benefits and Lessons Learned from the Sentinel-3 Tandem Phase’. Remote Sensing 12, no. 17 (2020): 2668.
Lamquin, Nicolas, Sébastien Clerc, Ludovic Bourg, and Craig Donlon. ‘OLCI A/B Tandem Phase Analysis, Part 1: Level 1 Homogenisation and Harmonisation’. Remote Sensing 12, no. 11 (3 June 2020): 1804. https://doi.org/10.3390/rs12111804.
Hunt, Samuel E., Jonathan PD Mittaz, David Smith, Edward Polehampton, Rose Yemelyanova, Emma R. Woolliams, and Craig Donlon. ‘Comparison of the Sentinel-3A and B SLSTR Tandem Phase Data Using Metrological Principles’. Remote Sensing 12, no. 18 (2020): 2893.
Hollstein, André, and Jürgen Fischer. ‘Radiative Transfer Solutions for Coupled Atmosphere Ocean Systems Using the Matrix Operator Technique’. Journal of Quantitative Spectroscopy and Radiative Transfer 113, no. 7 (1 May 2012): 536–48. https://doi.org/10.1016/j.jqsrt.2012.01.010.
With an ever-changing environment, the need for accurate, timely and high-resolution information on land use/land cover and its changes has increased tremendously over the past years. Until now, however, regional or continental land cover maps have been based solely on high-resolution optical Earth observation data such as Sentinel-2 or Landsat, while the use of SAR data such as Sentinel-1 in the production of large-area land cover maps is still in its infancy.
For this purpose, the European Space Agency (ESA) initiated the WorldCover project, which in October 2021 released a freely accessible global land cover product at 10 m resolution for 2020, based on both Sentinel-1 and Sentinel-2 data. WorldCover contains 11 classes and has been independently validated, with an overall accuracy of 74.4%.
A crucial aspect of WorldCover was the involvement of several end users, such as WRI, UNCCD, FAO, CIFOR and OECD, active in different domains, who provided primary input for all engineering aspects and followed the whole project workflow from design up to validation and uptake. Consequently, WorldCover intends to provide a substantial benefit to various user communities, to expand the established global land cover user base and to support the development of novel services.
In this presentation we will introduce the WorldCover product, illustrate the complementary power of Sentinel-1 and 2 for global land cover mapping, discuss the trade-offs made and lessons learned in the production of the product, zoom in on the user feedback received, and show how the product can be improved in the future.
Decision making at regional, national and international scales can be greatly improved by the availability of regular, consistent and reliable maps of land cover and of how it changes over time and space. With modern improvements in data accessibility and the advancement of computational resources, operationalizing the production of these products at large scale is now achievable. The next challenge comes with building systems which not only meet today's needs, but can also easily incorporate anticipated future improvements.
Geoscience Australia's Digital Earth Australia (DEA), in collaboration with Aberystwyth University (Wales, UK) and Plymouth Marine Laboratory (PML), has built a globally applicable method for generating consistent, large-scale land cover maps from satellite imagery. The approach builds on the Earth Observation Data for Ecosystem Monitoring (EODESM) system (Lucas and Mitchell, 2017), which constructs and describes land cover classifications based on environmental descriptors derived from Earth observation (EO) data. The system's land cover structure is based on the globally applicable United Nations Food and Agriculture Organisation Land Cover Classification System (UN FAO LCCS) taxonomy. This new land cover classification system has been built to be adaptable and modular, allowing for its application to a range of different landscapes, at different spatial and temporal resolutions and with a variety of data sources. One major feature of the new system is native support for the Open Data Cube environment. The system is open source, meaning that the algorithms and code are openly available to researchers anywhere in the world. The structure of the code enables researchers to use their own, regionally specific methods to build land cover products tailored to their needs, while still adhering to the overall UN FAO LCCS framework. It can be run using data from a range of sensors including multispectral optical, radar and LiDAR, as well as custom georeferenced datasets.
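As an illustration only (this is not the EODESM codebase), the following numpy sketch shows how per-pixel environmental descriptors might be combined into top-level LCCS-style classes; all layer names, values and thresholds are invented.

```python
# Invented layers and thresholds, for illustration only (not the EODESM
# code): combine per-pixel environmental descriptors into top-level
# LCCS-style classes with numpy.
import numpy as np

veg_fraction = np.array([[0.9, 0.1], [0.6, 0.0]])   # fractional vegetation cover
water_freq   = np.array([[0.0, 0.8], [0.1, 0.0]])   # annual water observation frequency
cultivated   = np.array([[1, 0], [0, 0]], bool)     # cultivated/managed flag

# LCCS splits first on vegetated vs non-vegetated and aquatic vs
# terrestrial, then on the cultivated/managed distinction.
lccs = np.full(veg_fraction.shape, "bare/artificial", dtype=object)
lccs[water_freq > 0.5] = "natural water"
lccs[(veg_fraction > 0.4) & ~cultivated] = "natural vegetation"
lccs[(veg_fraction > 0.4) & cultivated] = "cultivated vegetation"
print(lccs)
```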
This method is currently being used operationally by both DEA and Aberystwyth University to create national land cover products. For Australia, DEA has generated DEA Land Cover, a high-resolution (25 m) continental, annual land cover map for each year from 1988 to 2020, utilising over 30 years of Landsat sensor data. This data is being used in Australia's environmental economic accounting and is providing valuable insights to researchers and decision-makers. Similarly, Aberystwyth University has worked with DEA and the Welsh Government through the Living Wales project to generate national land cover maps for Wales for four years (https://wales.livingearth.online/) using multiple sensors, including Sentinel-1 and Sentinel-2, and is currently extending the time series back to the mid 1980s using Landsat sensor data (Lucas et al., 2018). In addition, several research bodies across the globe have begun a community of practice to share ideas and algorithms and support each other in implementing this land cover methodology in their own Open Data Cube environments.
References:
Lucas, R.M.; Mitchell, A. Integrated Land Cover and Change Classifications. In The Roles of Remote Sensing in Nature Conservation: A Practical Guide and Case Studies; Díaz-Delgado, R., Lucas, R., Hurford, C., Eds.; Springer: Cham, Switzerland, 2017; pp. 295–308.
Lucas, R., Bunting, P., Horton, C., 2018. Living WALES — National Level Mapping and Monitoring Though Earth Observations, Ground Data and Models. IGARSS 2018 – 2018 IEEE International Geoscience and Remote Sensing Symposium. Presented at the IGARSS 2018 – 2018 IEEE International Geoscience and Remote Sensing Symposium, pp. 6608–6610. https://doi.org/10.1109/IGARSS.2018.8519452
Owers, CJ, Lucas, RM, Clewley, D, Planque, C, Punalekar, S, Tissott, B, Chua, SMT, Bunting, P, Mueller, N & Metternicht, G 2021, 'Living Earth: Implementing national standardised land cover classification systems for Earth Observation in support of sustainable development', Big Earth Data. https://doi.org/10.1080/20964471.2021.1948179
High-quality training data is crucially important for any land cover/land use mapping. Different sources of training data are available, including on-ground observations and visually interpreted very high resolution images. Additionally, existing land cover/land use maps are often used to extract further training samples when creating new maps. Here, we have explored the impact of various sources of training data on the quality of the WorldCover land cover map at 10 m resolution. The key input used for developing the 2020 WorldCover map was the 2015 CGLOPS data set collected through the Geo-Wiki engagement platform. A specific branch of Geo-Wiki (http://geo-wiki.org/) was developed for collecting reference data at the required resolution and grid (PROBA-V UTM 100 m pixels). It showed the pixels to be interpreted on top of Google Earth and Microsoft Bing imagery, where each pixel was further subdivided into 100 sub-pixels of 10 m x 10 m each, in line with the Sentinel-2 grid. Using visual interpretation of the underlying very high resolution imagery, experts (a group of people trained by International Institute for Applied Systems Analysis staff) labelled each sub-pixel with the visible land cover type, including trees, shrubs, grassland, water bodies, arable land, burnt areas, etc. This information could then be translated into different legends using the UN LCCS (United Nations Land Cover Classification System) as a basis. While this data set was a very good input for mapping land cover at 100 m resolution, it was somewhat noisy at the 10 m pixel level, due to shifts in the underlying images used for visual interpretation and to land cover changes that occurred between 2015 and 2020. This was critical for mapping highly heterogeneous areas, such as savannas, with mixed woody vegetation, grasslands and bare land. We have worked on optimizing the training data set, and we present our approach to improving training data quality at 10 m resolution for mapping heterogeneous areas, as well as our iterative approach to improving map quality using spatial accuracy. Finally, we discuss the requirements for training data collection, with a particular focus on highly heterogeneous landscapes.
Semantic segmentation with convolutional neural networks (CNN) has proven an effective method for accurate land use/land cover classification with high-resolution satellite imagery in recent years (Carranza-García et al., 2019; Scott et al., 2017). However, CNNs need full coverage of training labels over the input features. Generating such training label data is time-consuming and expensive, which has limited the technique's applicability for continental-scale mapping efforts.
The LUISA Base Map 2018 (Pigaiani and Batista e Silva, 2021) provides a Europe-covering land use/land cover (LULC) database for the year 2018, distinguishing 46 LULC classes in a hierarchical legend at 50 m resolution. The LUISA dataset is the result of combining several human-annotated LULC datasets of Europe for 2018 and therefore presents a unique opportunity to train a CNN to classify LULC for the entirety of continental Europe, potentially in years other than the source year of its training data.
We explored the potential of the LUISA Base Map for deep learning-based LULC classification at multiple levels of spatial and thematic resolution. To do this, we trained UNET-based CNNs on 30m Landsat, 10m Sentinel-2, and 3m Planet satellite imagery of multiple regions in Europe, recorded in 2018. We did this for each of the four levels in the LUISA legend hierarchy (5, 14, 41, and 46 classes), resulting in 12 models. After training, each model was used to classify LULC in a left-out set of European sample locations in 2018 and 2015.
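One practical step this setup implies is collapsing the fine-grained legend to each coarser hierarchy level so that one labelled raster can serve models at several thematic resolutions. Below is a hedged sketch of that remapping with a lookup table; the class codes are invented and do not reproduce the actual LUISA legend.

```python
# Invented class codes, for illustration only: collapse a fine-grained
# legend to a coarser hierarchy level with a numpy lookup table.
import numpy as np

fine_to_l1 = {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 3, 6: 4}  # fine id -> level-1 id

lut = np.zeros(max(fine_to_l1) + 1, dtype=np.uint8)
for fine, coarse in fine_to_l1.items():
    lut[fine] = coarse

fine_labels = np.array([[0, 2, 5], [6, 1, 4]], dtype=np.uint8)
l1_labels = lut[fine_labels]   # vectorised remap of the whole label raster
print(l1_labels)
```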
Our accuracy assessment consists of a validation on LUISA data and satellite imagery from 2018 for a number of left-out regions, as well as a cross-reference to all observations from the Land Use/Cover Area frame Survey (LUCAS) of both 2018 and 2015. We also calculated the agreement between the LUCAS observations of 2018 and the LUISA Base Map itself, to provide an objective estimate of the maximum attainable accuracy of this validation method.
Our objective was to train a model that can reproduce the LUISA basemap on left-out data, and achieve a similar score on LUCAS validation points from 2015 and 2018. Initial results suggest that the unprecedented spatial and thematic resolution of the LUISA basemap can be reproduced for other years without requiring the costly efforts of creating and combining its component datasets for each target year.
References
Carranza-García, M.; García-Gutiérrez, J.; Riquelme, J.C. A Framework for Evaluating Land Use and Land Cover Classification Using Convolutional Neural Networks. Remote Sens. 2019, 11, 274. https://doi.org/10.3390/rs11030274
Pigaiani, C. and Batista E Silva, F., The LUISA Base Map 2018, EUR 30663 EN, Publications Office of the European Union, Luxembourg, 2021, ISBN 978-92-76-34207-6 (online), doi:10.2760/503006 (online), JRC124621.
Scott, G. J., England, M. R., Starms, W. A., Marcum, R. A., and Davis, C. H. ‘Training Deep Convolutional Neural Networks for Land-Cover Classification of High-Resolution Imagery’, IEEE Geoscience and Remote Sensing Letters, vol. 14, no. 4, pp. 549-553, April 2017. https://doi.org/10.1109/LGRS.2017.2657778
About half of Germany's total land area is used for agricultural production, and about one third is forested. Gathering detailed information on the land cover of these landscapes is of great importance for their ecological and economic valuation. This could improve, for example, the estimation of ecosystem services such as pollination, the quantification of nitrate and nutrient inputs into water bodies, and the determination of forest conditions in times of climate change. Forests in particular play a central role in environmental impact assessments for large infrastructure projects. However, spatially explicit information on tree species and the conservation value of forest types is missing.
High temporal and spatial resolution satellite data of the Copernicus missions allow continuous monitoring of plant dynamics on the land surface. Optical remote sensing is suitable for capturing the spectral characteristics of plant species. Using time series analysis, plant species can be distinguished based on their different phenology. However, approaches are needed to deal with cloud-contaminated data and to take regional biogeographical conditions into account when classifying plant species at the national level.
In this context, we present the classification approach APiC, which has been used to produce maps of agricultural crops and tree species. APiC is a highly automated, data-driven machine learning approach to land cover classification that works dynamically at the pixel level. The thematic dimension of the training data (the definition of the land cover classes) is determined solely by the algorithm, as is its temporal dimension (the time periods for which sufficient cloud-free pixel observations are available). For land cover prediction, a large number of classification models are computed to take the individual cloud cover at pixel level into account. In this way, model performance can be specified for each pixel, providing a more detailed insight into the accuracy of the classification. It follows that APiC neither creates cloud-free image mosaics nor generates artificial reflectance values for gap filling.
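A minimal sketch of the per-pixel idea as we read it from this description (not the published APiC code): pixels are grouped by their pattern of cloud-free acquisitions and one random forest is trained per pattern, so neither mosaicking nor gap filling is needed. All data below are synthetic.

```python
# Synthetic-data sketch of a per-pixel, cloud-pattern-specific modelling
# scheme in the spirit of the approach described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pix, n_dates = 500, 4
ts = rng.random((n_pix, n_dates))            # toy reflectance time series
clear = rng.random((n_pix, n_dates)) > 0.3   # per-pixel cloud-free mask
labels = rng.integers(0, 3, n_pix)           # toy reference classes

models = {}
for pattern in {tuple(row) for row in clear}:
    idx = np.all(clear == pattern, axis=1)   # pixels sharing this pattern
    cols = np.array(pattern, dtype=bool)
    if cols.any() and idx.sum() >= 20:       # enough clear dates and samples
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        clf.fit(ts[idx][:, cols], labels[idx])   # train on clear dates only
        models[pattern] = clf
print(f"trained {len(models)} pattern-specific models")
```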
First, APiC was used to classify crops across Germany based on Sentinel-2 data from 2016. LPIS data served as reference data for training the machine learning algorithm ‘random forest’ and for validating the results. The different growing conditions in Germany were taken into account by carrying out the classification independently in six landscape regions. With an overall accuracy of 88%, a total of 19 crop types were classified, namely winter wheat, spelt, winter rye, winter barley, spring wheat, spring barley, spring oat, maize, legumes, rapeseed, leeks, potatoes, sugar beets, strawberries, stone fruits, vines, hops, asparagus and grassland (www.ufz.de/land-cover-classification). Agricultural crops for subsequent years (2017-2020) are currently being mapped using the same routine. Based on time series analyses, the effect of crop rotation and landscape configuration on pollination performance of bees can be better estimated.
Secondly, the main tree species in Germany were classified using forest inventory data and Sentinel-2 data from 2015-2017. Pine, larch, spruce, Douglas fir, oak, beech, hornbeam, alder and willow were classified with an overall accuracy of 76.6% across three landscape regions. Due to the heterogeneity of forests and the design of the reference data/inventory data collection, the classification of tree species turned out to be more challenging than that of crop plants. We used the tree species classification map together with information on the potential natural vegetation of Germany, the Red List status of forest types and the canopy height (derived from high-resolution LiDAR data) to determine a conservation value of forested areas. Provided by the Federal Agency for Nature Conservation, the potential natural vegetation represents the vegetation of Germany as it would exist under current climate and soil conditions without human influence. This information was essential to assess the tree species classification map from a nature conservation perspective. Our approach does not take into account other important nature conservation aspects such as deadwood occurrence and forest undergrowth. However, through the use of remote sensing, a preliminary conservation assessment of forests could be made at the national level.
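The overlay step described above can be illustrated with a toy numpy sketch; the thresholds, weights and layer values are invented and not the study's actual scoring scheme.

```python
# Invented weights, thresholds and toy values: overlay the species map,
# agreement with potential natural vegetation (PNV) and canopy height
# into a simple per-pixel conservation score.
import numpy as np

species = np.array([[1, 2], [3, 1]])             # classified tree species codes
pnv = np.array([[1, 1], [3, 2]])                 # species expected under PNV
canopy = np.array([[28.0, 9.0], [31.0, 15.0]])   # canopy height, metres

score = np.zeros(species.shape)
score += (species == pnv) * 2   # near-natural composition weighs most
score += (canopy > 25.0) * 1    # tall, old stands add value
print(score)
```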
The climate is changing faster than anticipated, and emergencies and crises are in the news every day. In parallel, the population is growing, exchanges span the world, and the economy needs to develop. In this context, we must keep the balance to protect our planet as both a resource and a patrimony. Sustainable development and the green transition are at the heart of this nexus, and the Green Deal is the Commission's action plan for this long-term endeavour.
With Copernicus, we were collectively visionary decades ago, building public services to monitor the planet and support green economic development. However, we need to go further as the situation evolves: policies increasingly need to be science- and evidence-based, and we need to act quickly. We therefore have the opportunity to benefit from transformative changes within the context of the dual digital and green transition.
Destination Earth (DestinE) is a Commission flagship initiative aiming to develop gradually, over the next 10 years, a highly accurate digital model of the Earth (a digital twin of the Earth) to monitor, simulate and predict natural and human activity, and to develop and test scenarios for more sustainable development and for achieving both the green (Green Deal) and digital (Digital Strategy) priorities of the EU.
DestinE leverages the EU's substantial investments and activities in high-performance computing (HPC), Artificial Intelligence (AI), cloud computing, high-speed connectivity networks and data from multiple sources (space, in-situ and socio-economic data), and brings together European scientific and industrial excellence to achieve the objectives of the initiative. DestinE will also be a new way to interact with users and address individual usages through user-defined scenario building, including impact sectors, and through new forms of cooperation and co-design of best practices.
DestinE is to be understood first as an operational and trustworthy portfolio of digital services, applications and tools to create content, to support decision-making (including in extreme situations), and to anticipate environmental disasters and the resulting socio-economic crises, in order to save lives and avoid large economic downturns.
The presentation will illustrate these concepts and how DestinE will build on the capacities provided, among others, by a digital Copernicus and act in synergy with it. The implementation of the two programmes will benefit from powerful synergies, while both initiatives maintain their focus on their respective, distinct scopes of work and related activities.
The rapid growth of Earth Observation (EO) data, especially in the last decade, combined with the ever-increasing development of analytical theories and tools, has generated a wide range of practical applications covering land, maritime, climate change and atmospheric monitoring. The comprehensive and systematic structuring and processing of this data and their transition to valuable knowledge has helped us model and predict natural processes, improve Atmospheric, Marine and Land monitoring as well as understand the complex dynamics of our planet’s environment.
Current EU data services and repositories offer EO data of staggering volume and diversity. However, data access and use are yet to fully spread beyond EO experts and scientists to the wider industry. EO data services contribute to the European Green Deal data space, facilitating data flows inside the EU and across sectors, for the common good. European programs like Copernicus, Galileo, EGNOS and INSPIRE already provide a significant amount of invaluable EO data that is currently being used by many organizations and SMEs to deliver their value-added services.
In this presentation, we will discuss some existing projects and initiatives offering cloud infrastructures for processing Copernicus data and information, such as DIAS (Data and Information Access Services) and the ECMWF and EUMETSAT European Weather Cloud infrastructure for processing meteorological and satellite data close to their physical location. DestinE, the new EC-coordinated initiative implemented by ESA, ECMWF and EUMETSAT, will utilise diverse HPC and cloud infrastructures and distributed cloud-based storage to provide access and tools for processing EO data. Finally, we will discuss some related EC Research and Innovation Action (RIA) projects like AI4EU, AI4Copernicus and eo4eu. Their key objectives are to provide the means and tools for implementing technologies based on AI/ML, AR/VR and semantic-enhanced knowledge graphs, augmenting the FAIRness of EO data and supporting a sophisticated representation of data entities and their dynamics.
We will talk about ESA and its plans in EO, and why space-based EO is key to European space economy growth and to understanding and tackling climate change. Transformative technologies that we explore and develop at the ESA ɸ-lab@ESRIN, such as Artificial Intelligence and Cloud and Edge computing, are instrumental to this and are transforming the way we will use EO.
Based on CloudFerro's experience from developing and operating DIAS and DIAS-like platforms, we will discuss the challenges, implemented technology solutions and future trends regarding Copernicus platforms, covering important topics such as:
- federation of data, resources and users
- renewable energy for processing
- cloud computing vs HPC for big EO processing
- expansion from EO data to all spatial information
- semantic layers supplementing data
- optimization of storage, computing and network consumption in federated environment
In current Big Data and EO architectures, which utilise various EO and non-EO data sources as well as machine learning and Artificial Intelligence tools, there is an emerging need to establish “trust” mechanisms by ensuring end-to-end data and software traceability, auditability and a provenance chain. Moreover, the “EU Ethics Guidelines for Trustworthy AI”, part of the EU AI strategy, require transparency and explainability of the elements relevant to AI-driven systems (the data, the system and the processing models), as well as that the decisions made by an AI system can be understood and traced by human beings. As such, for AI to be “trustworthy”, the Guidelines state, we must be able to understand why it behaved a certain way and why it provided a given interpretation. Today this is still an open challenge, and a whole field of research, called Explainable AI (XAI), tries to address this issue to better understand a system's underlying mechanisms and find solutions.
What are the implications of these requirements for EO and Copernicus data and applications? Do we need to develop new tools that can ensure the provenance of the datasets (origin and history of modifications), as well as systems for tracking the AI/ML model training processes?
This presentation will focus on EO data provenance, a new concept that allows EO data to be tracked along the digital supply chain and provides end-to-end traceability of data, software tools and models. In particular, the EOGuard project will be presented to showcase the use of KSI blockchain technology for verification of the “authenticity, integrity and time” of EO data products and their traceability throughout different cloud infrastructures, including results of the proof-of-concept implementation on the Copernicus DIAS cloud for Common Agricultural Policy use cases. The question of how a similar technology can be used for auditing and certification of AI-driven EO applications will also be discussed.
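As a purely conceptual illustration of end-to-end traceability (this is not the KSI blockchain API), a provenance record can be hash-chained so that any later modification of a product or of its history becomes detectable.

```python
# Conceptual sketch only, not the KSI blockchain API: each provenance
# entry stores the product hash and the hash of the previous entry, so
# tampering with the product or its history breaks the chain.
import hashlib, json, time

def add_record(chain, product_path, step):
    """Append a provenance entry linked to the previous entry's hash."""
    with open(product_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    entry = {
        "step": step,                       # e.g. "L1 processing"
        "product_sha256": digest,
        "timestamp": time.time(),
        "prev": chain[-1]["hash"] if chain else None,
    }
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    chain.append(entry)
    return chain

# Usage (placeholder file names):
# chain = add_record([], "S1A_product.zip", "ingestion")
# chain = add_record(chain, "S1A_L1.zip", "L1 processing")
```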
The European Space Agency (ESA) Arctic Weather Satellite (AWS), currently planned for launch in 2024, could be a pathfinder mission in expanding the support of the EUMETSAT Polar System Second Generation (EPS-SG) Microwave Sounder (MWS) mission to global and regional numerical weather prediction (NWP) applications.
A successful AWS in-orbit demonstration in the 2024-2025 period would give EUMETSAT the opportunity to expand the product envelope of the EPS-SG mission for users by considering the implementation of a constellation of satellites with microwave sounding capability based on the AWS design.
ESA and EUMETSAT are cooperating on the preparatory activities that could possibly lead to a constellation of flying AWS recurrent models, providing sounding information on global humidity and temperature profiles to the users in near real time. The Phase A activities for the constellation definition are currently ongoing at EUMETSAT, and include various complementary scientific impact assessment studies, performed by CNRS/Météo France, the European Centre for Medium-range Weather Forecasts (ECMWF) and by a Consortium led by Met Norway (MetNO), including the Swedish Meteorological and Hydrological Institute (SMHI) and the Finnish Meteorological Institute (FMI). All studies are considering the same constellation scenarios, in terms of orbits, number of satellites, and instrument sampling.
The main goal of the various studies is to assess the impact of the possible constellation designs on NWP, using different methodologies and focusing on different but complementary aspects. CNRS/Météo France is performing a series of Observing System Simulation Experiments (OSSEs) considering a realistic representation of the global observing system, with focus on global NWP impact. Forecast lead times of up to a few days are considered.
ECMWF is performing an Ensemble of Data Assimilations (EDAs) series of experiments, assessing the benefit of the various constellation scenarios by measuring the reduction in variation across different forecast ensemble members, with focus on global NWP. The EDA methodology requires only data from the various constellations that need to be simulated, including a realistic observation error. Focus is on the short term forecast error reduction.
An additional study, performed at MetNO, SMHI and FMI, includes a series of regional OSSEs aiming at estimating the expected impact of the selected constellations scenarios with focus on regional NWP at high latitudes. Other important aspects to be investigated include the support of Nowcasting (NWC) and a societal impact assessment for high latitude regions and the Arctic.
This presentation describes the objectives and status for each of these studies and gives an outlook of the various future activities to support the definition of the AWS constellation.
The Arctic Weather Satellite (AWS) aims at paving the way towards a constellation of satellites, each carrying a microwave instrument, to feed weather forecasting with short revisit times. However, already the prototype satellite will provide a novel element, as it is equipped with four channels around the 325 GHz water vapour transition. Existing operational microwave radiometers are limited to frequencies below 200 GHz. The upcoming Ice Cloud Imager (ICI) mission will also cover this range. Presently, AWS has an earlier planned launch date than ICI, and the latter instrument will have only three 325 GHz channels. In addition, ICI is a conically scanning instrument with a footprint of 15 km, while AWS is a cross-track scanner with a resolution around nadir of about 10 km.
For clear-sky conditions, AWS's 325 GHz channels will provide somewhat improved precision for humidity by complementing the channels around the 183 GHz transition. There is little information on surface emissivities around 325 GHz, and it is hard to judge whether the addition of this frequency range can help constrain atmospheric humidity when the 183 GHz channels are disturbed by the surface, as happens in dry conditions and at high surface elevations. In the presence of clouds, there is a much clearer synergy between the two sets of channels. In clear-sky assimilation, 325 GHz should be the ideal complement for filtering out cloud-affected 183 GHz data. In simulations, this approach has been shown to clearly outperform existing filtering methodologies. In fact, a significant fraction of the cloud-affected 183 GHz radiances can be corrected with the help of the 325 GHz channels to create synthetic cloud-free counterparts. This indicates that the combination of 183 and 325 GHz allows humidity to be constrained below and inside a broader range of clouds than is now the case with 183 GHz alone. All-sky assimilation should be the optimal way to exploit this synergy between 183 and 325 GHz.
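A deliberately simplified sketch of the filtering idea follows; the numbers and threshold are invented, and operational schemes are calibrated against radiative transfer simulations.

```python
# Invented values and threshold, for illustration only: flag 183 GHz
# pixels as cloud-affected where the observed 325 GHz brightness
# temperature is depressed relative to a clear-sky estimate.
import numpy as np

tb_325_obs = np.array([255.0, 240.0, 262.0])    # observed Tb, K
tb_325_clear = np.array([257.0, 259.0, 261.0])  # clear-sky simulated Tb, K

cloudy = (tb_325_clear - tb_325_obs) > 5.0      # ice cloud scattering depresses Tb
print(cloudy)  # [False  True False]: screen out the second pixel's 183 GHz data
```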
The information on ice hydrometeors provided by the 325 GHz channels also has inherent value, in line with the objective of ICI. Fewer cloud variables can be constrained by AWS, as ICI has additional channels further into the sub-mm range, up to 664 GHz. Still, AWS's partly smaller footprint and varying incidence angle can be beneficial for special studies, e.g. to verify assumptions on ice particle shape and orientation, and it can thus indirectly support ICI.
Context: The Arctic region is poorly served by geostationary observations. The Arctic Weather Satellite (AWS) will provide frequent coverage of the polar regions to support Nowcasting and numerical weather prediction. The AWS Prototype Flight Model (PFM) satellite is the forerunner of a potential constellation of sixteen satellites that would supply a constant stream of temperature and humidity data.
Instrumentation: Onboard AWS, four receivers (19 channels) will perform passive remote sensing of the atmosphere with frequency coverage between 50 and 325 GHz. Radiometer Physics GmbH is in charge of developing and building the 325 GHz front end, featuring 4 channels especially designed for humidity sounding and cloud detection. It relies on extensive heritage from the MetOp-SG Ice Cloud Imager receiver developments, but follows the New Space approach to prove innovative integration concepts in a cost-effective and timely manner. The 325 GHz receiver integrates, in three aluminium-based modules, all DC and RF functionalities that were previously separated (on ICI), allowing for strong mass and size reductions.
New Space Approach: The involvement of private companies, like RPG, and investors in the commercial space sector has led to the “New Space” development. The most significant characteristics of the New Space approach include the use of new production methods, the incorporation of new cutting-edge technology, modularization and standardization, and the use of commercial off-the-shelf parts, as well as the willingness to take on higher risks while remaining flexible enough to react quickly to customer needs. This approach enables us to incorporate novel business ideas in design and manufacturing while saving development time and reducing costs.
Here, we will present the 325 GHz AWS front end, with special focus on the opportunities and implications that New Space has for its design, development and manufacture.
The ESA Raincast study is a multi-platform and multi-sensor study to address the requirement from the research and operational communities for global precipitation measurements. It aims at identifying and consolidating the science requirements for a satellite mission that could complement the existing space-based precipitation observing system and that could optimally liaise with efforts currently made by other agencies in this area. One objective in the study is to provide criteria and guidelines in the design of future missions dedicated to global snowfall quantification.
Improvement in both the monitoring of high-latitude precipitation and in our understanding of the microphysical and dynamical processes that influence high-latitude precipitation patterns, intensity and type must be driven by concerted observations from active radars and passive microwave radiometers. This has recently been demonstrated through the development of machine learning-based algorithms for snowfall detection and retrieval, exploiting global observational datasets built from passive and active microwave spaceborne sensors. In particular, the CloudSat/CALIPSO-based machine learning snowfall retrieval methodology developed for the GPM Microwave Imager (GMI) (SLALOM), which was developed within the EUMETSAT Hydrology SAF in preparation for the EPS-SG Microwave Imager (MWI) mission, has proven very suitable for snowfall detection and retrieval. SLALOM is able to reproduce the CloudSat snowfall climatology, but with better coverage (up to 65°N/S for GMI), outperforming other state-of-the-art GPM products.
The increasing number of operational cross-track scanning radiometers in the future (e.g., the EPS-SG Microwave Sounder (MWS) mission) requires dedicated efforts to study the potential of these radiometers to improve global snowfall monitoring. Moreover, the Arctic Weather Satellite (AWS) mission, carrying a cross-track scanning microwave radiometer covering the frequency range 50-325 GHz, will provide unprecedented spatial and temporal coverage at high latitudes. In this context, SLALOM has recently been adapted and applied to the most advanced cross-track scanning radiometer currently available, the Advanced Technology Microwave Sounder (ATMS), on board Suomi NPP, NOAA-20 and the future JPSS platforms. A dedicated study has been carried out to assess ATMS snowfall observation capabilities at high latitudes, based on an ATMS/CloudSat-CALIPSO coincident observation dataset. The main findings will be presented by: 1) reporting on the different scientific aspects and on the complexity of snowfall detection and quantification in extreme dry/cold conditions (e.g., sea ice/snow cover variability), 2) analysing and providing evidence of such complexity, and 3) proposing observation and retrieval strategies to be adopted in the future to improve detection and quantification of snowfall in the Arctic, also in view of the AWS mission. These findings pave the way towards the definition of synergistic approaches exploiting the future European AWS, EarthCARE and CIMR missions.
The Meteorological institutes of Denmark, Finland, Norway, and Sweden have a long history of working together to advance limited-area NWP in conditions typical to these Nordic countries. The maintenance of the state-of-the-art operational NWP systems is currently taking place in the HARMONIE-AROME framework with close links to Meteo-France and several other European National Weather Services. The development of variational data assimilation in these mesoscale NWP systems facilitates efficient exploitation of satellite measurements. In anticipation of the new polar orbiter launches within the next 2-4 years, preparations are underway for the assimilation of radiances from the Arctic Weather Satellite (AWS).
AWS is designed as a prototype satellite to demonstrate the feasibility of low-cost microwave sounding from a low-Earth orbit. A successful demonstrator mission will pave the way for a constellation of satellites, thereby providing frequent data reception over high-latitude regions. The AWS demonstrator mission is scheduled for launch in 2024. While the AWS satellite has a design lifetime of 5 years, the ESA mission is committed to only one year of operations.
Under an ESA contract, a Nordic Consortium has committed to making an early performance evaluation of the AWS data, with emphasis on regional NWP and high latitudes. The project is to be kicked off in December 2021 and will finish one year after launch. To enable a meaningful evaluation in this very short time frame, significant research and development efforts are required to make the Nordic NWP systems ready to ingest and optimally use the AWS data by the time of launch. In addition, this study will also try to assess what impact a possible future AWS constellation, providing an observation frequency over the Nordic region of up to and possibly beyond 1 hour, might have on short-term weather forecasting and Nowcasting.
In all Nordic Meteorological Services, microwave sounding data assimilation is already performed in clear-sky conditions. The Nordic ESA study will begin monitoring and experimenting with active AWS radiance assimilation using both temperature- and humidity-sensitive microwave sounding channels. At the time of launch, the operational use of these frequencies is likely to be restricted to cloud-free conditions, but substantial efforts are devoted to eventually extending the data use to all-sky conditions and improving usage over sea ice, land and snow surfaces. Additionally, research will be undertaken to investigate the potential use of channels at sub-millimetre wavelengths. For example, enhanced cloud filtering for clear-sky assimilation using the 325 GHz channels will be evaluated.
In order to have access to real-time data with low latency shortly after launch, a ground segment is being set up using the satellite Direct Readout acquisition facilities available within the domain of the Nordic institutes.
In parallel to the development of the AWS demonstrator mission, EUMETSAT is currently conducting Phase 0/A studies for a future constellation of small microwave sounding satellites. This potential AWS constellation is foreseen as an expansion of the EPS-SG programme and is expected to be put forward for decision in 2025. If approved by the EUMETSAT member states, the first satellites of this constellation will start flying around 2029. To support the decision process at EUMETSAT, a number of dedicated global and regional assessment studies will be performed. In a Nordic study starting in early 2022 and finishing in mid-2023, an Observing System Simulation Experiment (OSSE) over the Nordic/Arctic region will be conducted. In addition, a first look at how AWS data can be used to support Nowcasting (independently of what can be provided by short-term regional NWP) will be addressed, and the study will include first steps towards a socio-economic benefit assessment of a future AWS constellation.
Here we will present the development and research plans of the Nordic Consortium to support ESA and EUMETSAT in the evaluation of the performance of the AWS demonstrator mission data and to assess the impact of a possible future AWS constellation, with focus on Nordic regional forecasting.
Observations from microwave (MW) instruments currently provide the greatest impact of the satellite data used in the ECMWF assimilation system. However, with some satellites having already been in flight for many years combined with a lower frequency of new missions, the constellation as it stands now is expected to decline in the coming years. Recent advances in technology have allowed the possibility of launching MW sounding instruments on small satellites with a performance that is expected to be adequate for Numerical Weather Prediction (NWP). These new small satellites are expected to complement a continued backbone of larger, high performance platforms. In this study, which is carried out in collaboration with ESA, we aim to investigate different potential future constellations of small satellites carrying MW sounding instruments with a focus on the optimal design for global NWP. A range of constellations have been chosen where the different designs have been primarily motivated by two aspects. Firstly, by varying the number of satellite platforms up to a large constellation of 20 small satellites, we aim to examine how much further benefit could be achieved with improving temporal sampling. Additionally, the trade-off between humidity sounding channels only or having additional temperature sounding channels will be considered.
The impact of these possible constellations will be evaluated with the Ensemble of Data Assimilations (EDA) method. The EDA consists of running a finite number of independent cycling assimilation systems, in which observations and the forecast model are perturbed to generate different inputs for each member. The small satellite data and accompanying observation errors will be simulated, and the benefit of adding the data to the observing system is measured by the reduction in the spread of the ensemble members, which reflects improvement in the uncertainties of analyses and forecasts.
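The core metric can be sketched in a few lines: the spread is the standard deviation across ensemble members of a forecast field, and the impact is read as the relative spread reduction. The arrays below are toy data, not EDA output.

```python
# Toy-data sketch of the EDA impact metric: ensemble spread is the
# standard deviation across members, and the impact of added (simulated)
# observations is the relative spread reduction.
import numpy as np

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, size=(10, 64, 64))  # 10 members, 2-D field
with_obs = rng.normal(0.0, 0.8, size=(10, 64, 64))  # tighter ensemble

spread_base = baseline.std(axis=0, ddof=1).mean()
spread_exp = with_obs.std(axis=0, ddof=1).mean()
print(f"spread reduction: {100 * (1 - spread_exp / spread_base):.1f} %")
```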
Here we present the EDA methodology and provide an overview of the simulation framework which includes use of operational resolution model fields interpolated to the small satellite observation locations and a scattering radiative transfer model to allow use of the data in an all sky framework. Several EDA experiments will be run in which the different constellations of simulated small satellite data are added to a consistent baseline observing system. This baseline comprises the full observing system but with the number of existing MW sounders reduced to only four satellite platforms - two each in the Metop and JPSS orbits - to reflect a potential reduced constellation of larger platforms in the future. Preliminary evaluation of the first EDA experiments exploring the impact of the small satellite constellations will be presented.
Session C5.02 Scalable platform architectures enabling big EO data analytics in cloud-based platform
Michał Bylicki, Monika Krzyżanowska, CloudFerro
Efficient and scalable computing cloud access to Big Data EO repositories – CREODIAS, WEkEO, CODE-DE and other platforms
The Copernicus programme provides users with a tremendous amount of EO data, available as a public good to anyone at no cost. With an increasing amount of data, we need to provide growing capabilities to process it and use the information derived from it. The size of the Copernicus archive makes it difficult to process the data in a local environment. To foster the usability of EO data, the DIAS platforms (Data and Information Access Services/EO platforms) were launched.
EO platforms are an essential component that enhances the usability of EO data, making it possible to turn data into information. The platforms provide users with computing power that enables fast and efficient processing, and allow users to access, process and analyse Copernicus products in a cloud directly connected to the EO archive. The availability of data, storage and processing in a single place makes it easier to develop and distribute scalable digital services based on EO data.
We have built and operate several EO platforms based on a hybrid cloud in an Infrastructure-as-a-Service model. Our services are based on open-source components such as OpenStack for cloud orchestration, Ceph for storage management, Prometheus and Grafana for monitoring, and Kubernetes for processing. Within this session, we will present the challenges of building cloud platforms with direct access to big EO data repositories. Using the example of CREODIAS, WEkEO, CODE-DE and other platforms built and operated by CloudFerro, we will present the main building blocks and architecture of entirely European cloud platforms based on open-source software. We will show how we created a new service, the EO platform as a service, and discuss how such services can become part of the emerging European federated cloud and data ecosystem.
The federation and multi-platform approach allows users and operators to benefit from the synergy between different platforms, such as a shared multi-PB data repository available from various platforms and elastic cloud components for additional processing needs. At the same time, it generates technical challenges that we need to address. We will present several such challenges and outline our approach to them, focusing on (but not limited to) three main topics:
• Interoperability of the environments:
how to support easy migration between different clouds and provide direct access to the repositories available from different computing clouds, with multi-cloud processing enabled and dynamically adaptable usage and payment models.
• Data access and dissemination:
the multi-petabyte, scalable EO data repositories should provide fast, instantaneous access to many heterogeneous users and algorithms with different needs. At the same time, the data needs to be discoverable and ready for analysis in a context of constantly evolving data offers (new collections, commercial data, products), and users should be supported in data dissemination.
• Cloud efficiency:
how to optimize data access, storage, and processing costs and energy usage both for individual users and for the entire ecosystem.
In our presentation, we will describe our approach to these challenges from the EO platform provider's point of view. As an EO platform provider active in a fast-growing and developing market, we need to constantly add and improve services required by customers in order to compete with the biggest players, and it is crucial to provide more advanced services without competing with our own customers. We will discuss the perspectives and responsibilities of EO platform providers, complement the presentation with a short discussion of the applicability of cloud computing vs HPC for EO processing, and conclude with our vision of the emerging federated cloud and data ‘green’ ecosystem.
CLEOS (Cloud Earth Observation Services) is e-GEOS's satellite data and information platform, offering access to a wide variety of geospatial datasets and enabling the development, testing and large-scale deployment of geospatial data processing pipelines. CLEOS adopts a multi-cloud approach and fosters interoperability through standards and practices such as OpenEO for APIs and STAC for the metadata catalogue.
High Level Architecture.
CLEOS Architecture is modular and it is based on five main components:
• The Marketplace, where Customers can access available products and services through a user-friendly UI;
• The API Layer, which exposes purchasing, processing and data access capabilities through a RESTful web interface, also enabling machine-to-machine integration with external systems;
• The Processing Platform, which orchestrates the scalable execution of Data Processing Pipelines for both Optical and SAR data, using e-GEOS proprietary algorithms and Third-Party tools. It includes the AI Factory for the management and development of AI-based applications;
• The Data Layer, which hosts the metadata catalogue of the federated data sources connected to the infrastructure, together with a pool of locally available assets and resources, including EO and non-EO data;
• The Help Desk, used by e-GEOS Customer Service to provide the necessary support to CLEOS Customers.
CLEOS accesses multiple satellite and non-satellite data sources through a set of data collectors adapted to the interfaces offered by each data/contents provider (e.g. satellite missions Ground Segments). CLEOS also has a multi-cloud orchestration service that allows the deployment of the whole CLEOS platform or of single processing jobs in different commercial cloud infrastructures, including all Copernicus DIAS.
Design principles and innovations.
CLEOS Platform design adheres to technical design principles widely shared in the geospatial community.
1. Multi-Cloud & Data Locality. Space Earth Observation data are notably very large datasets (e.g. Sentinel-2 satellites acquire about 10 TB of data daily). The “data gravity” associated with these huge data archives requires a shift in the processing paradigm, which means bringing the processing close to the data, to minimize network congestion and increase the throughput of the overall system. This is called the Data Locality principle. However, space and geospatial data are not located in a single infrastructure, since today there are several endpoints offering access to the same datasets (e.g. Sentinel-2 data can be accessed in AWS, Google Cloud and five Copernicus DIAS). Additionally, the selection of the infrastructure where to access and/or process data is driven by multiple considerations: not only price/performance, but also constraints from Customers (including the option to process data in a local infrastructure for certain workloads).
This scenario had the following impact on CLEOS design:
• The Data Catalogue needs to register multiple endpoints where the same resource can be accessed. CLEOS has adopted the Spatio Temporal Asset Catalog (STAC) metadata structure, as it allows an extensible definition of spatial assets and resources, enhancing indexing and discovery process standardization (see the sketch after this list);
• The Processing Platform must have the flexibility to pilot processing requests on different cloud platforms, including on-premises infrastructures, allowing also hybrid cloud workloads. CLEOS has adopted the Max-ICS platform, which exploits open source frameworks such as Mesos, Marathon, Puppet, and Terraform, to manage Infrastructure as a Service (IaaS) resources in multiple cloud infrastructures, abstracting the complexity of the platform control and service orchestration layers.
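As a sketch of the multi-endpoint requirement above, the pystac library can represent one scene with several access endpoints as multiple assets on a single STAC item; all identifiers and URLs below are invented.

```python
# Sketch using the pystac library; item id, geometry and URLs invented.
# One STAC item carries several assets pointing at the same physical
# product hosted in different clouds.
from datetime import datetime, timezone
import pystac

item = pystac.Item(
    id="S2B_T32TQM_20200601",
    geometry={"type": "Point", "coordinates": [12.5, 42.0]},
    bbox=[12.4, 41.9, 12.6, 42.1],
    datetime=datetime(2020, 6, 1, tzinfo=timezone.utc),
    properties={},
)
item.add_asset("dias", pystac.Asset(href="s3://dias-bucket/S2B_T32TQM_20200601.zip"))
item.add_asset("aws", pystac.Asset(href="s3://aws-bucket/S2B_T32TQM_20200601.zip"))
```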
2. Elasticity & Scalability. Geospatial data processing use cases involve a great variety of workload types, from large batch processes for multi-temporal analysis over large areas, to synchronous data analytics requests on newly acquired data, or even stream processing. The simultaneous management of such heterogeneous workloads requires strong optimization of resource usage and deployment. This scenario requires CLEOS to be able to scale the available worker nodes up and down elastically, according to the active workload. The CLEOS infrastructure is based on microservices, fragmenting complex workflows into elementary steps that can be executed by independent nodes and easily orchestrated. CLEOS can therefore dynamically scale up those microservices to fulfil an increasing number of processing jobs, ultimately deploying new on-demand virtual hosts that exploit the elastic resource allocation made available by almost all cloud providers.
3. Microservices and Data Processing Pipelines. In CLEOS, all processing services are made available through nodes and pipelines. Nodes (microservices) and pipelines (sets of nodes linked to each other to perform a workflow) are the two main components upon which to build a collaborative ecosystem, in which CLEOS developers can reuse available standalone blocks to create new services with a modular, LEGO-like logic. The definition of Data Processing Pipelines is central in modern platforms, as it allows the design and implementation of a workflow that is activated once data reach the first node of the pipeline and flow through it to produce a result that can be used as the input of another pipeline or delivered to the user.
4. Platform Federation & Interoperability. Today, a platform cannot behave as a standalone system; it needs to be interoperable and federated with other platforms on both sides of the market:
• Supply: the platform needs to be able to access several heterogeneous suppliers of data and services;
• Demand: the platform needs to offer its data and services to heterogeneous customers that, more and more often, will be other platforms and not humans.
To achieve this objective, the API layer plays a central role. In particular, CLEOS has also adopted the OpenEO standard to manage the end-to-end process of searching, configuring, buying, monitoring and accessing data and services. This choice was made to enable OpenEO clients to connect to CLEOS with minimal effort. Through this API definition, it is possible to unify access to the different service backends, abstracting the proprietary implementations made by each vendor with their own internal interfaces.
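For illustration, this is roughly what backend-agnostic access through the openeo Python client looks like; the backend URL is a placeholder and the collection and band names are assumptions.

```python
# Sketch with the openeo Python client; backend URL is a placeholder and
# collection/band names are assumptions. The same client code runs
# against any OpenEO-compliant backend.
import openeo

connection = openeo.connect("https://openeo.example-backend.eu")
cube = connection.load_collection(
    "SENTINEL2_L2A",
    spatial_extent={"west": 12.4, "south": 41.9, "east": 12.6, "north": 42.1},
    temporal_extent=["2020-06-01", "2020-06-30"],
    bands=["B04", "B08"],
)
ndvi = cube.ndvi(nir="B08", red="B04")  # processed server-side, close to the data
ndvi.download("ndvi.tiff")
```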
CLEOS Data Layer
The CLEOS Data Layer is a set of modules dedicated to the storage and cataloguing (Technical Catalogue) of available resources, be they data, metadata or capabilities available through the Processing Platform. CLEOS storage relies on the Object Storage services available in different cloud infrastructures, since most data collections are available at different endpoints. The Technical Catalogue in the Data Layer is therefore of paramount importance, insofar as it gives the Marketplace and the Processing Platform a unique point of reference about what resources/services are available and how/where they can be accessed. These catalogues follow the STAC (SpatioTemporal Asset Catalog) specification, whose aim is to define a common language to describe a range of geospatial information so that it can more easily be indexed and discovered. CLEOS users and developers benefit from this adoption, which spares them from writing new code each time a new data set or service becomes available. The Data Layer also offers methods and interfaces (APIs following the OpenEO standard) to access available data in multiple ways and for multiple purposes (view, download, subset, ...).
CLEOS Processing Platform, Developer Portal & AI Factory
The CLEOS Processing Platform is the module responsible for the execution of all processing tasks, from the simple retrieval and delivery of a product up to the orchestration of complex, long batch processing jobs involving thousands of processing nodes. The Processing Platform operates and deploys the Processing Pipelines created in the Developer Portal. The Developer Portal is the environment where e-GEOS and external developers can build new Processing Pipelines reusing available modules and building blocks, taking advantage of an Interactive Development Environment (IDE) and of the CLEOS API to streamline data access and processing operations.
The Processing Platform is able to:
• manage the provision of the necessary infrastructure resources in a dynamic way with elastic scaling up & down in multiple cloud and on premise infrastructure;
• manage the processing pipelines using a data driven and message-based approach where requests are queued and progressively managed, enlarging or reducing the size of the available worker nodes according to the demand;
• manage DevOps through a Continuous-Integration/Continuous-Development (CI/CD) pipeline;
• monitor resource usage, node by node, pipeline by pipeline and infrastructure-wide;
• centralize all microservice logs so that they can be accessed, filtered and analysed cost-effectively.
While the processing jobs flow as a stream thanks to the event-based backend architecture, the CLEOS Processing Platform also offers the capability to retrieve and control the overall status of a request, managing customer notifications and updates.
Finally, the AI Factory is the platform section dedicated to the development and management of AI models and corpora, where AI developers and users work together to develop, test and scale new AI-based applications.
The AI Factory allows:
• to access a large set of pre-defined AI models or to import custom ones;
• to import training corpora or to build new ones using a simple and intuitive interface;
• to train, re-train models, benchmark performance metrics and manage model versioning;
• to directly include trained AI models into processing pipelines via their relative inference nodes.
Capturing video from satellites is one of the most exciting innovations to hit the remote sensing world in recent times. High-resolution, full-colour EO video is enabling fundamental and disruptive changes for the Geospatial Intelligence and Earth Observation industries. When EO video analytics are combined with complementary geospatial information sources, the results can generate powerful information for end users.
EO-based video delivers a new dimension of temporal information, capturing instantaneous motion on and above the Earth's surface. This allows determination of the direction, speed and rate of change of moving targets on or above the ground. It is a novel observational capability that can change the way we view our planet by enabling new types of analyses, and it could drastically improve situational awareness, critical decision-making and forecasting.
Earth-i and CGI are currently developing the EO Video Analytics and Exploitation Platform (VANTAGE), funded by the European Space Agency, to promote and enable the widespread exploitation of EO video. CGI provides unparalleled expertise in the development and deployment of Exploitation Platforms, whilst Earth-i has a proven track record of delivering powerful analysis of EO imagery and video, using advanced analytics and Artificial Intelligence.
The ultimate aim of VANTAGE is for users to be able to upload their own data and algorithms and use the platform for processing and interrogating that data in conjunction with EO video, and export the results back to their own working environments. VANTAGE is being built using a cloud-based EO platform architecture that enables scalable processing and analytics, tailored for large-volume satellite video data. The platform provides an archive catalogue of over 280 full-colour high-resolution EO videos for the users to interrogate, and a library of predefined analytics tools that are tailored to extract value from EO video, alongside Jupyter Hub integration for collaboration and interactive development of customised user workflows.
To showcase the types of capabilities that the platform can offer, Earth-i and CGI have been developing a number of use cases, including topics such as deforestation, urban sprawl and construction monitoring. Example workflows are being deployed, showing potential users from research, commercial business or public sector organisations how to extract unique information from EO video. A coding challenge was held in November 2021 and another is planned for February 2022, the results of which will be presented.
In this presentation, Earth-i will provide an overview of VANTAGE and its leading edge architecture and functionality, and we will demonstrate some of the key functionality, showing how users can extract cloud-free data from an EO video, or identify and track moving objects in the videos, or construct a 3D model from the video data. We will show how to use EO video data in combination with traditional satellite data acquisitions to derive higher level information products.
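As a hedged illustration of one such analytic (not the VANTAGE implementation), moving objects in a video can be detected with simple background subtraction from OpenCV; the input file name is a placeholder.

```python
# Illustrative only (not the VANTAGE code): detect moving objects in
# video frames with OpenCV background subtraction.
import cv2

cap = cv2.VideoCapture("eo_video.mp4")        # placeholder input file
subtractor = cv2.createBackgroundSubtractorMOG2(history=50)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)            # foreground = pixels that moved
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    movers = [c for c in contours if cv2.contourArea(c) > 25]
    print(f"{len(movers)} moving objects in this frame")
cap.release()
```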
In recent years, advances have been made in making Earth Observation (EO) data available to users, e.g. in Marine Safety, shortly after data capture. However, this requires considerable infrastructure that is usually not available to the wider community. Near real-time (NRT) exploitation of EO data requires reducing the time from sensing to actionable information, including downlinking, processing and delivery of imagery and value-added data. Full utilization of the NRT potential must therefore include optimal exploitation of satellite capabilities, ground station availability, a flexible and scalable processing environment, and a reduction of the necessary computations through intelligent data selection early in the pipeline (for example based on a user's area of interest).
The ESA funded EOPORT project implements a cloud native subscription-based NRT exploitation platform for EO data, demonstrating the concept with live Sentinel-1 data. Today, users obtaining Sentinel-1 data on the Copernicus Open Access Hub or the national ground segments are used to getting data within 3 hours (NRT-3h) or 24 hours (NRT-24h). However, over Europe Sentinel-1 is often operated in passthrough mode (NRT-Direct), in which data is downlinked to a ground station directly following sensor data capture.
Exploitation of passthrough data has traditionally been limited to ground station providers. EOPORT opens up this capability by providing a low-cost entry point to exploiting true NRT data in a public cloud environment. The capabilities complement, and can be federated with, existing platform initiatives based on published EO products, such as the Thematic Exploitation Platforms (TEPs) and the Data and Information Access Services (DIASes).
The authors will demonstrate the state-of-the-art architecture, scalable processing and pilot achievements: downlinking passthrough Sentinel-1 data at KSAT’s ground station in Tromsø, making raw data chunks available within seconds on T-Systems’ Open Telekom Cloud, and producing Level-1 data with Kongsberg Defence & Aerospace’s NRTSAR processor within just a few minutes after sensing. The highly scalable implementation on a Kubernetes cluster ensures that any processing stage is started and run in parallel as soon as sufficient measurement data and auxiliary data are available.
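To make the pattern concrete, here is a minimal sketch (in Python) of the subscription idea described above: raw chunks are handed to parallel workers as soon as they are published, rather than after a complete downlink. The queue and the process_chunk() function are hypothetical stand-ins, not EOPORT's actual interfaces:

```python
# Minimal sketch of subscription-based NRT chunk processing. The in-process
# queue stands in for the platform's message bus; process_chunk() is a
# hypothetical placeholder for a real processing stage.
import queue
from concurrent.futures import ThreadPoolExecutor

chunks: queue.Queue = queue.Queue()

def process_chunk(chunk: bytes) -> None:
    """Hypothetical stage, e.g. handing one raw slice to a SAR processor."""
    print(f"processing {len(chunk)} bytes")

# Simulate a downlink pass publishing three chunks, then a sentinel.
for i in range(3):
    chunks.put(b"raw-sentinel-1-chunk-%d" % i)
chunks.put(None)

with ThreadPoolExecutor(max_workers=8) as pool:
    while True:
        chunk = chunks.get()              # blocks until a chunk is published
        if chunk is None:                 # sentinel: downlink pass complete
            break
        pool.submit(process_chunk, chunk) # start the stage immediately, in parallel
```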
There are many web-based platforms offering access to a wealth of satellite Earth observation (EO) data. Increasingly, these are collocated with cloud computing resources and applications for exploiting the data. Users are beginning to appreciate the advantages of processing close to the data, some maintaining accounts on multiple platforms. The ‘Exploitation Platform’ concept derives from the need to process an ever-growing volume of data. Rather than download the data, the exploitation platform offers a cloud environment with hosted EO data and associated compute and tools that facilitate analysis and processing close to the data.
Users benefit from the data proximity, scalability and performance of the cloud infrastructure, and avoid the need to maintain their own hardware. Infrastructure providers gain an increased cloud user base, and the data hosted in the cloud infrastructure reaches a wider audience.
In order to fully exploit the potential of these complementary resources we anticipate the need to encourage interoperation amongst the platforms, such that users of one platform may consume the services of another directly platform-to-platform. This leads to an open network of resources, facilitating easier access and more efficient exploitation of the rapidly growing body of EO and other data.
Thus, the goal of the Common Architecture is to define and agree a re-usable exploitation platform architecture using open interfaces to encourage interoperation and federation within this Network of Resources. Interoperability through open standards is a key guiding force for the Common Architecture:
• Platform developers are more likely to invest their efforts in standard implementations that have wide usage
• Off the shelf solutions are more likely to be found for standards-based solutions
The system architecture is designed to meet a set of defined use cases for various levels of user, from expert application developers to consumers. The main system functionalities are organised into high-level domain areas: 'User Management', 'Processing & Chaining' and 'Resource Management'.
We are developing an open source Reference Implementation, to validate and refine the architecture, and to provide an implementation to the community.
Our solution comprises an OGC API Processes engine that uses Kubernetes to provide an auto-scaling compute solution for ad-hoc analytics, systematic and bulk processing - supported by a hosted processor development environment as an assist to expert users.
Data and applications are discovered through a resource catalogue offering STAC, OGC CSW & API Records, and OpenSearch interfaces. A user-centred workspace provides catalogue and data access interfaces through which the user exploits their own added value products.
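As an illustration of the standards-based discovery described above, a client might query a STAC API like this (a sketch using the open-source pystac-client library; the endpoint URL and collection id are hypothetical, not the Reference Implementation's actual catalogue):

```python
# Hedged sketch of STAC-based resource discovery; endpoint and collection
# are placeholders.
from pystac_client import Client

catalog = Client.open("https://example.com/stac")   # hypothetical endpoint
search = catalog.search(
    collections=["sentinel-2-l2a"],                 # example collection id
    bbox=[5.0, 50.0, 6.0, 51.0],                    # lon/lat area of interest
    datetime="2022-01-01/2022-03-31",
    max_items=10,
)
for item in search.items():                         # pystac-client >= 0.4
    print(item.id, list(item.assets.keys()))
```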
For platforms to successfully interoperate they must federate user access in order for requests between services to respect the user's authorisation scope and to account for their actions. Access to resources is secured by means of our identity and access management framework, which uses OpenID Connect and User Managed Access standards to enforce all access attempts in accordance with configured policy rules.
This presentation will highlight the generalised architecture, standards, best practice and open source software components available.
Academic technology transfer (TT) is an important source of innovation and economic development. TT success depends on the quick and easy adoption of a scientific innovation by an external partner. At FERN.Lab, the Helmholtz innovation lab for TT, we use serverless Micro-Service Architectures (MSA) as the main structural style for the successful development and transfer of Earth Observation (EO) technology.
MSA allows us to quickly bootstrap from a research prototype to a product that fits the requirements of different customers, ranging from public governmental sectors to private companies, in a matter of months. The adopted architecture is easy to integrate into existing workflows, suitable for agile project management, understandable and reusable to facilitate collaborations, and modular enough to take advantage of new technologies. The latter allows us to integrate various geospatial processing services which contribute to the overall functionality.
At FERN.Lab we believe that TT activities can only have a large impact if innovation is continuously integrated into the product development cycles. An MSA allows us to isolate the work of a scientist and abstracts the scientist from the complexity of an infrastructure such as a cloud. Such isolation, combined with the computing environments offered by commercial cloud providers (Google, Amazon, Microsoft), makes it possible to leverage the cloud's main features without requiring users or developers to have deep knowledge of distributed computing, and thus lets the scientist focus solely on the core of the technology.
The ecosystem of computation environments is rich: Amazon provides AWS Lambda and Fargate, Google offers App Engine and Cloud Run, etc. Besides compute environments for micro-services, cloud providers such as Amazon and Google also host large archives of remote sensing data, such as Sentinel-2 and Landsat, which provide easy and efficient access to terabytes of data. Via standard catalog specifications, such as the SpatioTemporal Asset Catalog (STAC), users can easily identify which blocks need to be downloaded or, in the case of Cloud Optimized GeoTIFFs (COGs), streamed for processing. This easy and efficient access to data lets micro-services run in stateless containers and thus opens the door to serverless micro-services.
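The COG streaming pattern can be sketched in a few lines; the asset URL below is a placeholder, and the windowed read is what allows a stateless micro-service to fetch only the blocks it needs:

```python
# Sketch of streaming a window from a Cloud Optimized GeoTIFF over HTTP.
# The URL is a hypothetical placeholder; rasterio/GDAL issue range requests
# for just the internal tiles overlapping the window.
import rasterio
from rasterio.windows import Window

cog_url = "https://example.com/S2_B04.tif"   # hypothetical COG asset URL
with rasterio.open(cog_url) as src:
    # Read a 512 x 512 pixel window without downloading the full scene
    window = Window(col_off=1024, row_off=1024, width=512, height=512)
    red = src.read(1, window=window)
print(red.shape, red.dtype)
```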
We present exemplary uses of MSA for habitat classification, land cover prediction, and land surface temperature homogenization. Our work shows the benefits and challenges in accessing and processing large EO data sets from different satellite sensors (Sentinel-1 and -2, Landsat-7, -8, and -9, and ECOSTRESS) using different compute environments on Google Cloud. For each of the use cases, the scientist’s algorithm is containerized and registered to be deployed either as a micro-service or as a function. For easy portability and interoperability, the algorithm’s interface is a Web API.
The algorithm needs to be stateless, and its memory and storage footprint should not exceed the resources of the computation node. Although the resources available for event-driven computation have increased considerably (recently, for example, Google Cloud Run doubled the available memory to 16 GB of main memory), it is not always possible to deploy a service. A straightforward solution is often data partitioning, but in some cases this is still not enough. In these situations the algorithm is instead registered as a function in OpenFaaS. With OpenFaaS on Kubernetes the resource limits are easily overcome, at the cost of a more complex setup and higher processing costs.
With a service/function registered in a compute environment, its deployment is triggered by calls from a queue system. Such calls are tasks containing a set of parameters defined either by a user via a UI or by a workflow from an external partner. With different levels of integration and involvement of external users and developers from four companies and one NGO, we show how MSA has helped us develop and transfer efficient, scalable, and cost-effective products.
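As a minimal sketch (not FERN.Lab's actual code) of the Web API wrapper mentioned above, a containerized algorithm could be exposed with FastAPI as follows; the endpoint and parameter names are illustrative assumptions:

```python
# Minimal sketch of a stateless Web API wrapper around a scientist's
# algorithm, suitable for Cloud Run, Fargate or OpenFaaS. Endpoint and
# parameter names are hypothetical.
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Task(BaseModel):
    scene_url: str          # e.g. a COG or STAC item to process
    aoi: List[float]        # bounding box [min_lon, min_lat, max_lon, max_lat]

def run_algorithm(scene_url: str, aoi: List[float]) -> dict:
    """Placeholder for the containerized scientific algorithm."""
    return {"status": "done", "scene": scene_url}

@app.post("/process")
def process(task: Task) -> dict:
    # Stateless: everything the call needs arrives in the request body, so
    # any number of container replicas can serve tasks pulled from the queue.
    return run_algorithm(task.scene_url, task.aoi)
```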
The interaction between plasma of solar origin and the Earth’s magnetosphere-ionosphere (MI) system is at the base of several phenomena relevant to Space Weather. Among them, the MI coupling plays a fundamental role, as it allows energy and momentum to be exchanged between magnetosphere and ionosphere, and thus provides a mechanism able to modify the energy budget of the ionosphere. In fact, part of the energy injected into the ionosphere may be converted into mechanical energy and dissipated via Joule heating. Such a dissipation may significantly affect temperature, density, and the composition of the upper ionosphere, resulting, for instance, in an increased atmospheric drag affecting the satellite orbits. The MI coupling is mainly realized by means of field-aligned currents (FACs) that connect the magnetosphere with the high-latitude ionosphere. Thus, understanding the dissipation of these currents in the ionosphere may help to clarify their contribution to the energetic budget and to shed light on some physical processes still unclear. Generally speaking, the work done by electromagnetic fields on a charge distribution is given by the dot product of current density, J, and the electric field, E, which describes the conversion of electromagnetic energy into mechanical energy. In the direction parallel to the geomagnetic field, under the reasonable assumptions of quasi-neutrality and neglecting higher order terms, the only relevant contribution to the conversion term is given by Ohmic dissipation and the consequent Joule heating. This quantity can be computed by using Swarm data instead of models. Here, for the first time, we show statistical maps of power density features dissipated by FACs via Joule heating at Swarm A altitudes (about 460 km). This is realised by using six-year time series of electron density and temperature data acquired by the Langmuir Probes onboard the Swarm A satellite at 1 s cadence together with the field-aligned current density product provided by the ESA’s Swarm Team at the same cadence. Maps of the same quantity under different levels of geomagnetic activity are also shown and discussed in light of the previous literature.
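In compact form, the dissipated power density described above can be restated as follows (our sketch of the stated assumptions, with the parallel Ohm's law supplied by us):

```latex
% Volumetric conversion of electromagnetic into mechanical energy:
W \;=\; \mathbf{J}\cdot\mathbf{E}
% In the direction parallel to the geomagnetic field, assuming Ohm's law
% J_parallel = sigma_parallel * E_parallel, the Ohmic (Joule) dissipation is
W_{\parallel} \;=\; J_{\parallel} E_{\parallel} \;=\; \frac{J_{\parallel}^{2}}{\sigma_{\parallel}}
% with sigma_parallel computed from the Langmuir-probe electron density and
% temperature, and J_parallel taken from the Swarm FAC product.
```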
This work is partially supported by the Italian National Program for Antarctic Research under contract N. PNRA18 00289-SPIRiT and by the Italian MIUR-PRIN grant 2017APKP7T on "Circumterrestrial Environment: Impact of Sun-Earth Interaction".
Height-integrated ionospheric Pedersen and Hall conductances play a major role in ionospheric electrodynamics and Magnetosphere-Ionosphere (MI) coupling. The Pedersen conductance in particular is a crucial parameter for estimating ionospheric Joule heating, which is an important energy sink of the whole MI system. Increased Joule heating during geomagnetic storms and substorms has a large impact on the thermospheric chemical composition and circulation, and can also affect the atmospheric drag experienced by satellites in low Earth orbits. Unfortunately, the conductances are rather difficult to measure directly over extended regions, so statistical models and various proxies are often used.
We discuss a method for estimating the Pedersen conductance from magnetic and electric field data provided by the Swarm satellites. We need to assume that the height-integrated Pedersen current is identical to the curl-free part of the height-integrated ionospheric horizontal current density, which is strictly valid only if the conductance gradients are parallel to the electric field. This may not be a valid assumption in individual cases but could be a good approximation in a statistical sense. Further assuming that the cross-track magnetic disturbance measured by Swarm is mostly produced by field-aligned currents and not affected by ionospheric electrojets, we can use the cross-track ion velocity and the magnetic perturbation to directly estimate the height-integrated Pedersen conductance. The same relationship is valid both in the case of quasi-static MI coupling and idealized Alfvén wave reflection at the ionospheric boundary.
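Under these assumptions the estimate reduces to a simple ratio; the following is our sketch of the implied relation, not necessarily the authors' exact formulation:

```latex
\Sigma_{P} \;\approx\; \frac{\Delta b}{\mu_{0}\, E_{\perp}},
\qquad
E_{\perp} \;\approx\; \lvert \mathbf{v}_{i} \times \mathbf{B} \rvert \;\approx\; v_{i} B
% where \Delta b is the cross-track magnetic disturbance attributed to FACs
% and v_i is the cross-track ion drift measured by Swarm.
```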
We present results of a statistical study utilizing 7 years of ion velocity and magnetic field data from the Swarm-A and Swarm-B satellites. Careful selection and filtering of the data are required, which tends to limit the analysis to regions where the ion velocity and magnetic disturbances are reasonably strong, i.e. close to the auroral ovals. Statistical Pedersen conductance maps are derived for the Northern and Southern auroral regions during different seasons and levels of geomagnetic activity. We discuss possible applications of the results and possible limitations of the method.
Extreme lightning activity can cause electromagnetic fluctuations that propagate from the lower atmosphere up to the ionosphere. This process requires conversion of the electromagnetic perturbation propagating in the neutral atmosphere into an ionospheric plasma wave, which for ultralow frequencies (ULF) can strongly attenuate the wave amplitude. Additionally, thunderstorms have an observable effect on the ionosphere, manifested by small-scale irregularities in the ionospheric plasma.
The ESA Swarm mission has been designed to provide high-precision registrations of the Earth's magnetic field. Since its launch in November 2013, the constellation, consisting of three LEO satellites, has measured the magnetic signals that stem from Earth's core, mantle, crust, oceans, ionosphere, and magnetosphere. However, the instruments onboard Swarm allow the main goals of the mission to be expanded and provide the capability to capture signals originating from strong lightning events. In particular, Swarm has proved quite effective in the detection of transient luminous events (TLEs). In contrast to previous and ongoing satellite missions fully dedicated to TLEs, it is not a straightforward task for Swarm to capture such signals on a regular basis, the 20 ms time resolution being the main limitation. However, Swarm allows for the detection of continuing currents following the most intense +CG strokes. Having simultaneous registrations of ionospheric plasma parameters and magnetic field scintillations, we are able to conduct a systematic study to answer the question of whether TLEs trigger perturbations in ionospheric plasma parameters in the thunderstorm-active region.
The study concentrates on the longitudinal sector of both Americas, spanning regions with the most frequent occurrence of ionospheric scintillations as well as lightning hotspots. In a joint analysis of in-situ registrations of plasma parameters and magnetic field components from the ESA Swarm mission, together with lightning observations derived from the GLM instrument onboard the meteorological GOES-R satellite, we provide a classification of two types of ionospheric responses to intensified thunderstorm activity. We reveal links between lightning and ionospheric wave properties that suggest a real causal relationship between the two phenomena.
Finally, in the context of the upcoming launch of the Meteosat Third Generation (MTG) Lightning Imager, our analysis shows that synergy between in-situ ionospheric observations from low-Earth-orbit satellites and a lightning imager will be beneficial for better understanding of upper-ionospheric irregularities developing in African thunderstorm cells, as well as for the detection of severe thunderstorms in the European region.
Ionospheric irregularities are structures or fluctuations of plasma density at different scale sizes. These irregularities can disrupt radio waves and produce errors for space-based or ground-based technologies that depend on GNSS/GPS signals.
Post-sunset ionospheric plasma irregularities are a common characteristic of the equatorial ionosphere. These irregularities, associated with plasma bubbles, are defined as density depletions relative to the background plasma as determined by in situ measurements. The physical mechanism which leads to the formation of plasma bubbles has been studied extensively. A seed density perturbation leads to the nonlinear Rayleigh-Taylor instability in the bottom side of the F region ionosphere. The plasma perturbation can grow spatially and reach a higher altitude in the ionosphere. One of the main features of a plasma bubble is the existence of power-law scaling in its energy power spectrum. This feature suggests that the spatial behavior of plasma bubbles can be compared and modeled with turbulent models.
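As a hedged illustration of the power-law check mentioned above (not the study's actual pipeline; the cadence and the synthetic series are assumptions), a Welch spectrum with a log-log slope fit could look like:

```python
# Illustrative sketch only: checking for power-law scaling in a plasma
# density series with a Welch periodogram. Sampling frequency and synthetic
# data are assumptions standing in for Swarm Langmuir probe measurements.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 2.0                                        # assumed probe cadence [Hz]
ne = np.cumsum(rng.standard_normal(4096))       # synthetic density fluctuations

f, psd = welch(ne, fs=fs, nperseg=1024)
sel = (f > 0.01) & (f < 0.5)                    # fit range for S(f) ~ f**alpha
alpha = np.polyfit(np.log(f[sel]), np.log(psd[sel]), 1)[0]
print(f"spectral slope alpha = {alpha:.2f}")
```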
The bifurcation process is a common feature of an evolving and turbulent flow. This process has been considered in the modeling of plasma bubbles in several studies. Bifurcated plasma bubbles have been detected in situ with the San Marco D satellite in the west-east direction. Some examples of bifurcation-like processes have been observed in field-aligned plasma bubbles using the Swarm Langmuir probes. In a bifurcated plasma bubble process, we need information about the measured plasma density and the direction of flow to understand how the topology of a plasma bubble evolves, and to distinguish a bifurcated plasma bubble from a shrinking or expanding one. We use the Swarm Electric Field Instrument data to examine the flow direction in one of the bifurcation-like plasma bubble events. In most cases, we detect an Alfvén wave inside the walls of plasma bubbles. In this study, we compare the properties of the low-frequency fluctuations in adjacent bifurcated regions.
Most satellites are equipped with magnetometers as part of their attitude and orbit control system (AOCS). The measurements taken by these platform or navigational magnetometers are not only useful for coarse attitude determination during satellite operation but can also be used for scientific investigations, provided they have been properly calibrated.
This contribution describes the preprocessing and calibration of platform magnetometer data collected by CryoSat-2, GRACE and other low-Earth-orbiting satellites, and how they can be used for studying the dynamics of electric current systems in Earth's environment. We will also report on experiments in calibrating magnetic data from larger satellite fleets, and on how existing and future ESA LEO satellites can be used for mapping geospace and for space-weather applications.
As an application we show a combination of magnetic field observations from the satellites CryoSat-2, GRACE, GRACE-FO with magnetic data taken by ESA’s Swarm satellite trio, in order to obtain an improved description of the space-time structure of ionospheric and magnetospheric current systems.
Fully calibrated data from CryoSat-2 (for the years 2010 - 2019) and the GRACE satellite duo (for the years 2008 - 2017) are freely available as daily files in CDF format at https://swarm-diss.eo.esa.int/#swarm%2F%23CryoSat-2 and ftp.spacecenter.dk/data/magnetic-satellites/GRACE/.
The actual state and variability of the Earth’s ionosphere are important aspects of the space weather system. Their understanding is crucial for ionospheric modelling and building the capability of predicting and mitigating severe space weather effects. One example of such effects is the degradation of communication or positioning with the Global Navigation Satellite Systems (GNSS), which is due to ionospheric plasma irregularities impacting the propagation of radio waves. Ionospheric irregularities at various scales are a result of dynamic processes in the ionosphere.
Through the project Swarm Variability of Ionospheric Plasma (Swarm-VIP), as a part of the Swarm+ 4DIonosphere initiative, we provide spatiotemporal characteristics of ionospheric plasma at different geomagnetic latitudes and uncover coupling between various scales in response to geomagnetic conditions. The project employs data from the Swarm satellites, such as the IPIR dataset [1,2], as well as auxiliary datasets. Taking advantage of the orbital characteristics of the Swarm satellites and using complementary scale analysis techniques such as wavelets or Fast Iterative Filtering, we ascertain the dominant scales at given geomagnetic conditions. Our focus is primarily on the characteristics of ionospheric plasma, i.e., plasma density and total electron content as measured respectively by the Langmuir probes and GPS receivers onboard Swarm satellites.
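As a hedged sketch of the scale analysis mentioned above, a continuous wavelet transform (here with PyWavelets) could be applied to a plasma-density series; the cadence, the Morlet wavelet choice, and the synthetic data are illustrative assumptions, not the Swarm-VIP processing itself:

```python
# Illustrative continuous wavelet transform of a density series to find
# dominant fluctuation scales. Cadence and data are placeholders.
import numpy as np
import pywt

dt = 0.5                                   # assumed 2 Hz probe cadence [s]
ne = np.random.default_rng(1).standard_normal(2048)  # stand-in density series

scales = np.arange(1, 128)
coefs, freqs = pywt.cwt(ne, scales, "morl", sampling_period=dt)
# Time-averaged wavelet power indicates which scales dominate overall
power = np.abs(coefs) ** 2
dominant_scale = scales[power.mean(axis=1).argmax()]
print("dominant scale index:", dominant_scale)
```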
The result of Swarm-VIP is a semi-empirical model for the ionosphere based on generalized linear modeling. The model determines the probability of occurrence of different scales in ionospheric plasmas with respect to geomagnetic conditions and the magnetosphere-ionosphere coupling. It also gives insight into ionospheric structuring and the coupling between scales. The model can be understood in the context of space weather effects, such as scintillations of trans-ionospheric radio signals. The Swarm-VIP model is provided globally, along the whole orbits of the Swarm satellites, with special emphasis on high latitudes, the Arctic and Antarctica, and the European sector, where a validation study is carried out with a network of ground-based instruments.
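A minimal sketch of the generalized-linear-model idea, under our own illustrative assumptions (the predictor choices and synthetic data are placeholders, not the Swarm-VIP model itself):

```python
# Binomial GLM relating the occurrence of a given fluctuation scale to
# geomagnetic drivers. Predictors and data are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([
    rng.uniform(0, 9, n),        # e.g. Kp index
    rng.uniform(300, 800, n),    # e.g. solar wind speed [km/s]
])
# Synthetic binary outcome: "scale present" with Kp-dependent probability
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-(0.5 * X[:, 0] - 2)))).astype(int)

model = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial())
result = model.fit()
print(result.summary())
```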
The Swarm-VIP project is funded by the European Space Agency in the Swarm+ 4DIonosphere framework (Contract No. 4000130562/20/I-DT).
References
[1] A. Spicher, L.B.N. Clausen, W.J. Miloch, V. Lofstad, Y. Jin, and J.I. Moen, Interhemispheric study of polar cap patch occurrence based on Swarm in situ data, J. Geophys. Res. Space Physics, 122, (2017), pp. 3837–3851, doi:10.1002/2016JA023750.
[2] Y. Jin, C. Xiong, L. B. N. Clausen, A. Spicher, D. Kotova, S. Brask, G. Kervalishvili, C. Stolle, W.J. Miloch, Ionospheric plasma irregularities based on in-situ measurements from the Swarm satellites. J. Geophys. Res. Space Physics, 124, (2020), e2020JA028103. https://doi.org/10.1029/2020JA028103
Covering almost half of the European Union (EU), farmland has undergone major declines in biodiversity due to the intensification of agriculture. This loss affects the delivery of biodiversity-mediated ecosystem services such as pollination, in turn affecting crop yield. In the EU, the implementation of agri-environmental schemes through the Common Agricultural Policy (CAP) aims to mitigate biodiversity loss in farmland. However, the outcomes of these measures are context dependent and vary in time and space. Proper targeting of such measures and monitoring of their outcomes require knowledge of the local ecosystem down to the species level. For example, the presence of key plant species groups can determine the outcome of flower strips, field margins and semi-natural grassland (Cole et al., 2020).
Recent technological developments have led to computer-vision-based plant species identification tools with increasing accuracy. Applications such as Pl@ntNet (Affouard et al., 2017) allow users to determine the species of a plant from a picture. Currently, these tools are mostly used in citizen science projects and by the public. The increasing accuracy of such methods presents an opportunity to integrate automated species recognition into larger monitoring schemes, potentially providing biodiversity data at spatial and temporal scales complementary to remote-sensing-based monitoring of the agricultural landscape. In the future, integrating such methodologies into monitoring frameworks could contribute to CAP agri-environmental schemes and baselines for biodiversity in European farmlands.
In this study we evaluate the integration of computer-vision-based methods into larger biodiversity monitoring schemes. Using the LUCAS grassland survey (Sutcliffe et al., 2019) as an example, we aim to reproduce variables collected in the field (number of flowering plants, presence of key species) with image recognition algorithms.
Images acquired during the survey, representing grasslands throughout the EU, are used to create a training (200 images) and test (50 images) dataset. We train an object detection algorithm to recognize and locate flowers (Faster R-CNN with pre-trained COCO weights). All flowers in the images are delineated and validated with the CVAT tool. To push the accuracy of the model, training is done in two steps. In the first step we train on full images and perform hyperparameter tuning to determine the best model settings, with performance metrics computed on the test set. Since these kinds of object detection architectures struggle with densely populated images, in a second step we split the dataset into smaller slices and revalidate the objects inferred by the network with CVAT. With this new split dataset, we retrain the model for 10K iterations. Once trained, we extract the final accuracy metrics for the model.
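For readers unfamiliar with the setup, a condensed torchvision sketch of a COCO-pretrained Faster R-CNN with a single "flower" class (plus background) follows; the dataset wiring is omitted and the hyperparameters are illustrative, not the study's tuned values:

```python
# Condensed sketch: fine-tuning a COCO-pretrained Faster R-CNN for one
# object class ("flower"). Dataset/DataLoader wiring is omitted.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
# Replace the detection head: 2 classes = background + flower
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()
# for images, targets in data_loader:       # CVAT-annotated LUCAS images
#     loss_dict = model(images, targets)    # classification + box losses
#     loss = sum(loss_dict.values())
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```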
Using the final model, we extract all flowers predicted from the selected set of LUCAS grassland images. Each detected flower is then used as an input to the Pl@ntNet application, an image classification neural network, to determine the species. Using a threshold on the Pl@ntNet probability score to guarantee certainty of the predictions, we compile a list of species in each image. After matching the species legends with the LUCAS survey, we can compare species detected using computer vision methods with the species reported by surveyors. By comparing our results with the surveyor data, we evaluate the accuracy of computer-vision-based monitoring of grasslands. We discuss the limitations of this methodology and provide recommendations on how to better integrate computer-vision-based tools in large-scale biodiversity monitoring of grassland flowering plants.
Successful integration of automated species recognition into monitoring schemes would allow for large scale and high frequency monitoring of biodiversity in agricultural landscapes.
References:
Cole, L. J., Kleijn, D., Dicks, L. v., Stout, J. C., Potts, S. G., Albrecht, M., Balzan, M. v., Bartomeus, I., Bebeli, P. J., Bevk, D., Biesmeijer, J. C., Chlebo, R., Dautartė, A., Emmanouil, N., Hartfield, C., Holland, J. M., Holzschuh, A., Knoben, N. T. J., Kovács-Hostyánszki, A., … Scheper, J. (2020). A critical analysis of the potential for EU Common Agricultural Policy measures to support wild pollinators on farmland. Journal of Applied Ecology, 57(4), 681–694. https://doi.org/10.1111/1365-2664.13572
Sutcliffe, L. M. E., Schraml, A., Eiselt, B., & Oppermann, R. (2019). The LUCAS Grassland Module Pilot – qualitative monitoring of grassland in Europe. Palearctic Grasslands, 40, 27–31. https://doi.org/10.21570/EDGG.PG.40.27-31
Affouard, A., Goëau, H., Bonnet, P., Lombardo, J., Joly, A., Lombardo, J., & Joly, A. (2017). Pl@ntNet app in the era of deep learning.
Dating of phenological events, such as the beginning of greening in spring, the falling of leaves in autumn, and the growing season length, is essential for understanding ecosystem dynamics and the interplay of different species. Therefore, land surface phenology (LSP) and phenological date estimates using remote sensors have been the subject of many studies (Berra et al., 2021), especially using optical satellite imagery (e.g., Landsat, Sentinel-2, AVHRR, MODIS, SPOT, MERIS, VIIRS). Orbital sensors have the capability to provide global-coverage data for large-scale phenological research. However, accurate estimates of the beginning of spring, and especially of the autumn timing, remain a challenge. For instance, many phenological events cannot be directly detected at the temporal and spatial resolutions presently available from satellite imagery. Furthermore, different methodologies to derive LSP can be applied, resulting in distinct phenological date estimates and leading to substantial uncertainties in understanding the interaction of phenological responses and ecosystem events and changes (Richardson et al., 2018; Berra et al., 2021). Overall, LSP estimated from satellite images requires comparison with independent, high-resolution spatio-temporal data to support phenological modeling efforts. Over the past few years, the lack of adequate ground-scale phenological observations has been highlighted as a major challenge for interpreting and validating phenological date estimates derived from satellite time series (Cleland et al., 2007; Richardson et al., 2018; Berra et al., 2021).
Recently, close-range remote sensing technology has advanced significantly in terms of data accuracy and automatic data collection and storage. This has enabled robust continuous monitoring of a wide range of phenomena, like forest vegetation dynamics. Therefore, close-range sensors, such as PhenoCams (Richardson et al., 2018) and terrestrial laser scanners (Calders et al., 2015; Campos et al., 2021), can be considered the missing link between automated ground-based observations and satellite-based imagery. Here, we present a LiDAR phenology station (LiPhe) built by the National Land Survey of Finland. The LiPhe station is a potential source of accurate ground-scale phenological time-series observations that supports comparisons and analyses between satellite-derived spectral observations and tree-level phenomena in boreal forests. The LiPhe station was installed in February 2020 at a traditional 111-year-old (since 1910) Finnish research forest (Hyytiälä, 61°51´N, 24°17´E). The LiPhe station comprises a Riegl VZ-2000 scanner (RIEGL Measurement Systems, Horn, Austria), which has provided a long-term TLS time series (1.5 years) with high spatial (cm-level) and temporal (hour-level) resolution. Additional information on tree growth is provided by 50 point dendrometers that measure tree growth at micrometer level at a 15-min temporal resolution. More technical details about the LiPhe station can be found in Campos et al. (2021). The main features of the LiPhe station data that support its use as a complementary ground reference source for satellite-derived spectral observations are the high-frequency data acquisitions combined with robust, illumination-invariant observations at any time of the year. These properties allow the LiPhe station to detect phenological changes at the individual-tree level over an area of about four hectares. The phenological dates can be estimated from the time series collected with the LiPhe station by analyzing the quantitative spatial (canopy volumetric changes) and radiometric (TLS reflectance response) variations in the data over time. These variations and their magnitudes can be further cross-correlated with other ground observations, such as continuous weather parameters and the site’s species structure, in order to explain the main phenomena driving the phenological changes.
In this work, we aim to evaluate the temporal accuracy of phenological date estimates from the LiPhe station data and to assess their usability in calibrating large-scale phenological observations in boreal forests. To this end, we compare different phenological date estimates derived from the LiPhe station and from optical satellite imagery acquired over the same study area, based on machine learning methods. Machine learning has been shown to have great potential for modeling time-series datasets and detecting phenological changes (Zeng et al., 2020). A viable approach for phenological phase detection is to build a statistical model for each of the key phenological dates as a regression dependent on a number of spectral, meteorological, and possibly spatial characteristics. State-of-the-art statistical models, such as multiple linear regression, principal component regression, and Random Forest regression, were evaluated based on their predictive performance, with the best model selected for further applications (Czernecki et al., 2018). The most influential factors driving the phenological dates can then be distinguished using feature selection, and a statistical distribution of the predicted phenological dates can be built using resampling techniques (e.g., bootstrap) for each of the phenological dates with the best-performing statistical model (Czernecki et al., 2018; Ge et al., 2021). These comparisons will provide new insights into forest structural dynamics and related physical changes, leading to improved interpretation of optical satellite observations of boreal forests.
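A compact sketch of the regression-plus-bootstrap scheme described above, under our own assumptions (synthetic placeholders for the spectral/meteorological predictors and the day-of-year target, not the study's actual variables):

```python
# Random Forest regression of a phenological date (day of year) on
# predictors, with a bootstrap to build a distribution of predicted dates.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 5))               # e.g. NDVI metrics, temperature sums
y = 120 + 10 * X[:, 0] + rng.normal(scale=3, size=300)   # day-of-year target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
boot_preds = []
for b in range(100):                        # bootstrap resampling of training set
    idx = rng.integers(0, len(X_tr), len(X_tr))
    rf = RandomForestRegressor(n_estimators=100, random_state=b)
    rf.fit(X_tr[idx], y_tr[idx])
    boot_preds.append(rf.predict(X_te))
boot_preds = np.asarray(boot_preds)
lo, hi = np.percentile(boot_preds, [2.5, 97.5], axis=0)
print("95% interval width for first test sample:", hi[0] - lo[0])
```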
REFERENCES
Berra, E. F., Gaulton, R. (2021). Remote sensing of temperate and boreal forest phenology: A review of progress, challenges and opportunities in the intercomparison of in-situ and satellite phenological metrics. Forest Ecology and Management, 480, 118663.
Calders, K., Schenkels, T., Bartholomeus, H., Armston, J., Verbesselt, J., Herold, M. (2015). Monitoring spring phenology with high temporal resolution terrestrial LiDAR measurements. Agricultural and Forest Meteorology, 203, 158-168.
Campos, M. B., Litkey, P., Wang, Y., Chen, Y., Hyyti, H., Hyyppä, J. and Puttonen, E. (2021). A Long-Term Terrestrial Laser Scanning Measurement Station to Continuously Monitor Structural and Phenological Dynamics of Boreal Forest Canopy. Front. Plant Sci., 11, 606752.
Czernecki, B., Nowosad, J. & Jabłońska, K. (2018). Machine learning modeling of plant phenology based on coupling satellite and gridded meteorological dataset. Int J Biometeorol 62, 1297–1309. https://doi.org/10.1007/s00484-018-1534-2
Cleland, E. E., Chuine, I., Menzel, A., Mooney, H. A., & Schwartz, M. D. (2007). Shifting plant phenology in response to global change. Trends in ecology & evolution, 22(7), 357-365.
Ge, H.; Ma, F.; Li, Z.; Tan, Z.; Du, C. (2021). Improved Accuracy of Phenological Detection in Rice Breeding by Using Ensemble Models of Machine Learning Based on UAV-RGB Imagery. Remote Sens., 13, 2678. https://doi.org/10.3390/rs13142678
Richardson, A.D., Hufkens, K., Milliman, T., Frolking, S., (2018). Intercomparison of phenological transition dates derived from the PhenoCam Dataset V1.0 and MODIS satellite remote sensing. Scientific Reports,8, 5679
Zeng, L.; Wardlow, B.D.; Xiang, D.; Hu, S.; Li, D. (2020). A review of vegetation phenological metrics extraction using time-series, multispectral satellite data. Remote Sens. Environ., 237, 111511.
Pl@ntNet is an existing smartphone and web-based application that allows identifying plant species on close-up images. Pl@ntNet has found various uses by citizens learning about species, but also by experts in fields such as agro-ecology, education, and land and park management. Available in 36 languages and in 200+ countries, about 200,000 to 400,000 users use Pl@ntNet each day.
Pl@ntNet provides a set of generic functionalities but also services tailored to specific needs. In this presentation we report on the creation of a new project within Pl@ntNet on recognizing cultivated crops in geo-tagged photos. This application is fed by data and photos coming from the European Union’s Land Use and Coverage Area frame Survey (LUCAS). During five triennial LUCAS campaigns from 2006 to 2018, nearly 800,000 ‘cover’ photos were collected. The LUCAS cover photos have not been previously published but, after anonymization, will be released in 2022. Of these, 330,000 provide a European coverage of (close-up) photos of crops. The protocol for these photos specified that “the picture should be taken at a close distance, so that the structure of leaves can be clearly seen, as well as flowers or fruits”. This provides an opportunity to use authoritative data to improve citizen science tools, as such photos should be valuable for computer-vision-based applications such as Pl@ntNet.
A total of 215 crop species are included in the European crops project. In a first step, the current Pl@ntNet deep learning algorithm is used for forward inference on the LUCAS cover photos to identify crops, and those photos with a classification probability >0.8 are ingested into the European crops project. In a second step, the LUCAS legend and the Pl@ntNet species lists have been matched and aligned. In a third step, this allows the LUCAS cover photos to be used as training data to improve the Pl@ntNet deep learning algorithm. The performance of the identification will be illustrated as a function of crop type and phenology. Issues relevant to the image classification task, such as pictures taken of recently emerged crops or taken after the crop was harvested, scaling issues related to close-up photos vs. field photos, and the need for increased capacity to generalize, will also be highlighted.
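Schematically, the first ingestion step reduces to filtering forward-inference results by the 0.8 probability threshold; in this sketch, classify() is a hypothetical stand-in for the Pl@ntNet inference service and the legend entry is only an example:

```python
def classify(photo_path: str) -> tuple:
    """Hypothetical stand-in for Pl@ntNet forward inference on one photo."""
    return ("Triticum aestivum", 0.93)      # dummy (species, probability)

# Example legend match between a LUCAS land-cover code and a Pl@ntNet species
lucas_to_plantnet = {"B11": "Triticum aestivum"}   # B11: common wheat (example)

def ingest(photos: list, threshold: float = 0.8) -> list:
    accepted = []
    for path in photos:
        species, prob = classify(path)
        if prob > threshold:                # only confident predictions ingested
            accepted.append((path, species))
    return accepted

print(ingest(["lucas_cover_0001.jpg"]))
```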
Finally, we will discuss various application contexts, detailing three use cases associated with efficient in-situ data gathering for EO applications, potential citizen science activities related to the Farm to Fork strategy (e.g., educational activities on food and health), as well as methodological developments in the context of evidence reporting mechanisms for the Common Agricultural Policy.
In situ measurements of vegetation structural variables such as plant area index (PAI), leaf area index (LAI), and the fraction of vegetation cover (FCOVER) are needed in agricultural and forest monitoring, as well as for validating satellite products used in a range of downstream applications. Because periodic field campaigns are unable to adequately characterise temporal dynamics, a variety of automated in situ measurement approaches have been developed in recent years.
In this contribution, we investigate automated digital hemispherical photography (DHP) and wireless quantum sensor networks deployed under Component 2 of the Copernicus Ground Based Observations for Validation (GBOV) service. The primary objective is to develop and distribute robust in situ methods and datasets for the purposes of satellite product validation.
A mixture of automated DHP systems and wireless quantum sensor networks were installed at four sites covering deciduous broadleaf forest (Hainich National Park, Germany), Mediterranean vineyard vegetation (Valencia Anchor Station, Spain), wet eucalypt forest (Tumbarumba SuperSite, Australia), and tropical woody savanna (Litchfield SuperSite, Australia). At each site, manual field data collection (including DHP and LI-COR LAI-2000/2200C measurements) was carried out throughout the growing season, enabling us to benchmark the automated systems against established and accepted in situ measurement techniques.
We present findings from each of the field installations, including site-specific deployment considerations, data processing and filtering methods, and benchmarking results. We demonstrate that the automated DHP systems and wireless quantum sensors can provide rich temporal characterisation of vegetation structure, whilst demonstrating good correspondence to traditional measurement approaches (for PAI, initial results show r^2 = 0.78 to 0.91 and RMSE = 0.37 to 0.65). Perspectives on upscaling temporally continuous but spatially limited in situ data, which is a key requirement for validating moderate spatial resolution satellite products, are also discussed.
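For context, the kind of inversion commonly used to derive PAI from gap fraction in DHP and quantum-sensor processing (a textbook Beer-Lambert sketch, not necessarily the GBOV Component 2 algorithm) is:

```python
# Beer-Lambert inversion of canopy gap fraction P(theta) to PAI. The 57.5
# degree "hinge" angle, where the extinction coefficient G is close to 0.5
# regardless of leaf angle distribution, is a common convention.
import numpy as np

def pai_from_gap_fraction(gap_fraction: float,
                          zenith_deg: float = 57.5,
                          G: float = 0.5) -> float:
    """Invert P(theta) = exp(-G * PAI / cos(theta)) for PAI."""
    theta = np.radians(zenith_deg)
    return -np.cos(theta) * np.log(gap_fraction) / G

print(pai_from_gap_fraction(0.25))   # gap fraction 0.25 -> PAI around 1.5
```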
EO data can improve the timely measurement of agricultural productivity in support of efforts to evaluate and target productivity-enhancing interventions, by providing critical information required to stabilize markets, mitigate food supply crises, and mobilize humanitarian assistance. Access to EO data and processing infrastructure, as well as the capacity to develop methods, has improved substantially. However, these are still lacking in the smallholder systems that form a large percentage of agriculture in sub-Saharan Africa, where such data are even more critical due to the high dependency on agriculture for livelihoods. Recent advances in Machine Learning (ML) and cloud computing, together with increased access to satellite data, offer new promise, but the lack of ground-truth labels (particularly crop type) for training and validation remains one of the biggest impediments to advanced applications of ML for mapping smallholder agricultural systems. This not only limits the development of system-specific models but also limits the testing of models developed elsewhere that could fill huge data gaps and produce products supporting early-warning information for decision-making in agriculture and food security.
The project “Helmets Labeling Crops”, funded by the Meridian Institute under the Lacuna Fund, is a partnership between international institutions and institutions based in five African countries, creating an unprecedented, publicly accessible labeled dataset through an efficient, cost-effective, equitable and participatory approach that will ensure quality and have direct transformational impacts.
This presentation will summarize 1) the partnership and its uniqueness, 2) the main approach to data collection, “the Helmets”: street-level surveys done with GoPro cameras mounted on motorbike helmets, and 3) our recently developed Street2Sat framework for obtaining large datasets of geo-referenced crop type labels, with over a million images collected so far in Kenya, Uganda, France, and the United States. Using preliminary data from Kenya, we present promising results from this approach and identify future improvements to the method before operational use in the five countries.
Climate change is shifting natural phenocycles and, combined with ongoing human-induced disturbances, is putting pressure on forest ecosystems through increased frequencies of fires, droughts, degradation events and pests. Current observation technologies like the Eddy covariance technique allow these effects to be monitored in terms of forest functioning. However, capacities to monitor the impacts on, and changes in, forest and vegetation structure, especially vertical structure, remain limited and constrain the physical understanding needed to use dense remote sensing time series for tracking forest dynamics and disturbances.
In our presentation we will first examine current challenges in forest structure monitoring, focusing on how functional monitoring at local scales can support the upscaling of forest structure and change assessments via airborne and satellite sensors. Then, a range of prototype projects will be presented that demonstrate solutions to these challenges using a combination of different novel near-sensing techniques across a range of sites and conditions. This will lead to requirements and specifications for a refined observation strategy and underpin StrucChangeNet, which will be introduced with the aim of filling current gaps in systematic and dynamic structural monitoring.
A key feature of StrucChangeNet is the implementation of recent developments in LiDAR and Internet of Things (IoT) technology. Airborne Laser Scanning (ALS), Terrestrial Laser Scanning (TLS) and Unoccupied Aerial Vehicle Laser Scanning (UAV-LS) have demonstrated the capability to measure forest structural attributes like above-ground biomass, leaf area and leaf area density profiles, along with precise localisation and trait retrieval of individual trees. StrucChangeNet will make use of TLS and UAV-LS, as well as new monitoring LiDAR systems, to capture these variables and their temporal changes down to the individual tree. Additionally, recently developed passive sensors based on IoT technology that measure multi-spectral canopy transmittance will be employed to assess canopy structural and biochemical properties. This setup will produce rich data streams for near-real-time canopy monitoring. The initial setup of ten globally distributed sites will extend the capacities of existing monitoring sites, with a focus on Eddy-covariance-equipped sites such as those of the ICOS, TERN and NEON networks. In particular, the LiDAR data streams will also be relevant for the recently proposed GEOTREES initiative for forest aboveground biomass estimation.
MetOp-SG is the follow-on to the current, first generation series of MetOp satellites, which is now established as a cornerstone of the global network of meteorological satellites. MetOp-SG is required to ensure the continuity of these essential meteorological observations, improve the accuracy / resolution of the measurements, and also to add new measurements / missions.
The overall MetOp-SG Space Segment architecture consists of two series of satellites (Satellite A and Satellite B), each carrying different suites of instruments and operating in a LEO polar orbit identical to that of MetOp. The launches of the first satellite of each series are scheduled for 2024 and 2025, respectively.
The MetOp-SG Programme is being implemented in collaboration with EUMETSAT. ESA develops the prototype MetOp-SG satellites A and B (including associated instruments) and procures, on behalf of EUMETSAT, the recurrent satellites (and associated instruments). EUMETSAT is responsible for the overall mission, funds the recurrent satellites, develops the ground segment, procures the launch and LEOP services and performs the satellites operations as part of its EPS-SG Programme.
The ESA MetOp Second Generation (MetOp-SG) Programme was approved at the ESA Council Meeting at Ministerial level in Naples in November 2012. The EUMETSAT Polar System – Second Generation (EPS-SG) Programme was fully approved on 24 June 2015.
The Phase C/D was kicked-off in November 2015 with Airbus Defence and Space SAS, France as Prime contractor for the Satellite A series, and with Airbus Space and Defence GmbH, Germany as Prime contractor for the Satellite B series. The Satellite A CDR was passed in 2018 and the Satellite B CDR in 2019. The Satellite A QR was passed in July 2021 and the Satellite B QR is planned for May 2022.
The scope of this paper is to provide an up-to-date overview of the MetOp-SG Programme development status, including a description of the Space Segment architecture, the satellite configurations, key features and characteristics, the orbit, and the space-to-ground communications links.
This paper also provides the up-to-date status of the Satellites PFM A and B AIT campaigns that were initiated in 2019 and 2020 respectively.
The status of the MetOp-SG instruments is described in separate papers.
METimage is a cross-purpose, medium resolution, multi-spectral optical imaging radiometer for meteorological applications onboard the MetOp-SG satellites, fulfilling the VII mission of the EPS-SG programme. The primary objective of the VII mission is to provide high quality imagery data for global and regional weather forecast and climate monitoring.
METimage measures the solar backscattered radiation and thermal radiance emitted by the Earth in 20 spectral bands from 443 nm to 13.345 µm.
We provide an overview of the instrument design and the main subsystems, consisting of the calibration assemblies, the scan and de-rotation assemblies, the telescope, and the cryogenic subsystem. The latter includes the assemblies for spectral band and channel separation as well as the three focal plane assemblies.
We discuss the main design drivers and present the expected performance.
The Infrared Atmospheric Sounding Interferometer New Generation (IASI-NG) is a key payload element of the second generation of European meteorological polar-orbit satellites (METOP SG) dedicated to operational meteorology, oceanography, atmospheric chemistry, and climate monitoring.
CNES (Centre National d’Etudes Spatiales) is in charge of the IASI-NG programme, based on an instrument concept proposed and developed by Airbus Defence and Space. It will continue and improve the IASI mission over the next decades (2020-2040) in the fields of operational meteorology, climate monitoring, and the characterization of atmospheric composition related to climate, atmospheric chemistry and the environment. The performance objective is mainly a spectral resolution and a radiometric error halved compared with those of the first-generation IASI.
The measurement technique relies on a wide-field Fourier transform spectrometer (operating in the 3.5 - 15.5 µm spectral range) built around an innovative Mertz-compensated interferometer that manages the so-called self-apodization effect and the associated spectral resolution degradation.
Environmental qualification and performance tests on flight sub-systems have been completed successfully. This includes the performance vacuum tests on the complete Focal Plane and Cooler Assembly, and the performance vacuum tests on the assembly made of the aligned flight Interferometer with its Laser Metrology. The results obtained demonstrate the complete end-to-end processing of the acquired interferograms, taking into account all metrology data.
An exhaustive microvibration campaign has also been performed on the EM Instrument, allowing a fine assessment of the instrument sensitivity and the correction of unforeseen parasitic effects. EMC qualification and first functional verifications with representative software were also performed on the EM Instrument.
The flight model Instrument is now fully integrated and aligned and is currently under mechanical qualification. The final thermal vacuum test will start in the first quarter of 2022, after functional test verification.
We present here a synthesis of the main milestones achieved on the EM Instrument and flight sub-systems, as well as the progress and available results on the complete flight model Instrument.
Sentinel-5 is a hyperspectral imaging instrument that will be embarked on the MetOp Second Generation satellites (MetOp-SG) with the fundamental objective of monitoring atmospheric composition from low Earth orbit. The instrument consists of five imaging spectrometers that sense the solar spectral radiance backscattered by the Earth's atmosphere in eight defined spectral bands in the range from UV (270nm) to SWIR (2385nm). Processing of Sentinel-5 data will allow the distribution of important atmospheric constituents such as ozone, methane, nitrogen dioxide and sulphur dioxide to be obtained on a global daily basis and at a finer spatial resolution than its predecessor, the GOME-2 instrument embarked on the first generation of MetOp satellites. Sentinel-5 is part of the Space Component of the Copernicus programme, a joint initiative of ESA, EUMETSAT and the European Commission. This paper presents an overall description of the Sentinel-5 instrument, its calibration and data processing.
The EUMETSAT Polar System – Second Generation (EPS-SG) will further improve observational inputs to Numerical Weather Prediction models in continuation and enhancement of the EUMETSAT Polar System (EPS) first generation service that has been exploited by EUMETSAT since 2006. Starting in 2024/2025, the EPS-SG System will deploy three successive pairs of Metop-SG A and Metop-SG B satellites equipped with complementary instruments to provide 21 years of service. The satellites will fly together on the same mid-morning polar orbit as the current Metop satellites in orbit and will expand by 20+ years the climate data records initiated with EPS. Metop-SG A is an atmosphere sounding and imaging satellite equipped with a suite of microwave and hyperspectral infrared instruments (MWS and IASI-NG) and two advanced optical imagers (METimage and 3MI). Metop-SG A also carries the Copernicus Sentinel-5 spectrometer for measurements of trace gases. Metop-SG B is a microwave imaging satellite delivering radar observations of ocean surface wind and soil moisture (SCA) and passive microwave observations of precipitation and ice clouds (MWI and ICI). Metop-SG B also carries a receiver supporting the ARGOS localisation and data collection mission. Both satellites carry a Global Navigation Satellite System (GNSS) radio-occultation instrument (RO) for limb sounding of temperature and humidity at high vertical resolution. The EPS-SG system is developed in cooperation with the European Space Agency (ESA), Centre National d’Etudes Spatiales (CNES) and Deutsches Zentrum für Luft- und Raumfahrt (DLR). EUMETSAT procures all launch services, develops the full ground infrastructure and integrates and validates the full system. EUMETSAT then operates and exploits the system and develops additional products using new algorithms. EPS-SG is Europe’s contribution to the Joint Polar System (JPS) shared with the National Oceanic and Atmospheric Administration (NOAA) of the United States. The presentation will provide a short overview of the programme, its achievements, current status and outlook.
EnMAP, the Environmental Mapping and Analysis Program, is the spaceborne German hyperspectral satellite mission. On behalf of the German Space Agency at DLR, the satellite is being developed by OHB System AG and the ground segment is being provided by DLR. The science lead institute is GFZ Potsdam.
At the time of writing, the satellite has successfully finished its environmental test campaign and is being prepared for launch in April 2022 on a Falcon 9 from Florida. In this talk, we give an overview of the current status of the mission and its capabilities, and hope to deliver first results from the satellite's first days in orbit. We will also give an outlook on the planning for the commissioning phase, the operational phase, and collaborations.
The launch of the spaceborne imaging spectroscopy mission EnMAP (Environmental Mapping and Analysis Program; www.enmap.org) is scheduled for April 2022.
The presentation will detail the status and planning of EnMAP operations. The status covers, on the one hand, the system realized to perform operations and, on the other hand, the results of the Launch and Early Orbit Phase (LEOP) (0.5 months) and, in particular, first insights from the Commissioning Phase (CP). The planning covers the complete activities of the CP (5.5 months) and the subsequent routine phase (54 months), with the provision of quantitative imaging spectroscopy measurements substantially improving standard remote sensing products and allowing advantageous user-driven information products to be established.
The objective of EnMAP is to measure, derive, and analyze quantitative diagnostic parameters describing key processes on the Earth’s surface focusing on issues related to soil and geology, agriculture, forestry, urban areas, aquatic systems, ecosystem transitions and associated science.
The spectral range of EnMAP covers 420 nm to 2450 nm based on a prism-based dual-spectrometer with a spectral sampling distance between 4.8 nm and 8.2 nm for the VNIR (Visible and Near Infrared; 450 nm to 1000 nm) and between 7.4 nm and 12.0 nm for the SWIR (Shortwave Infrared; 900 nm to 2450 nm). An on-board doped Spectralon sphere enables a spectral accuracy of better than 0.5 nm in VNIR and 1.0 nm in SWIR. The target signal-to-noise ratio (SNR) is 500:1 at 495 nm and 150:1 at 2200 nm (at reference radiance level representing 30% surface albedo, 30° Sun zenith angle, ground at sea level, and 40 km visibility with rural atmosphere). The signal is fed into two parallel amplifiers with different gains for each of the two detectors to have a large dynamic range. Sun calibration measurements with an on-board full-aperture diffuser enable a radiometric accuracy of better than 5%. Additional measurements, e.g. for non-linearity and closed shutter measurements for subtraction of dark signal, complement the calibration. Each detector array has 1000 valid pixels in the spatial direction and, with a geometric resolution of 30 m x 30 m, a swath width (across-track) of 30 km is realized. A swath length (along-track) of 5000 km, split into several observations, is reached per day. The repeat cycle of 398 revolutions in 27 days combined with an across-track tilt capability of 30° enables a target revisit time of less than 4 days. Each region is viewable under an out-of-nadir angle of at most 5°. The local time of descending node is 11:00.
The satellite, which is realized by OHB System AG, will be operated by the ground segment. DLR’s Earth Observation Center (EOC) together with the German Space Operations Center (GSOC) are responsible for operations. Mission management is covered by DLR’s Space Agency. Control and command of the satellite based on flight operations procedures using real-time and dumped data is performed via S-band ground stations for telemetry and telecommand data in Weilheim (Neustrelitz as backup) and in addition Inuvik, O’Higgins, and Svalbard for non-nominal operations. Proposals, observations, and associated research are presented on an interactive map supporting the establishment of a world-wide user network. In case of tasking conflicts, issued observations are prioritized not only according to static information like the underlying priority of the request, but also based on historical and predicted cloud coverage information, taking satellite constraints such as power and storage into account. All information is incorporated into the mission timeline immediately on reception, and feedback to users on planned observations is provided whenever the planning state of an observation changes. Required orbit maneuvers, for orbit maintenance and collision avoidance, and contacts with X-band ground stations for instrument data reception in Neustrelitz (Inuvik as backup) are also considered in the mission timeline and as such transmitted to the satellite during S-band passes. Together with orbit and attitude data of the satellite, image products from instrument data received during X-band passes are fully automatically generated at three processing levels, and long-term archived in tiles of 30 km x 30 km. A catalogue allows users to search and browse all products based on the standardized protocols CSW (catalogue service for the web) and WMS (web mapping service). Because of the various processing options required, each product is specifically generated for each order and delivered using SFTP (secure file transfer protocol) to the scientists.
Level 1B products are corrected to Top-of-Atmosphere (TOA) radiances, including defective pixel flagging, non-linearity correction, dark signal (and digital offset) correction, gain matching, straylight correction, radiometric/spectral referencing, radiometric calibration, and defective pixel interpolation. Level 1C products are orthorectified to a user-selected map projection and resampling model. The physical sensor model is applied by the method of direct georeferencing, correcting the sensor interior orientation, satellite motion, light aberration and refraction, and terrain-related distortions from the raw imagery. The products have a geolocation accuracy of 30 m with respect to a reference image based on selected Sentinel-2 Level 1C products having an absolute geolocation accuracy of 12.5 m. Level 2A products are compensated for atmospheric effects with separate algorithms for land and water applications. For the land case the products are expressed as Bottom-of-Atmosphere (BOA) reflectances. For the water case the units are normalized water-leaving remote sensing reflectance or subsurface irradiance reflectance, based on user selection. A pixel classification (e.g. land-water-background, cloud) is performed, and aerosol optical thickness, columnar water vapor, and adjacency effects are corrected accordingly. At all processing levels, per-pixel quality information and rich metadata are appended.
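To make the core Level 1B radiometric steps concrete, the following is a minimal sketch, not the operational processor: it assumes per-pixel dark-signal and gain tables, a polynomial non-linearity model, and a defective pixel mask (all names and shapes are hypothetical stand-ins for the real characterization tables).

```python
import numpy as np

def calibrate_l1b(dn, dark, gain, nonlin_coeffs, bad_pixel_mask):
    """Convert raw DN to TOA radiance: dark-signal subtraction,
    polynomial non-linearity correction, gain application, and
    interpolation of flagged defective pixels.
    dn, dark, gain, bad_pixel_mask: arrays of shape (bands, pixels)."""
    signal = dn.astype(np.float64) - dark        # dark signal (and offset) correction
    signal = np.polyval(nonlin_coeffs, signal)   # non-linearity correction
    radiance = signal * gain                     # radiometric calibration
    radiance[bad_pixel_mask] = np.nan            # defective pixel flagging
    for band in range(radiance.shape[0]):        # defective pixel interpolation
        bad = np.isnan(radiance[band])
        if bad.any() and not bad.all():
            radiance[band, bad] = np.interp(np.flatnonzero(bad),
                                            np.flatnonzero(~bad),
                                            radiance[band, ~bad])
    return radiance
```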
Offline quality control, e.g. based on pseudo-invariant calibration sites (PICS), maintenance and fine-tuning of the image processing chain, and calibrations of the instrument during operations complete the ground segment. The independent validation of products, e.g. based on already established calibration/validation procedures, sites, networks, and products of other missions, is performed by the science segment led by the GFZ German Research Centre for Geosciences. All elements of the mission are characterized and calibrated, technically verified, and operationally validated before launch.
EnMAP will be launched by a SpaceX Falcon 9 from Florida, USA. The subsequent launch and early orbit phase (LEOP) covers the first contact with the satellite after separation, setting up telemetry and telecommand communications, continuous monitoring of the health status, checkout and configuration of all platform functions (e.g. achievement of the nominal power and thermal configuration), activation and calibration of sensors and actuators (e.g. for attitude and orbit determination and control), and acquisition of the required orbital parameters. The subsequent commissioning phase (CP) covers the activation of the instrument data storage and all payload functions, including the first image acquisition, downlink, and processing. The focus is first on radiometric and second on spectral in-flight calibration using all on-board equipment and taking pre-flight characterization and calibration into account, and in parallel on geometric characterization using Earth observations. The results lead first to the optimization of the radiometric and geometric processors and then of the atmospheric processors. These activities are iterated and complemented by continuous instrument monitoring and quality control. Finally, based on end-to-end experience, the user interfaces from observation planning to product delivery and the complete processing chains are fine-tuned. The objective of the CP is to put the space and ground segments into nominal routine operations, with detailed in-orbit performance analyses of on-board and on-ground functionalities resulting in product approval for users, which is expected in October 2022. The subsequent routine phase keeps the mission in nominal operations based on ground procedures and by appropriately handling non-nominal operations, if required. All elements are supervised, the satellite is kept in the required orbit, data are acquired and dumped according to the requests and following the planned mission timeline, and products are processed and delivered to users. The offered operational services are complemented by a service team offering expert advice on the exploitation of EnMAP.
EnMAP operations are planned to be continued until April 2027.
We present the measurement concepts and results of the pre-flight characterization and calibration measurements of the EnMAP HyperSpectral Imager (HSI). The final measurement campaign of the instrument was performed in 2020 prior to the integration of the instrument onto the platform. Based on the specific design of the HSI, the measurement concept was designed to be performed in air, with subsequent corrections applied to estimate the in-vacuum performance. The measurement concepts and setups are demonstrated to be of excellent quality, with the obtained accuracies exceeding the required values significantly. Thus, the choice of ambient conditions for calibration and the design of the optical setups and equipment are confirmed. We show that the as-built HSI exceeds expectations in almost all important parameters, such as SNR, NEdL, MTF, and distortions, indicating that we can expect very good data quality in flight. Calibration coefficients for radiometry, geometry, and spectral registration were obtained at high quality, providing a very good reference point for the initial in-orbit calibrations during the commissioning phase. The respective on-board devices and concepts for spectral (spectral signature-based multipoint registration) and radiometric calibration (full-aperture diffuser) have been validated and characterized and will make it possible to transfer and maintain the calibration of the HSI in the final mission environment. With a planned launch date for EnMAP in April 2022, we hope to be able to present first light from the mission to confirm the on-ground results.
The Environmental Mapping and Analysis Program (EnMAP) is a spaceborne German hyperspectral satellite mission that aims at monitoring and characterizing the Earth's environment on a global scale. EnMAP core themes are environmental changes, ecosystem responses to human activities, and management of natural resources. In 2021 major milestones were achieved in the sensor and satellite preparation, which by end-2021 was in the final acceptance review and pre-launch phase, with a launch window opening in April 2022 (Fischer et al., ESA LPS 2022). Accordingly, the mission science support shifted from science development to pre-launch and launch support.
The EnMAP science preparation program has been run for more than a decade to support industrial and mission development as well as the scientific exploitation of the data by the user community. The program is led by the German Research Center for Geosciences (GFZ) Potsdam, supported by several partners, and is funded within the German Earth observation program by the DLR Space Agency with resources from the German Federal Ministry for Economic Affairs and Energy (BMWi). In 2020 a new 3+1-year project phase started, during which specific activities are performed at the GFZ Potsdam together with the four project partners Humboldt-University (HU) Berlin, Alfred-Wegener Institute (AWI) Bremerhaven, Ludwig Maximilian University (LMU) Munich, and University of Greifswald. These activities focus on the preparation for the scientific exploitation of the data by the user community as well as mission support during the commissioning phase and the start of the nominal phase, supported by the EnMAP Science Advisory Group.
In this presentation, we aim at providing an update on the current science preparation activities performed at GFZ. This includes an update on the data product validation activities, focusing on an independent validation of the EnMAP radiance and reflectance products. For smooth and efficient validation, especially during the commissioning phase, a semi-automatic processing chain (EnVAL) is being developed, which streamlines validation site and in-situ data management as well as the validation tasks and report generation. Also, an update on new resources in the online learning initiative HYPERedu will be presented. In particular, the first Massive Open Online Course (MOOC) on the basics of imaging spectroscopy, titled 'Beyond the Visible – Introduction to Hyperspectral Remote Sensing', was successfully opened in November 2021. An update will further be provided on the status of algorithms included in the EnMAP-Box related to data pre-processing and the derivation of geological and soil mapping products. This includes the EnMAP processing tool (EnPT), which is developed as an alternative to the processing chain of the EnMAP ground segment and provides free and open-source features to process EnMAP Level-1B data to Level-2A bottom-of-atmosphere (BOA) reflectance, as well as the EnMAP Geological Mapper (EnGeoMap) and Soil Mapper (EnSoMap) for users in bare-Earth and geoscience applications. Finally, a mission-internal background mission plan is being developed to fully exploit the resources of the satellite in terms of functionalities and/or capacities when resources remain available after all user requests have been processed. It can be used to generate time-series databases of interest to the user community and to anticipate future user needs, or to prototype and validate new mission strategies, such as large mosaicking demonstrations and/or synergies with other hyperspectral missions.
The quantification of agricultural traits is essential to evaluate seasonal dynamics of crops, in particular to assess the efficiency of management measures or the effects of changing climatic conditions. Spaceborne imaging spectroscopy data from recently launched or upcoming missions allow the spatiotemporally explicit monitoring of crop traits, such as leaf area index (LAI), leaf chlorophyll content (Cab), leaf dry matter content (Cm), leaf carotenoid content (Cxc), or leaf water content (Cw), as well as upscaled canopy-level variables and the average leaf inclination angle (ALIA). The unprecedented high-dimensional data streams call for efficient, fast, and easy-to-use evaluation tools to obtain key agricultural information over large and heterogeneous cultivated areas. In this respect, the Agricultural Applications (Agri-Apps) have been developed within the freely available EnMAP-Box 3.9, which is provided within the framework of the German Environmental Mapping and Analysis Program (EnMAP) mission. The Agri-Apps specifically provide three main retrieval tools, which employ different estimation strategies: (1) empirical algorithms based on parametric regressions, such as the Analyze Spectral Integral (ASI) tool, (2) physically based models using radiative transfer models (RTM), such as the Plant Water Retrieval (PWR) tool, and (3) hybrid tools, which combine machine learning (ML) regression algorithms with RTMs, such as the hybrid ANN tool. The ASI tool (1) computes integral indices from continuum-removed spectra and combines these into three-band rasters of Cxc, Cab, and Cw. These three leaf biochemical constituents are then quantified by applying specific calibration functions and are displayed as RGB images. The PWR tool (2) uses an efficient algorithm to quantitatively extract canopy water content information directly from single hyperspectral signatures or from spectroscopic images by applying the Beer-Lambert law. The hybrid ANN tool (3) uses an artificial neural network (ANN) algorithm, which is trained on a simulated database generated by the PROSAIL model.
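To illustrate the Beer-Lambert idea behind a water retrieval of the PWR type, the following minimal sketch fits a canopy water path from log-reflectance in a water absorption window, assuming a linear "dry" background; the specific absorption coefficients k_w and the spectral window are illustrative inputs, not the tool's actual calibration.

```python
import numpy as np

def retrieve_canopy_water(wavelengths, reflectance, k_w):
    """Fit ln(R) = a + b*lambda - k_w(lambda)*CWC over a water
    absorption window and return the canopy water content CWC."""
    log_r = np.log(reflectance)
    # design matrix: linear 'dry' background plus water absorption term
    A = np.column_stack([np.ones_like(wavelengths), wavelengths, -k_w])
    coeffs, *_ = np.linalg.lstsq(A, log_r, rcond=None)
    return coeffs[2]  # CWC in the units implied by k_w (e.g. cm)
```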
The objective of the present study was to test the three tools regarding their suitability, retrieval performance, and practical applicability for the upcoming EnMAP mission. To obtain reliable product evaluations, in situ data of crop traits (winter wheat, winter barley, and corn) were collected at the Munich-North-Isar (MNI) and Irlbach test sites in Bavaria, Germany, during the growing seasons of 2017, 2018, 2020, and 2021. To demonstrate the dynamics of the crop growth cycle, a series of crop trait simulations was generated with the ASI, PWR, and hybrid ANN tools over both test sites. None of the tools was specifically calibrated for the sites, so that the transferability and general applicability of the retrieval algorithms could also be evaluated. The accuracy of the algorithms was evaluated against in situ reference data, leading to best root mean square errors (RMSE) of 8 µg/cm² for Cab and RMSE = 1.0 m²/m² for LAI retrieval (hybrid ANN tool), RMSE = 0.03 cm for Cw using the PWR tool, and RMSE = 0.01 cm for canopy Cw using the ASI tool. Though the latter failed for Cm and Cab retrievals, most of the results suggest a good to very high retrieval performance. In summary, the hybrid ANN tool outperformed the two others, which may be attributed to its combination of physics-awareness with the capabilities of learning algorithms. Second, the mapping capabilities of the three tools are demonstrated on a time series from 2021 consisting of five images from the spaceborne DLR Earth Sensing Imaging Spectrometer (DESIS), three acquisitions by the PRecursore IperSpettrale della Missione Applicativa (PRISMA) of the Italian Space Agency (ASI), and one airborne acquisition by the Airborne Visible-Infrared Imaging Spectrometer – Next Generation (AVIRIS-NG). By combining data from all three instruments, it was possible to compile one of the first adequately dense time series of (predominantly) spaceborne hyperspectral scenes over agricultural land. The mapping results demonstrated the high fidelity of the tools. The hybrid ANN tool is specifically demonstrated on the AVIRIS-NG scene acquired in the context of the ESA CHIME campaigns 2021 over the Irlbach site. Figure 1 shows the resulting maps over the agricultural area on 30 May 2021. In general, the maps show plausible estimates for the observed phase of the season, with crop growth patterns well reflected by the estimated traits. The intra-field distributions are relatively narrow and spatially consistent, which can be seen as an indirect measure of the accuracy of the retrieval tool. Overall, our results show that the Agri-Apps of the EnMAP-Box, and in particular the hybrid ANN tool, will be suited to efficiently process data from the EnMAP satellite mission in a quick, automated, user-friendly, transferable, and generally applicable way. The Agri-Apps of the EnMAP-Box are ready to provide highly relevant agricultural products from spaceborne hyperspectral data as soon as EnMAP data become available from 2022 onwards.
Figure 1: Estimates of LAI, Cm, Cab and ALIA from AVIRIS-NG airborne data using the hybrid ANN tool of the EnMAP-Box Agri-Apps, Irlbach test site, Germany.
The German Environmental Mapping and Analysis Program (EnMAP) is an imaging spectroscopy satellite mission aiming at monitoring and characterizing the Earth's environment on a global scale. As part of the EnMAP mission preparation, the EnMAP-Box 3 has been developed as a free and open-source (FOSS) plug-in for QGIS that is designed to process imaging spectroscopy data in a GIS environment. The main development goals are (i) advanced processing of hyperspectral remote sensing data by offering state-of-the-art applications and (ii) enhanced visualization and exploration of multi-band remote sensing data and spectral libraries in a GIS environment. Therefore, the algorithms provided in the EnMAP-Box 3 will also be of high value for other multi- and hyperspectral EO missions. The Python-based plug-in efficiently bridges and combines the advantages of QGIS (e.g. for visualization and vector data processing), packages like GDAL (for data IO or working with virtual raster files), and abundant libraries for Python (e.g. scikit-learn for EO data classification or PyQtGraph for fast and interactive chart drawing). It consists of (i) a graphical user interface (GUI) for hyperspectral data visualization and spectral library management, (ii) a set of advanced general and application-oriented algorithms, and (iii) a high-level application programming interface (EnMAP API). The EnMAP-Box can be started from QGIS or stand-alone, and is registered in the QGIS plug-in repository.
The EnMAP-Box is a QGIS plugin with a separate GUI. Typical tasks like raster or vector file management and layer styling follow the same principles and look-and-feel known from QGIS. They are extended by more specific requirements for imaging spectroscopy data, such as spectral library integration. In contrast to QGIS and other GIS software that have a single map view, the EnMAP-Box follows the concept of having multiple, linkable views on the data sources next to each other at the same time. Currently, we offer map views for visualizing raster and vector data as well as spectral views for visualizing image pixel profiles and existing spectral library entries, for building new spectral libraries, or for performing spectral processing with spectra. Views can interact with each other in various ways; for example, (i) the spatial position and the scale of map views can be synchronized, (ii) the selected image pixel profile of a map view can be visualized in a spectral view together with other image and library spectra, or (iii) the locations of previously collected image profiles can be visualized as a point layer inside a map view. Besides such data management and visualization functionality, the EnMAP-Box provides the typical, yet often more elaborate, interactive tools for data exploration and preparation. Here, easy-to-use workflows for machine learning classification and regression and a raster image calculator, which offers the full flexibility of NumPy operations, shall be mentioned. Moreover, domain-specific interactive applications (for agriculture, geology, soil or forest science, and hydrology) have been contributed by several project partners as part of the mission preparation activities. These include, e.g., radiative transfer models for forward/backward simulation of leaf and vegetation canopy reflectance, and suites for analyzing soil and geology spectra or for mapping water constituents.
EnMAP-Box algorithms are developed based on the QGIS processing framework and can be used from the GUI, stand-alone Python scripts, and the command line. The EnMAP-Box API enables convenient raster data IO for memory-efficient block-wise image processing. The available algorithms can be used inside the QGIS Model Designer to build complex workflows covering (i) machine learning (i.e. image classification, regression, and unmixing) and accuracy assessment, (ii) various spectral resampling options, (iii) spatial and spectral filtering, and more.
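As an illustration of block-wise raster processing in this spirit, here is a generic GDAL/NumPy sketch; it uses plain GDAL rather than the actual EnMAP-Box API, and the file name and band indices are placeholders.

```python
import numpy as np
from osgeo import gdal

# File name and band indices are placeholders.
src = gdal.Open("enmap_l2a.tif")
drv = gdal.GetDriverByName("GTiff")
dst = drv.Create("ndvi.tif", src.RasterXSize, src.RasterYSize, 1, gdal.GDT_Float32)
dst.SetGeoTransform(src.GetGeoTransform())
dst.SetProjection(src.GetProjection())

RED, NIR, BLOCK = 48, 72, 512                  # hypothetical band indices, block height
for y in range(0, src.RasterYSize, BLOCK):     # process the image strip by strip
    rows = min(BLOCK, src.RasterYSize - y)
    red = src.GetRasterBand(RED).ReadAsArray(0, y, src.RasterXSize, rows).astype(np.float32)
    nir = src.GetRasterBand(NIR).ReadAsArray(0, y, src.RasterXSize, rows).astype(np.float32)
    ndvi = np.where(nir + red > 0, (nir - red) / (nir + red), 0.0)
    dst.GetRasterBand(1).WriteArray(ndvi, 0, y)
dst.FlushCache()
```

Processing strip by strip keeps memory usage bounded by the block size rather than the full scene, which is the point of the block-wise IO design.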
In summary, the EnMAP-Box 3 is a powerful FOSS plugin for QGIS with a long list of available image processing and analysis tools and applications. Its full strength will be needed when more spaceborne imaging spectroscopy data become available over the coming years. The toolbox is constantly improved and extended and is a key element of the HYPERedu learning platform for imaging spectroscopy.
We present the concept of the EnMAP-Box 3 together with two typical application examples that highlight its user-friendly yet powerful algorithms.
1. Monitoring intertidal flats and the coast
With the establishment of the National Park Schleswig-Holstein Wadden Sea, a UNESCO World Heritage Site since 2011, the monitoring of important environmental parameters became necessary for verifying its status and planning further development. Finally, a coordinated monitoring program could be agreed upon with the Wadden Sea riparian states, the Netherlands, Germany, and Denmark. The roughly 80 environmental parameters include sediment and seagrass abundance, both parameters that have to be collected for numerous other guidelines and that have repercussions on the other biotopes. Sediment type, currents, and exposure times are important factors for the biology and productivity of intertidal flat areas.
Seagrasses are the only flowering plants in the European Wadden Sea. They occur in the intertidal zones and constitute an important part of the food web due to their high productivity. Besides serving as a food source for migrating geese, seagrasses also provide a habitat for a variety of other species. In the North Frisian Wadden Sea, where the largest seagrass stocks can be found, seagrass meadows have been regularly monitored since 1994 using aerial surveys. After a significant decline in the 1930s, monitoring results have shown a steady increase in stock size until 2020. In almost all other areas of the world, however, seagrass is declining.
One main goal of the current work is to provide comparable measurements from remote sensing and from the ground. Different techniques are used, each showing advantages and disadvantages, such as spatial inaccuracy (airborne), spatial coverage (ground-based), individual interpretation (airborne, ground-based), or spectral similarity of surfaces and atmospheric influences (satellite data). Thus, the comparison of the different data sets is difficult.
2. Satellite Acquisitions
Methods for detecting different habitats in intertidal flat areas based on satellite data have been developed for many years. They are currently applied in a test phase for the operational monitoring of sediments and seagrass meadows. The distribution of sediment types - sand, mixed, and muddy sediments - provides valuable information about the potential distribution of benthic organisms. Changes in sediment and morphologic structures provide information about the adjustment of the system to external influences, such as climate change. For the detection of sediments, linear spectral unmixing is used, and for the identification of vegetation different vegetation indices are applied. Spectral and textural features are used within decision trees to classify and assess the different habitats. Water coverage is permanently changing on intertidal flats, and therefore the influence of water coverage as well as different degrees of wetness has to be taken into account when interpreting the spectral signal of the target surfaces. Data availability is limited due to the pre-conditions of low tide and low cloud coverage, but has improved since the launch of Sentinel-2 A and B. Satellite data have been used in parallel to long-term monitoring programs for several years in order to assess the differences between the acquisition techniques, so that time series can be continued - and adjusted - for future programmes.
3. Verification methods for Seagrass
a. Airborne: Mapping by observation
For more than 30 years, an airborne low-altitude survey of the Wadden Sea with observers has been carried out for rapid assessment. The airborne mapping procedure provides maps of seagrass and green algae three times a year. The technique leads to a strong simplification of the observed occurrences and to size and position inaccuracies, especially on outer tidal flats with few structural features.
Today's satellite image analyses are qualitatively much more accurate than the airborne mapping, but the new remote sensing method cannot distinguish spectrally between green algae and seagrass stands or provide indications of their species diversity. So far, the three annual aerial surveys are still used to determine the maximum spread - the characteristic value for long-term population development - and the intra-annual development over time.
b. Existing ground truth: references by estimation
In addition to airborne mapping, surveys of seagrass cover on the ground are conducted. Due to the size of the area, only 1/6 of the full SH Wadden Sea area can be mapped within one year. Besides delineating the seagrass meadows along the 5% coverage limit, transects crossing the meadows record the varying stand density, during which vital parameters, among others, are also observed. The data are used for the verification of both aerial observations and satellite image analyses.
In addition, specific transects are captured that are used to optimize image classification or to transform image classes into real values to be reported for monitoring. For these transects, photos are taken at different viewing angles and in different directions for later visual inspection.
c. Ground truth evolution
The assessments performed in the field are often subjective and, given the difficult conditions, can lead to inaccurate results.
The data collection during a transect survey covers up to 9 km through silty mudflats under often variable, sometimes harsh weather and light conditions. The coverage estimates are assessed by the mappers, yet the low-tide periods are too short to make repeated observations for verification. Therefore, a measurement independent of the observers is desired.
In order to support ground truth field mapping, an automated method was developed using the RGB control images taken during a transect survey. For this, a series of images is shot in a top-down view, covering the area of the transect point. These are then split by a segmentation algorithm (SLIC). A neural network was trained to classify the resulting sub-images into either three seagrass or several background categories. The seagrass segments are subjected to three independent methods to determine the coverage: a 'simple' method assigning each of the different seagrass classes a flat coverage percentage; the Green Leaf Index (GLI), a vegetation index utilizing the characteristics of light reflected by vegetation without requiring access to the infrared; and 'Otsu', an algorithm to differentiate between light and dark areas. The interpretation of the photos has to cope with the same problems as the remotely acquired data: each of these methods has different advantages and disadvantages in the presence of disruptive factors like water coverage, reflections, or unfavourable light conditions. Finally, the resulting relative coverage of the transect point is given by the mean of the results of all contributing control images.
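For illustration, the GLI and Otsu estimators can be sketched as follows; this is a minimal version assuming RGB sub-images from the segmentation step, and the GLI threshold as well as the dark-equals-vegetation convention are illustrative, not the calibrated settings of the described method.

```python
import numpy as np
from skimage.filters import threshold_otsu

def green_leaf_index(rgb):
    """GLI = (2G - R - B) / (2G + R + B), computed per pixel."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    denom = 2 * g + r + b
    return np.where(denom > 0, (2 * g - r - b) / denom, 0.0)

def coverage_from_gli(rgb, threshold=0.02):      # threshold is illustrative
    return float(np.mean(green_leaf_index(rgb) > threshold))

def coverage_from_otsu(rgb):
    gray = rgb.astype(float).mean(axis=-1)
    t = threshold_otsu(gray)                     # split light vs. dark areas
    return float(np.mean(gray < t))              # dark pixels taken as vegetation
```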
d. Measurements by AI (Ground truth and Drones)
In the future, moving to drones for image acquisition might increase efficiency, since it would be possible to cover larger areas in less time. It remains to be tested whether the required image quality can be reliably produced in the often harsh weather conditions of the Wadden Sea, and whether the proposed methods produce credible results even in the absence of parameters that can only be measured by an observer on the ground.
4. Future Reporting including EO
A crucial question in the use of remote sensing for legally relevant information is the transformation of image classifications into real values - combined with information on the size of the uncertainty. In order to assess the transferability of assessments from image to image, the absolute reference provided by photo interpretation as ground reference is an important step forward.
The combination of the various monitoring methods into a new permanent monitoring system and the development of a largely automated data flow from the satellite image to the reporting product are our mid-term goals for a reliable monitoring of the unique Wadden Sea and its habitats.
Key information derived from multi-source optical imagery for supporting coastal territories management
Konrad Rolland, Anaïs Teissonnier, Anne Colliez, Emile Naiken
Collecte Localisation Satellite, PIGMA, région Nouvelle Aquitaine, DINAMIS
Coastal areas have been developing continuously for the past 40 years in terms of urban planning, demographics, and economics, and according to the latest prospective studies their attractiveness will continue. A fragile, highly attractive area that drives the economy, the coastline is a geographical space where specific planning and management policies are deployed.
The imperatives of territorial recomposition and reconversion, whether driven by a logic of adaptation to the risks of climate change or by a response to social and environmental changes, force us to question new models for thinking about and inventing a more resilient coastline. In a context of strong residential, recreational, and tourist demands, of global changes in society, and of increasing climatic and environmental challenges, succeeding in this transition requires detailed knowledge of past developments in order to anticipate future changes and risks.
Within this framework, CLS has established a multi-date (1980s to 2020), accurate, reliable, and homogeneous large-scale land use/land cover database for the Nouvelle Aquitaine region (84,000 km² and a coastline of over 1,000 km). This complex and innovative project covers the entire spectrum of land observation, from the acquisition of image data to the analysis of the maps produced. The objective is to understand the signals of change in the territory and to anticipate future changes.
To build up this database, we researched, processed, and combined different image sources (panchromatic and infrared aerial images as well as VHR satellite images) together with DTMs. We defined a production methodology in line with the needs of the territory and compatible with the timeframe. This solution used a mixed approach combining automatic data and image processing with photo-interpretation. It also made it possible to produce, for example, the latest 2020 land use/land cover databases based on low-cost satellite sources within a few months, with reliability levels above 90%.
The quality and temporality (5 dates) of the land use/land cover data allow us to establish scenarios of the state of the territory in 10, 20, or 50 years according to climate change models.
For this conference, we will present the methodology implemented as well as the main results of the project, to demonstrate how multi-source land observation data can be used in combination with other business data to understand and manage the development of a coastal territory.
Coastal marine environments, being invaluable ecosystems and host to many species, are under increasing pressure from anthropogenic impacts such as, among others, growing economic use, coastline changes, and recreational activities. Continuous monitoring of these environments is of key importance for the identification of natural and man-made hazards, for an understanding of oceanic and atmospheric coastal processes, and eventually for a sustainable use of these vulnerable areas.
The joint Sino-European project "Remote Sensing of Changing Coastal Marine Environments" (ReSCCoME), as part of ESA's DRAGON 5 Programme, addresses research and development activities that focus on the way in which the rapidly increasing amount of high-resolution EO data can be used for the surveillance of marine coastal environments, and on how EO sensors can detect and quantify processes and phenomena that are crucial for the local fauna and flora, for coastal residents, and for local authorities. We will present results from three ongoing ReSCCoME research activities in which synthetic aperture radar (SAR) data are being used for different monitoring purposes in vulnerable coastal marine environments.
Within the first research activity, a classification scheme for sediments and habitats on intertidal flats in the German Wadden Sea was developed, whose feature set consists of the Freeman-Durden (F) and Cloude-Pottier (C) decomposition components, the Double-Bounce Eigenvalue Relative Difference (D) parameter, and two parameters derived from elements of the Kennaugh matrix (K). These feature sets are used as input for a Random Forest (RF) classification; hence, the classification scheme is abbreviated FCDK-RF. Fully polarimetric SAR data acquired at L-, C-, and X-band were used, and a comparison of the classification results with reference data obtained from optical data and field campaigns revealed that the FCDK feature set has good potential for the identification of sandy and mixed sediments on intertidal flats, even when they merge into, or mix with, each other, and for the detection of bivalve beds. A simplified FCDK-RF scheme for the use of dual-co-pol SAR data showed lower accuracies in the discrimination of mixed and sandy surfaces, but was well suited for the detection of bivalve beds.
The second research activity focuses on the impact of large offshore wind farms on the local and regional wind climate, with special emphasis on the wind farm wake effect. The wake region on the downstream side of a wind farm is typically characterized by reduced wind speeds and increased turbulence intensities persisting for tens of km. Within the coastal zone, wake effects can be mixed with other effects, e.g., horizontal wind speed gradients caused by the roughness transition between land and sea. Based on a large archive of wind maps retrieved from Sentinel-1 A/B SAR-C and Envisat ASAR scenes of the North Sea and the China Seas, sector-wise mean wind speeds were analyzed upstream vs. downstream of large offshore wind farms. SAR acquisitions before and after wind farm construction were analyzed separately in order to quantify the velocity deficits caused by the wind farms. A correction for horizontal wind speed gradients was applied to take the 'background' wind speed variability into account. Further, a correlation was found between the velocity deficits and the wind farm capacity per km², with larger wind farms leading to more pronounced velocity deficits.
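Conceptually, the wake quantification reduces to comparing sector-wise mean wind speeds; a schematic sketch is shown below, with the study's full gradient correction reduced to a single scalar background term (all names are placeholders).

```python
import numpy as np

def velocity_deficit(u_upstream, u_downstream, background_gradient=0.0):
    """Fractional wind speed deficit in the wake sector, after removing
    an estimated 'background' along-sector wind speed change."""
    u_up = np.nanmean(u_upstream)
    u_down = np.nanmean(u_downstream) - background_gradient
    return (u_up - u_down) / u_up
```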
Finally, the third research activity aims at different statistical analyses based on visual inspections of SAR data with respect to imprints of marine oil pollution. More than 2000 Sentinel-1 SAR-C and Envisat ASAR scenes of the Western Java Sea were analyzed, and a 'normalized oil pollution' density was defined, which takes into account the SAR image coverage and the local wind speed, both of which may affect the total number of oil spills observed at a certain location. This density was highest along the main shipping routes and at major oil production sites. Approximately half of the spills were smaller than 1 km², though spills of more than six times that size were also found. Since visual observations are always subjective, results from independent operators examining the same set of SAR images were compared and showed good qualitative agreement. The observed quantitative differences became smaller when only SAR images acquired at higher wind speeds were considered, indicating that confusion with biogenic slicks and low-wind areas is the main source of error in oil pollution detection.
Intertidal zones globally are currently decreasing due to factors including coastal development and sea-level rise that have put increasing pressure on these fragile environments (Murray et al., 2019). Such zones have implications for natural processes including biodiversity, blue carbon, and coastal flood protection, as well as for human activities including blue economies, port authorities, and ecosystem services. Intertidal areas are often under-studied due to the inherent difficulties and danger of directly accessing them and the expense of existing mapping techniques. This talk will show how time-averaged intertidal bathymetry can be estimated by the Temporal Waterline (TWL) method and used to routinely monitor these zones.
Research at the UK National Oceanography Centre (NOC) (Bell et al., 2016; Bird et al., 2017) has led to the development of a novel method for estimating bathymetry across the intertidal zone, known as the Temporal Waterline (TWL) method. TWL processing, like spatial waterline methods, requires a time series of images spanning the tidal range and an estimate of the water level at the time of each image. As well as being an active research topic, methods based on the use of fixed-position X-band radar have been developed into a commercial service delivered by Marlan Maritime Technologies Ltd. Recent work at NOC has further developed the TWL methodology to exploit SAR (Synthetic Aperture Radar) satellite data, unlocking the potential for new (potentially global) intertidal zone monitoring solutions. Initially the UK tide-gauge archive was used to estimate the water levels, but this only allows processing of areas within 10 to 20 km of a tide gauge. The most recent work has been to extend the method to use outputs from the FES2014 global tidal model, so that the TWL method can be applied globally.
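A conceptual sketch of the TWL idea for a single pixel follows: given a binary wet/dry time series and the modelled water level at each acquisition, the pixel elevation can be taken as the level that best separates wet from dry observations. The published method is considerably more elaborate; this sketch only captures the underlying principle.

```python
import numpy as np

def twl_elevation(wet, water_level):
    """Estimate one pixel's elevation from a binary wet/dry time series
    and the water level at each acquisition: the elevation is the level
    that best separates wet from dry observations."""
    best_z, best_score = np.nan, -1.0
    for z in np.unique(water_level):
        # a pixel at elevation z should be wet exactly when water_level > z
        score = np.mean((water_level > z) == wet)
        if score > best_score:
            best_z, best_score = z, score
    return best_z
```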
The talk will present the methodology and show several UK case studies at both local and regional scales with examples of validation against LiDAR surveys. The potential for the routine monitoring of regional areas will be shown. The limitations and areas for improvement will also be discussed.
References:
Bell, Paul S.; Bird, Cai O.; Plater, Andrew J. 2016. A temporal waterline approach to mapping intertidal areas using X-band marine radar. Coastal Engineering, 107, 84-101. https://doi.org/10.1016/j.coastaleng.2015.09.009
Bird, Cai O.; Bell, Paul S.; Plater, Andrew J. 2017. Application of marine radar to monitoring seasonal and event-based changes in intertidal morphology. Geomorphology, 285, 1-15. https://doi.org/10.1016/j.geomorph.2017.02.002
Murray, Nicholas J.; Phinn, Stuart R.; DeWitt, Michael; Ferrari, Renata; Johnston, Renee; Lyons, Mitchell B.; Clinton, Nicholas; Thau, David; Fuller, Richard A. 2019. The global distribution and trajectory of tidal flats. Nature, 565(7738), 222-225. https://www.nature.com/articles/s41586-018-0805-8
Green macroalgae blooms have been persistently affecting the coasts of Brittany (France) since the 1970s, causing losses of income to the fishing and tourism sectors. Macroalgae typically proliferate in confined coastal waters when nitrogen inputs, carried by rivers, exceed the assimilative capacity of the ecosystem. Currents and tides uproot the macroalgae, leaving them stranded on beaches, where their decomposition can pose serious threats to animal and human health. Since 2002, the French Algae Technology and Innovation Center (CEVA) has been monitoring macroalgae surfaces on beaches at 95 sites using aerial photography. Along with this monitoring effort, the French government launched anti-algae plans in 2010, with the aim of reducing nitrogen inputs and ultimately containing the frequency and severity of green macroalgae blooms.
Long-term estimates of macroalgae surfaces are, however, not available, limiting the understanding of temporal trends in green tides and of the efficiency of macroalgae bloom reduction measures. Using freely and openly available Landsat imagery archives spanning 35 years (1984-2019), we automatically detected and quantified green macroalgae surfaces at four highly affected sites in Northern Brittany (Schreyers et al., 2021). Mean macroalgae coverage was characterized at annual and monthly scales. We demonstrate important interannual and seasonal fluctuations in macroalgae surfaces. Over the studied period, green macroalgae blooms did not decrease in extent at three of the four studied sites, despite an observed decrease in nitrogen concentrations in the rivers draining the study sites.
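As an illustration of the index-based detection step, a minimal sketch is given below; the NDVI threshold and the intertidal mask are illustrative placeholders, not the exact published configuration of Schreyers et al. (2021).

```python
import numpy as np

def macroalgae_mask(nir, red, intertidal_mask, ndvi_threshold=0.1):
    """Flag vegetated intertidal pixels via an NDVI threshold."""
    denom = np.where(nir + red == 0, np.nan, nir + red)
    ndvi = (nir - red) / denom
    return (ndvi > ndvi_threshold) & intertidal_mask

# surface area follows from the pixel count and the 30 m Landsat pixel:
# area_km2 = macroalgae_mask(...).sum() * 30 * 30 / 1e6
```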
In addition to this long-term trend analysis, we explored the potential of Sentinel-2 imagery for macroalgae detection and surface quantification. Sentinel-2 provides higher spatial resolution (10 to 20 m) as well as more frequent observations than the Landsat sensors. Given the high temporal variability and persistent cloud coverage in Brittany, increasing the amount of imagery available for detection can improve the accuracy of monitoring. We compare Sentinel-2 MSI derived macroalgae surfaces with (i) Landsat-8 OLI estimates and (ii) the aerial photography estimates from CEVA. We ultimately discuss the advantages and limitations of satellite-based monitoring of macroalgae proliferations and potential applications to other affected systems, for example in the Caribbean Sea, West Africa, and the Gulf of Mexico, where Sargassum bloom events have intensified since 2011.
References:
Louise Schreyers, Tim van Emmerik, Lauren Biermann, and Yves-François Le Lay. 2021. "Spotting Green Tides over Brittany from Space: Three Decades of Monitoring with Landsat Imagery" Remote Sensing 13, no. 8: 1408. https://doi.org/10.3390/rs13081408
The Geospatial Information Authority of Japan (GSI) has monitored surface changes nationwide by conducting interferometric SAR (InSAR) analysis. L-band InSAR is advantageous for measuring surface displacements in Japan because high coherence can be achieved even in mountainous areas with vegetation and steep slopes. For this reason the Advanced Land Observing Satellite-2 (ALOS-2), operated by the Japan Aerospace Exploration Agency (JAXA) since 2014, is well suited for our ground deformation monitoring. We have processed ALOS-2 data to monitor crustal deformation in Japan, and have detected surface displacements associated with earthquakes, volcanic activities, and land subsidence.
Volcano monitoring is one of our main missions. There are 111 active volcanoes in Japan, and hazardous eruptions have repeatedly occurred historically, such as the 2014 Ontake eruption that led to 58 casualties. It is therefore desirable to continuously monitor volcanic activities and detect anomalous signals in order to mitigate volcanic hazards. InSAR is a powerful tool to monitor volcanic activity, but it often suffers from atmosphere-related noise and the like, which can prevent the detection of anomalies. Overcoming this shortcoming of conventional InSAR-based monitoring is crucial for volcano observation.
In order to observe the spatio-temporal development of surface displacements that proceed slowly over the medium to long term on volcanoes, GSI has developed a software package for InSAR time series analysis (GSITSA) in which the small baseline subset (SBAS) algorithm is implemented. The analysis improves the signal-to-noise ratio by substantially reducing atmospheric noise, and thus enables us to monitor the temporal evolution of slow deformation. Against this background, we have conducted InSAR time series analyses using more than 20 SAR images acquired in the period from 2014 to 2021. The time series analyses have successfully unveiled surface displacements of a few cm/year or less at several active volcanoes, which can hardly be identified by conventional InSAR analysis. The striking point is that time series analysis using L-band data of ALOS-2, unlike C-band, can map surface displacements over volcanoes thanks to the high coherence even in mountainous areas. The information on the spatio-temporal variation of surface displacements can be a good indicator of the degree of volcanic activity, and thus ALOS-2 InSAR time series analysis is about to become one of the major tools for the evaluation of volcanic activity in Japan. In this presentation, we mainly report on our volcano monitoring by ALOS-2 InSAR time series analysis, and further demonstrate the effectiveness of L-band in comparison to C-band.
The lack of regular, area-wide observations of snow mass (snow water equivalent, SWE) is a main gap in the monitoring of the global cryosphere. Repeat-pass differential SAR interferometry (RP-InSAR) offers a well-defined approach for mapping SWE at high spatial resolution by measuring the path delay of the radar signal propagating through a dry snowpack. At C-band and lower radar frequencies, the absorption and scattering losses in dry seasonal snow are small, so that the change in the InSAR phase delay of the signal reflected from the snow/ground interface is directly related to the snow mass accumulated during the repeat-pass time span. A critical issue for the routine application of RP-InSAR is the temporal decorrelation caused by changes in the complex backscatter signal. At C-band, comparatively moderate snowfall amounts may already cause complete decorrelation, whereas this effect is of less concern at L-band. The use of L-band RP-InSAR for SWE mapping has so far been limited to case studies because of the lack of regular repeat observations over snow-covered land surfaces. In anticipation of the enhanced repeat observation capabilities of upcoming L-band SAR systems, such as ROSE-L, we conducted field campaigns in two Alpine test sites in order to consolidate the RP-InSAR retrieval tools and evaluate the product performance. The use of C-band RP-InSAR for SWE retrievals was studied as well. Among the topics addressed are the effects of snowfall intensity on temporal decorrelation, the impacts of topography and land cover type on the RP-InSAR phase and the resulting SWE, and possible effects of snow structural properties on the observed signal.
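For reference, a widely used first-order relation for dry snow (e.g. Guneriussen et al., 2001) links the interferometric phase change to the SWE change, with λ the radar wavelength and θ the local incidence angle in radians:

```latex
\Delta\varphi_{\mathrm{snow}} \approx -\frac{4\pi}{\lambda}\,\Delta\mathrm{SWE}\,\bigl(1.59 + \theta^{5/2}\bigr)
\quad\Longleftrightarrow\quad
\Delta\mathrm{SWE} \approx -\frac{\lambda\,\Delta\varphi_{\mathrm{snow}}}{4\pi\,\bigl(1.59 + \theta^{5/2}\bigr)}
```

Because the SWE change per 2π fringe scales with the wavelength, the longer L-band wavelength both reduces decorrelation and extends the unambiguous SWE range per fringe.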
In winter 2019-2020, the studies on SAR SWE retrievals focussed on L-band and C-band RP-InSAR applications in the Upper Engadin region in Switzerland, based on repeat-pass SAR time series acquired by ALOS PALSAR-2 and Sentinel-1, respectively. ALOS PALSAR-2 stripmap mode data of different tracks were acquired at 14- and 28-day repeat intervals, starting from snow-free conditions in October 2019 until the beginning of the main melting period in March 2020. On the days of the PALSAR overflights, snow and soil parameters were measured at several locations in snow pits and along transects. Continuous time series of the temporal evolution of snow accumulation, melt events, and meteorological parameters are available from automatic weather stations. In the case of dry snow, the coherence of the PALSAR data is preserved even during snowfall, although intensive snowfall events cause a substantial decrease in coherence. SWE retrievals based on 14-day PALSAR data show good agreement with in situ snow measurements for dry snow cases. However, a continuous time series throughout the winter could not be obtained, because the 2019/20 PALSAR data set suffers from a lack of continuity, comprising data from four different tracks, and because on some acquisition dates the coherence was affected by transient melt. Limitations in the spatial coverage are imposed by the steep topography, causing large gaps due to layover and foreshortening and calling for the acquisition of near-coincident data from ascending and descending passes. The Sentinel-1 6-day RP-InSAR data show suitable coherence during stable dry snow conditions, even in the case of light snowfall, but melt events and moderate to intensive snowfall cause complete decorrelation.
During March 2021, an experimental airborne campaign was carried out in the high Alpine test site Wörgetal/Kühtai near Innsbruck, addressing two complementary approaches for SWE measurements: (i) exploring the measurement concept of a geostationary C-band SAR mission for the retrieval of dense SWE time series; (ii) consolidating the assessment of the RP-InSAR-based SWE retrieval method and performance in support of mission preparation for ROSE-L and Sentinel-1 NG. The activities were performed by DLR and ENVEO within the ESA project SARSimHT-NG. Here we provide a first account of the studies related to objective (ii), as the airborne data enable the direct comparison of dual-frequency (C- and L-band) polarimetric measurements. Within the period 2 to 19 March 2021, multiple C- and L-band SAR data were acquired during F-SAR flights on 7 days, spanning two snowfall events of about 10 cm and 40 cm mean fresh snow depth. On the days of the F-SAR overflights, vertical profiles of physical snow parameters were measured in snow pits at different locations, as well as snow depths along transects.
In the presentation we report on the impact of snowfall and other environmental parameters on C- and L-band InSAR coherence over time periods ranging from several hours to days, and on the performance of the InSAR SWE retrievals performed over the test sites Wörgetal/Kühtai and Engadin. The campaign activities and the presented results are relevant for preparing snow monitoring activities with current and upcoming L-band SAR missions, including SAOCOM A/B, NISAR, and especially the Copernicus Expansion Mission ROSE-L. Furthermore, the investigations are of relevance for exploring the combined use of C- and L-band SAR data for monitoring the main snowpack parameters.
Rice is a staple cereal crop in Asia, and the continent accounts for about 90% of global rice production and consumption. Rice-planted area maps are an important input for estimating rice production for food security and the economy, and also for quantifying the carbon and water cycles and methane emissions from paddy fields. Since rice is mainly cultivated in the rainy season, Synthetic Aperture Radar (SAR) is a robust tool because it penetrates cloud cover. Recently, machine learning has been widely used in much land cover related research, and distinct results have been reported; a limitation, however, is that it needs a large amount of training data, whose collection is normally a time- and cost-consuming task. In this research, we utilized a combination of unsupervised and supervised classification to efficiently produce the training data. Training data were generated from k-means classification results for sampled regions, and then a random forest classifier was applied to ALOS-2 PALSAR-2 ScanSAR data to identify rice-planted areas in Southeast Asian countries including Cambodia, Lao PDR, Thailand, and Vietnam. It is also difficult to identify rice-planted areas in this region because of the high variation in rice phenology. In order to compensate for these variations, we used time-series metrics calculated from the SAR data. Classification models were fine-tuned for the target countries, and most of the models had an accuracy of 0.9 or better. Independent verification through visual interpretation using very high resolution images (VHRs) on Google Earth also showed a high degree of consistency with the classification results. The developed paddy field maps showed high accuracy in most countries and regions. In addition to these verifications, the resulting map was compared with other rice-planted area maps developed by the Vietnam National Space Center (VNSC) and CNES/CESBIO under the collaboration of the VNSC's 2019 Committee on Earth Observation Satellites (CEOS) chair initiative on rice monitoring. VNSC developed a rice map for Vietnam using time series Sentinel-1 data, and CNES/CESBIO developed a rice map, also using time series Sentinel-1 data, for four countries including Cambodia, Lao PDR, Thailand, and Vietnam under the ESA GEORICE project. The comparison showed good consistency between these products by visual interpretation. However, further quantitative comparison and verification using in-situ data and national statistics, as well as application to other regions, seasons, and years, is necessary to confirm the effectiveness of the proposed methodology.
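The semi-automatic training-data generation can be sketched as follows; this is a minimal illustration under stated assumptions, not the operational pipeline: "metrics" stands for per-pixel time-series statistics from the SAR stacks, and the analyst is assumed to have identified which k-means clusters correspond to rice.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def train_rice_classifier(metrics_sample, rice_cluster_ids, n_clusters=20):
    """Cluster time-series metrics (shape: samples x features), let an
    analyst label the clusters (rice_cluster_ids), and train a random
    forest on the resulting pseudo-labels."""
    km = KMeans(n_clusters=n_clusters, random_state=0).fit(metrics_sample)
    labels = np.isin(km.labels_, rice_cluster_ids).astype(int)
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(metrics_sample, labels)
    return rf

# rice_map = rf.predict(metrics_full).reshape(rows, cols)
```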
Agricultural production consumes the largest share of the world's water resources, using up to 90% of available freshwater. As demand for agricultural products is estimated to increase in the future, the accompanying intensification of production will put even more pressure on available water resources. This development makes the sector even more vulnerable to the increasing impacts of climate change on hydrological conditions and demands an even more efficient water use. Detailed knowledge of the spatial and temporal state of soil moisture, which is a key parameter in plant nutrition, agricultural production, and environmental research, can help to address these challenges. While in-situ measurement methods can be used for local monitoring, they are unsuitable for regional and even global application. In this regard, high-resolution surface soil moisture data for regional and local monitoring (down to precision farming level) are still lacking at a global scale. Here, the increasing spatial and temporal resolution of current and future Synthetic Aperture Radar (SAR) satellite missions (e.g. Sentinel-1, ALOS-2/4, NISAR, ROSE-L) can help to overcome this problem. Nevertheless, the SAR missions come with individual limitations regarding soil moisture estimation. While the C-band SAR mission Sentinel-1 provides high temporal and spatial resolution, it has reduced sensitivity to soil characteristics under certain vegetation coverage due to its short wavelength. On the other hand, L-band SAR missions like ALOS-2 better penetrate the covering vegetation, while their temporal and spatial resolution is reduced compared to C-band SAR missions.
Using low-pass filtering as well as vegetation detrending, we developed an algorithm using high-resolution Sentinel-1 SAR time series for estimating soil moisture based on a change detection approach, the so-called alpha approximation (Balenzano et al. 2011). Tested and validated over the Rur catchment, which comprises a diverse cropping structure and is located in the federal state of North Rhine-Westphalia in the west of Germany, it showed encouraging results with a mean R² of 0.46 and an unbiased RMSE (uRMSE) of 5.84%. Nevertheless, especially during the growing season, the estimated soil moisture shows a bias compared to the in-situ measured soil moisture for some crops. The reason for this is the changing sensitivity of the C-band backscattering signal to a uniform change in soil moisture during the growing period. To overcome this problem, L-band ALOS-2 data are used, being less affected by vegetation cover and more sensitive to changes in soil moisture. By combining both time series, the high spatial and temporal resolution of C-band Sentinel-1 is used to fill the gaps between the sparse ALOS-2 acquisitions, while benefiting from the lower vegetation sensitivity of L-band. In this regard, this study evaluates different methods for combining both microwave frequency bands, e.g. using L-band ALOS-2 soil moisture estimates as boundary conditions for the C-band Sentinel-1 soil moisture estimation, or matching changes in the C-band backscattering signal to the co-located changes in the L-band backscattering signal. The resulting algorithm for soil moisture estimation will be integrated into an automated workflow using a cloud-computing platform (e.g. Google Earth Engine, CODE-DE). This enables fast processing without the use of local computational infrastructure. It will be validated over an Apulian test site, reflecting the diverse agricultural landscape of the Mediterranean region.
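The change-detection core of the alpha approximation can be illustrated as follows; this is a minimal sketch under stated assumptions, not the full constrained inversion of Balenzano et al. (2011): between two dates over a temporally stable bare surface, the roughness term cancels, so the linear-scale backscatter ratio constrains the ratio of the squared, dielectric-driven alpha terms, and the final mapping from alpha to volumetric soil moisture via a dielectric model is omitted.

```python
import numpy as np

def alpha_series(sigma0_linear, alpha0=0.2):
    """Propagate the dielectric-driven alpha term through a backscatter
    time series using sigma0(t)/sigma0(t-1) = alpha(t)^2 / alpha(t-1)^2;
    alpha0 is an assumed initial value."""
    alphas = [alpha0]
    for t in range(1, len(sigma0_linear)):
        alphas.append(alphas[-1] * np.sqrt(sigma0_linear[t] / sigma0_linear[t - 1]))
    return np.clip(np.array(alphas), 0.05, 0.6)  # physically plausible bounds
```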
With the increasing deployment of CubeSats and SmallSats by both government and commercial entities, there is a need for innovative sensors, techniques, and applications. These solutions have to be compact and low power, and offer improved efficiencies. Rapid progress has been made in innovative methods and sensors for the detection of UV/visible/infrared radiation for applications in Earth remote sensing and other commercial areas.
Since the community has graduated from amateur experiments at universities to building highly capable CubeSats, it is time to look into possible science applications of these platforms. There is always a question of the calibration traceability of the large amount of data acquired from these CubeSat and SmallSat missions. A few of the missions are able to carry calibration hardware on board and produce calibrated data. However, some of these platforms are so compact that there is no space for any calibration hardware. One example is the use of other large missions as transfer radiometers for Earth-imaging CubeSat constellations. We also have to look at other options, such as vicarious methods and other innovative techniques, to acquire scientifically meaningful data.
The session is a high-level forum bringing together scientists and technologists involved in the research, design and development of CubeSats for Earth Remote Sensing applications. It comprises invited presentations by scientists who present data from past and present missions and demonstrate science traceability by comparing them with data from large missions. For example, data from TEMPEST-D, HARP and TROPICS will be presented together with their approaches to calibration and validation.
In total there will be five presentations: an overview, plus four missions presenting their latest data and their calibration approaches.
The HyTI (Hyperspectral Thermal Imager) mission, funded by NASA’s Earth Science Technology Office InVEST (In-Space Validation of Earth Science Technologies) program, will demonstrate how high-spectral- and spatial-resolution long-wave infrared image data can be acquired from a 6U CubeSat platform. The mission will use a spatially modulated interferometric imaging technique to produce spectro-radiometrically calibrated image cubes, with 25 channels between 8 and 10.7 µm (at 13 wavenumber resolution), at a ground sample distance of approximately 60 m. The HyTI performance model indicates narrow-band NEdTs of less than 0.3 K. The small form factor of HyTI is made possible by the use of a no-moving-parts Fabry-Perot interferometer and JPL’s cryogenically cooled HOT-BIRD FPA technology (the NEdT requirement can be met at the dark current associated with an FPA temperature of 68 K). Launch is scheduled for summer 2022. The value of HyTI to Earth scientists will be demonstrated via on-board processing of the raw instrument data to generate L1 and L2 products, with a focus on rapid delivery of data regarding volcanic degassing and land surface temperature.
HyTI uses JPL's T2SLS ‘HOT-BIRD’ focal plane array. T2SLS detectors exhibit high levels of temporal stability with respect to both gain and offset, making them an ideal candidate for HyTI, as the 6U form factor left no room for an onboard radiometric calibration mechanism. Instead, prior to launch, the HyTI instrument will be calibrated by deriving look-up tables relating target radiance and sensor response for a suite of FPA integration times and temperatures. On orbit, data will be calibrated to spectral radiance using these gain LUTs. It is anticipated that the radiometric offset (obtained via a deep-space look) will be updated each orbit.
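A minimal sketch of what such LUT-based calibration could look like in practice; the grid values, temperatures and integration times below are placeholders, not HyTI's actual pre-launch characterisation:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Placeholder gain LUT [(W m^-2 sr^-1 um^-1) / DN] indexed by FPA temperature (K)
# and integration time (ms); real values would come from pre-launch
# radiometric characterisation.
fpa_temps = np.array([66.0, 68.0, 70.0])
int_times = np.array([1.0, 2.0, 4.0])
gain_lut = np.array([[0.021, 0.011, 0.006],
                     [0.020, 0.010, 0.005],
                     [0.019, 0.009, 0.005]])
gain = RegularGridInterpolator((fpa_temps, int_times), gain_lut)

def calibrate(dn, offset_dn, fpa_temp_k, int_time_ms):
    """Convert raw counts to spectral radiance: L = g(T, t) * (DN - offset).

    The offset (from a deep-space look) would be updated on orbit, as in
    the HyTI plan described above.
    """
    g = gain([[fpa_temp_k, int_time_ms]])[0]
    return g * (np.asarray(dn, dtype=float) - offset_dn)

print(calibrate(dn=[1200, 1500], offset_dn=950, fpa_temp_k=68.0, int_time_ms=2.0))
```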
During operations the HyTI calibration will be validated using three sources: i) occasional lunar imaging events, ii) vicarious calibration with Landsat TIRS and Terra ASTER data sets, and iii) direct validation using the Jet Propulsion Laboratory’s Lake Tahoe and Salton Sea calibration sites. This will be important, as HyTI will process from L0 to L1 on orbit, so the calibration must be validated (and, if required, updated) in a timely manner (as most L0 data will not be transmitted to ground or archived).
In this presentation we will provide an overview of the HyTI measurement approach, the onboard data reduction and calibration approach and the spacecraft design.
The 3U Hyper-Angular Rainbow Polarimeter (HARP) CubeSat carries a compact hyper-angular imaging polarimeter (the size of a small loaf of bread) aimed at multiwavelength polarized imaging of Earth’s atmosphere from different viewing-angle perspectives. The HARP system consists of a wide-field-of-view lens, followed by a polarization-optimized Philips prism and three imaging sensors. Each sensor is furnished with a linear polarizer at a particular orientation, and a stripe filter simultaneously selects the sampled wavelengths and the along-track viewing angle for each pushbroom imager. HARP started data collection in April 2020 from the ISS orbit and is the first hyper-angular imaging polarimeter in space. The HARP payload produces pushbroom images at four wavelengths (440, 550, 670 and 870 nm) with up to 60 viewing angles at 670 nm and up to 20 along-track angles for the other three wavelengths. The HARP swath spans 94° in the cross-track direction, allowing for very wide coverage around the globe, and +/-57° in the along-track direction, providing wide scattering-angle sampling for aerosol and cloud particle retrieval. The HARP satellite is still active on orbit and has so far produced a large collection of scenes providing an unprecedented demonstration of the hyper-angular retrieval of cloud and aerosol properties from space.
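For context, the textbook reconstruction of the linear Stokes parameters from three co-registered images behind polarizers at 0°, 45° and 90° is sketched below; HARP's calibrated pipeline additionally corrects for the prism and lens polarization response, so this is illustrative only:

```python
import numpy as np

def linear_stokes(i0, i45, i90):
    """Textbook Stokes reconstruction from ideal polarizers at 0/45/90 deg.

    For an ideal polarizer at angle psi: I_psi = 0.5*(I + Q*cos(2*psi) + U*sin(2*psi)).
    """
    I = i0 + i90                      # total intensity
    Q = i0 - i90                      # 0/90 linear polarization difference
    U = 2.0 * i45 - i0 - i90          # 45/135 linear polarization difference
    dolp = np.hypot(Q, U) / I         # degree of linear polarization
    aolp = 0.5 * np.arctan2(U, Q)     # angle of linear polarization (rad)
    return I, Q, U, dolp, aolp

# Example with scalar intensities (works equally on co-registered image arrays)
print(linear_stokes(0.62, 0.55, 0.48))
```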
The HARP sensor was radiometrically and polarimetrically calibrated on the ground, and its post-launch radiometric calibration has been intercompared with other satellite sensors including MODIS, VIIRS and ABI, showing excellent results over its first year of operation. Multiple ground targets have been selected for calibration, including the high-altitude Lake Titicaca, on the border between Peru and Bolivia, which allows for polarized sunglint measurements with little atmospheric interference. A first cut at HARP’s post-launch polarimetric calibration has also been assessed by comparing the HARP measurements with multi-angle surface and atmospheric models. Results from this intercomparison will be discussed as part of this presentation.
In terms of level 2 performance, the Generalized Retrieval of Aerosol and Surface Properties (GRASP) algorithm has been used for the detailed retrieval of aerosol and surface properties from HARP CubeSat data. GRASP has been applied to multiple HARP scenes, producing retrievals of dust, smoke, pollution and other aerosol components, including measurements of aerosol optical depth, real and complex refractive indices, particle sphericity, single scattering albedo, etc. These retrievals will be presented and discussed in detail, showing dust transport from Africa, forest fire smoke, etc. HARP has also performed the first ever hyper-angular retrieval of cloud microphysical properties using cloudbow measurements. Unlike previous retrievals, the hyper-angular measurements from HARP allow for cloudbow retrievals at pixel resolution, rather than using a composite of a large area as was done with the POLDER instrument.
This presentation will discuss the performance of the HARP sensor in space, tracked by intercomparisons with other satellites and ground-based data sets. The HARP payload is a precursor to the HARP-2 polarimeter that will fly on the NASA PACE mission to collect global data on aerosol and cloud particles, which will also be introduced as part of this talk.
Temporal Experiment for Storms and Tropical Systems – Demonstration (TEMPEST-D) is a nearly 3-year NASA mission to demonstrate global observations from a multi-frequency microwave sensor deployed on a 6U CubeSat platform. TEMPEST was proposed in 2013 as an Earth Venture Instrument-2 to perform high-temporal-resolution observations of rapidly evolving storms using a constellation of five identical CubeSats with microwave sensors in a single orbital plane, providing 7-minute temporal sampling of rapidly developing convective activity over 30 minutes. To demonstrate the capability necessary to successfully operate the TEMPEST constellation, NASA’s Earth Venture Technology program funded the production, deployment and operation of TEMPEST-D, a multi-frequency microwave radiometer on a 6U CubeSat, which was delivered for launch less than 2 years after PDR.
TEMPEST-D was deployed from the ISS into low Earth orbit on July 13, 2018, and observed the Earth’s atmosphere nearly continuously until it re-entered on June 21, 2021. TEMPEST-D performed the first global Earth observations from a multi-frequency microwave radiometer on a CubeSat. The mission substantially exceeded expectations in terms of data quality, stability, consistency and mission duration. TEMPEST-D data were validated through inter-calibration with existing scientific and operational microwave sensors measuring at similar frequencies, including the four MHS sensors on NOAA-19, MetOp-A, -B and -C, as well as GPM/GMI. These validation results showed that TEMPEST-D had comparable or better performance than much larger operational sensors in terms of instrument noise, calibration accuracy, precision and stability throughout the nearly 3-year mission.
TEMPEST-D performed detailed observations of the microphysics of hurricanes, typhoons and tropical cyclones during three consecutive hurricane seasons. Simultaneous observations by TEMPEST-D and JPL’s RainCube weather radar demonstrated physical consistency and well-correlated passive and active microwave measurements of severe weather from the two CubeSats. Quantitative precipitation estimates retrieved from TEMPEST-D data are highly correlated with standard ground radar precipitation products. TEMPEST-D also performed along-track scanning measurements constituting the first space-borne demonstration of “hyperspectral” microwave sounding observations to retrieve the height of the planetary boundary layer.
The stability, accuracy and reliability of the TEMPEST-D instrument aboard a 6U CubeSat opens a breadth of possibilities for future science missions to substantially improve the temporal resolution of cloud and precipitation observations. Together, the TEMPEST-D and RainCube CubeSat missions demonstrated the necessary technology and scientific potential to deploy coordinated constellations of small satellites with heterogeneous microwave sensors to improve understanding of microphysical processes of both clouds and precipitation.
The NASA Time-Resolved Observations of Precipitation structure and storm Intensity with a Constellation of Smallsats (TROPICS) mission will provide nearly all-weather observations of 3-D temperature and humidity, as well as cloud ice and precipitation horizontal structure, at high temporal resolution to conduct high-value science investigations of tropical cyclones. TROPICS will provide rapid-refresh microwave measurements (median refresh rate of approximately 50 minutes for the baseline mission) over the tropics that can be used to observe the thermodynamics of the troposphere and precipitation structure for storm systems at the mesoscale and synoptic scale over the entire storm lifecycle. The TROPICS constellation mission comprises six CubeSats in three low-Earth orbital planes. Each CubeSat will host a high-performance radiometer to provide temperature profiles using seven channels near the 118.75 GHz oxygen absorption line, water vapor profiles using three channels near the 183 GHz water vapor absorption line, imagery in a single channel near 90 GHz for precipitation measurements (when combined with the higher-resolution water vapor channels), and a single channel at 205 GHz that is more sensitive to precipitation-sized ice particles. TROPICS spatial resolution and measurement sensitivity are comparable with current state-of-the-art observing platforms. Launches for the TROPICS constellation mission are planned in 2022. NASA’s Earth System Science Pathfinder (ESSP) Program Office approved the separate TROPICS Pathfinder mission, which launched on June 30, 2021, in advance of the TROPICS constellation mission as a technology demonstration and risk reduction effort. The TROPICS Pathfinder mission has provided an opportunity to check out and optimize all mission elements prior to the primary constellation mission. This presentation will describe the instrument checkout and calibration/validation plans and progress for the TROPICS Pathfinder mission and will include first-light mission results and comparisons with current operational instruments, such as the Advanced Technology Microwave Sounder (ATMS). We will also discuss plans for the constellation mission, including recent activities to improve the data latency for near-real-time forecasting applications.
SigNals Of Opportunity: P-band Investigation (SNOOPI) will be the first on-orbit demonstration of remote sensing using Signals of Opportunity (SoOp) in P-band (240-380 MHz). P-band SoOp has the potential for spaceborne remote sensing of root-zone soil moisture (RZSM) and snow water equivalent (SWE), two variables identified as priorities in NASA’s 2017-2027 Decadal Survey for Earth Science and Applications from Space. P-band is needed to penetrate through dense vegetation to sense RZSM. The longer wavelength of P-band also increases the unwrapping interval for phase observations, which may enable a new measurement of SWE.
SNOOPI is a technology validation mission with three specific goals to test important assumptions about P-band reflectometry from orbit. First, to collect data from orbit and validate the signal scattering model, most importantly the assumption of a coherent signal. Second, to investigate possible effects of RFI on the measurements, given the prevalence of other sources at these frequencies. Third, to demonstrate robustness to uncertainty in the source location and signal strength. The SoOp observable will be used to estimate the complex reflection coefficient over varied topographical land surface conditions and will be compared with forward models driven by in-situ data.
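As a sketch of the kind of forward model involved, the coherent reflection from a soil half-space is often approximated by a Fresnel coefficient damped by a Rayleigh roughness factor; the permittivity values and roughness convention below are assumptions for illustration, not SNOOPI's actual processor:

```python
import numpy as np

def coherent_reflectivity_h(eps_r, theta_deg, rms_height_m, wavelength_m=1.0):
    """H-pol Fresnel power reflectivity with Rayleigh roughness damping.

    eps_r: complex relative permittivity of the soil (moisture dependent),
    theta_deg: incidence angle, rms_height_m: surface RMS height.
    wavelength_m ~ 1 m is representative of P-band (240-380 MHz).
    """
    theta = np.radians(theta_deg)
    root = np.sqrt(eps_r - np.sin(theta) ** 2)
    r_h = (np.cos(theta) - root) / (np.cos(theta) + root)    # Fresnel (H-pol)
    k = 2.0 * np.pi / wavelength_m
    damping = np.exp(-2.0 * (k * rms_height_m * np.cos(theta)) ** 2)
    return np.abs(r_h) ** 2 * damping

# Wetter soil (higher permittivity) reflects more strongly:
print(coherent_reflectivity_h(eps_r=6 + 1j, theta_deg=35, rms_height_m=0.02))
print(coherent_reflectivity_h(eps_r=20 + 3j, theta_deg=35, rms_height_m=0.02))
```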
In support of the mission, analyses have been performed on key instrument and mission parameters. Evaluations of the orbital coverage of the spacecraft, based on the launch date, are used to create instrument recording schedules. Priorities for these recordings are overpasses of SMAP Calibration/Validation (cal/val) sites, arcs of data over winter snow-covered regions, and tracks over the eastern Contiguous United States (CONUS). Results from an SoOp retrieval analytical model are verified using a bit-level signal and instrument simulator. A ground-based station has been designed (soon to be deployed) to monitor the non-cooperative sources, in order to reduce risk due to uncertainty in knowledge of the broadcast power, spectrum shape, and orbital position of the transmitter source. SNOOPI is scheduled for delivery in August 2022.
Description:
To meet the ambition of the Earth Explorers of tomorrow being “World-class science missions for Earth”, it is time for a smart evolution of the traditional way of mission preparation, development and implementation. The objective of the “BoostFutureEO early phases” initiative is to tackle the following core aspects:
• Provide a long-term perspective for the preparation of Earth Explorers;
• Increase the maturity of innovative missions and competition;
• Help decrease uncertainty on implementation costs and reduce risks;
• Consider the benefits from some diversity in cost caps to allow more complex active instruments and new platform developments.
Based on this, a “global” and unique scenario for mission implementation consisting of five successive steps is suggested:
Step 1: New approach to a revision of the Living Planet Challenges (LPS) including observational gap analysis and preparation for the update of the EO science strategy
Step 2: New EO Mission Ideas (NEOMI)/On-boarding activities
Step 3: Call for ideas followed by Phases 0 for candidate missions and maturation activities for ‘commended’ missions
Step 4: Selection of missions for Phase A and implementation of Phase A
Step 5: Selection of missions for implementation followed by Phase B/C/D/E1
This global scenario is cyclical and will positively impact the Earth Explorers of tomorrow, starting from EE12. It capitalises on a strengthened interaction with the science community and additionally provides a long-term perspective for the preparation of Earth Explorer missions (and beyond).
Within this Agora session we will give a general overview, present the rationale for each of the respective steps with short pitches, and open the floor directly for an interactive discussion. (Detailed sessions on the update of the ESA Living Planet Challenges and the on-boarding of new mission ideas are foreseen within the science programme.)
Speakers:
• Vanessa Keuck (ESA)
Moderators:
• Florence Heliere (ESA)
• Vanessa Keuck (ESA)
Panelists:
• Kathy Whaler (ACEO Member)
• Mark Drinkwater (ESA)
• Pierluigi Silvestrin (ESA)
• Dominique Gillieron (ESA)
Description:
Coordination meeting between the three future HR thermal missions of ESA/EC, NASA, CNES/ISRO.
Company-Project:
FAO-SEPAL
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
SEPAL is a free and open-source cloud computing-based platform developed by the Food and Agriculture Organization of the United Nations (FAO). SEPAL allows users to efficiently query and process satellite data, tailor their products to local needs, and produce sophisticated and relevant geospatial analyses quickly. It combines Google Earth Engine with the open-source ORFEO Toolbox, Python, Jupyter, GDAL, R, R Studio Server, R Shiny Server, SNAP Toolkit, and OpenForis Geospatial Toolkit. Via SEPAL, users have access to cloud-computing resources to perform a variety of analyses useful for forestry, agriculture and land monitoring. The session will demonstrate the functionality of the SEPAL interface to create Sentinel-1 and Sentinel-2 composites and a classified land cover map. High-spatial-resolution Planet data from the NICFI data program will also be used. Advanced functionality using Jupyter notebooks will be discussed, time allowing.
Participants are requested to sign up for sepal.io and for Google Earth Engine.
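As a taste of the demonstration, a cloud-filtered Sentinel-2 median composite of the kind SEPAL builds on top of Google Earth Engine can be sketched with the Earth Engine Python API; the area of interest, date range and cloud threshold below are arbitrary examples:

```python
import ee

ee.Initialize()  # requires a registered Google Earth Engine account

aoi = ee.Geometry.Point([12.5, 41.9]).buffer(20_000)  # example AOI near Rome

# Median composite of low-cloud Sentinel-2 surface reflectance scenes
composite = (
    ee.ImageCollection("COPERNICUS/S2_SR")
    .filterBounds(aoi)
    .filterDate("2021-06-01", "2021-09-01")
    .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
    .median()
    .clip(aoi)
)

print(composite.bandNames().getInfo())  # e.g. ['B1', 'B2', ...]
```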
Company-Project:
MobyGIS/EURAC/Sinergise - eo4alps snow
Description:
During winter and spring it is important to monitor snow evolution, not only for outdoor activities or civil protection, but also for the hydrological balance of water resources. Eo4alps snow is an ESA-funded project aiming to deliver high-resolution, quasi-real-time snow monitoring to improve water resource management. It is based on a hybrid technology that merges the advantages of physical modelling with high-resolution, high-frequency Earth Observation snow products. In particular, it takes advantage of high-resolution binary snow cover maps from Sentinel-2, SAR data from Sentinel-1 and coarser-resolution daily optical images (e.g. Sentinel-3).
The core products are snow-covered area (SCA), snow water equivalent (SWE) and snow depth (HS), updated daily over the Alps, which can be easily accessed through a dedicated platform. In this demo we will present the platform for the visualisation and download of the maps.
Public and private institutions interested in snow quantification can benefit from the eo4alps snow project to better quantify the water resource stored in the target area, in order to improve the planning of water availability.
Description:
In the next decades, population growth is expected to amplify current pressures on critical resources such as fresh water and food, intensify the stress on land and marine ecosystems, and increase environmental pollution and its impacts on health and biodiversity. These problems will be further exacerbated by global warming and the likely impacts of climate change on human activities and the Earth system. Europe now has a unique opportunity to lead the global scientific efforts to address these challenges. In the next decade Europe will rely on the most comprehensive and sophisticated space-based observation infrastructure in the world, including an extraordinary and complementary suite of sensors on board the Copernicus Sentinel series, the ESA Earth Explorers, the coming meteorological missions and the various EO satellites planned to be launched by national space agencies and private operators in Europe. Ensuring that the scientific community takes full advantage of this unique opportunity and maximises its scientific and societal impact is urgent, and will require a significant collaborative effort and an integrated approach to science in which the synergistic use of EO satellite data, in-situ and citizen observations, advanced modelling capabilities, interdisciplinary research and new technologies will be essential elements. Sharing this vision, in January 2020 the EC and ESA launched a joint Earth System Science Initiative, formalised with the signature of a working arrangement between both institutions. The initiative aims at joining forces to advance Earth System Science and provide a coordinated response to the global challenges that society is facing at the onset of this century. To put words into action, four joint Flagship Actions were selected for kick-off in 2020 (i.e. polar changes and global impacts, biodiversity and vulnerable ecosystems, ocean health, extremes and climate adaptation). Additional themes are under discussion on important topics such as water resources, food systems, carbon and health. Implementation will be based on a co-programmed approach ensuring the coordination of relevant scientific activities, calls, and work plans initiated under the EC’s Horizon Europe and ESA’s FutureEO programmes. In this session, ESA and the EC (DG-RTD) will present the status of the initiative to the scientific community and will offer an opportunity to discuss the plans for implementation after 2023.
Speakers:
• Maurice Borgeaud, ESA
• Philippe Tulkens, EC DG-RTD
• Jean Dusard, EC DG-RTD
• Gilles Ollier, EC DG-RTD
• Diego Fernandez Prieto, ESA
• Nicole Biebow, AWI, Germany
• Johnny Johannessen, NERSC, Norway
• Jose Moreno, University of Valencia, Spain
• Petteri Vihervaara, SYKE, Finland
Description:
In this networking session we will survey the community to understand the current and upcoming needs with respect to Open EO Science, and how ESA can contribute to enabling better EO Science and building communities, leveraging technology. We will address topics such as the Living Planet Fellowship, the Science Hub, education, and available open platforms and data.
Description:
While many countries around the world continue to confront the challenges of COVID-19 and its variants, several developed and developing countries continue to face the consequences of natural hazards, including severe forest fires in Algeria, Greece, the Russian Federation, Turkey and the United States; destructive floods in Germany and the United States; powerful earthquakes in Haiti and Mexico; and droughts in Madagascar and Paraguay, to name a few.
In case of a natural crisis, disaster relief workers often do not have up-to-date situational awareness information at the local, regional, continental or even global level, which they urgently need for many areas of operational decision-making and situation assessment. The disaster management community therefore seeks access to openly available, reliable data sources that can make spatially and temporally local statements on its most important operational issues. The space community continues to develop innovative solutions that can contribute to disaster risk reduction, preparedness and disaster response efforts. The International Charter Space and Major Disasters, the Copernicus Emergency Management Service and Sentinel Asia address the needs for space-based information in case of disasters, and several services and sources of data have been put at the disposal of users worldwide.
This networking session aims to bring together experts and participants to discuss ways to use the solutions developed by the space community and to identify challenges in developing countries that inhibit the use of such solutions.
Company-Project:
ESA - Network of Resources (NoR)
Description:
This agora will describe the ESA processes and possibilities to develop innovative EO technology enabling new EO observation techniques for both institutional and New Space missions. Particular focus will be put on upstream technology, though not exclusively, and on its potential growth, which will require higher technology autonomy and higher operational efficiency. Technological trends and examples will also be presented.
Speakers:
Josep Rosello
Description:
It is estimated that in the decade 2010-2019 the world added 1,213 gigawatts of renewable power capacity, investing overall nearly USD 2.7 trillion (not counting large hydroelectric dams). Europe is leading the clean energy transformation by progressively building up a toolbox of policies addressing climate change, an ambitious set of policy targets and a leading industry, particularly in the fields of wind and ocean energy. These policies and technological innovation lay the foundation for the rise of new business models for energy-(data)-as-a-service, greentech and climate-tech companies. There is indeed a window of opportunity for further engagement by space actors, given the current policy push (the new European offshore and methane strategies, among others), the market pull, and the transformation and digitization of the energy value chain. The shift to monitoring and mitigating environmental impact across the energy value chain is likewise an emerging area to be covered, and an opportunity to support new fields of activity and engage with new stakeholders.
Hence, the proposed session aims to look holistically at the status of the clean energy market in Europe, its policies, main actors, upcoming trends, and the current and future role of space technologies, applications and services. Areas such as energy financing, market design and system planning, energy generation, transmission and distribution, as well as prosumer and end-user aspects, can be covered by a panel discussion and audience interaction.
Speakers:
Roland Roesch, Deputy Director of the Innovation and Technology Center, IRENA
Davide Magagna, Renewable Energy Expert, Italian Ministry of the Ecological Transition
Charlotte Bay Hasager, Professor at the Department of Wind Energy, DTU
Christine Sams, Science Impact and Applications Lead, National Oceanography Centre
Company-Project:
SAR vision - EOSAT 4 Sustainable Amazon
Description:
The use and results of the unique SarSentry deforestation and forest degradation system will be demonstrated for two ESA projects with lively animations. While existing monitoring systems, both optical and radar based, focus on deforestation, the SarSentry system is also able to detect and quantify forest degradation in near real time. As forest degradation is an indicator of current and future deforestation, this not only allows rapid intervention at sites where illegal activities are detected, but also provides valuable information on the environmental, biodiversity and carbon impact of business supply chains. SarSentry can therefore easily be integrated with other existing applications and support certification, REDD and offsetting initiatives that serve the objectives of the SDGs.
SarSentry is implemented in Colombia in collaboration with 14 local stakeholders to support the SDGs. Different radar-based products support the implementation of indicators for forest cover change, climate change and sustainable development at a local level.
SarSentry is currently also implemented in Pará, Brazil. The state government SEMAS will integrate the system with their own monitoring applications related to the detection of illegal activities, forest operations and agricultural land use in the Amazon forest. Their goal is to make monitoring more efficient and faster (rapid response), and to increase revenues through increased fiscalization and by offering new monitoring services to landowners.
In both countries SarSentry is expected to support REDD+ programs in the near future.
Company-Project:
EoPort
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
EOPORT proposes a classroom training session, in which we show service providers how they can offer their value-added service on the EOPORT near-real-time exploitation platform.
We first briefly show the steps involved in registering as a service provider on EOPORT and how one can deploy an example service, which is available on our public GitHub account.
Following this short overview, we do a hands-on exercise in which classroom participants use their own laptops with a browser to register the example service (prepared by us) and publish it on the EOPORT marketplace.
Participants will gain experience with the necessary steps for deploying, registering and publishing a service on the EOPORT platform.
After registering a service, participants will register a subscription to their own service to see the experience from a user’s perspective.
Duration: 70 minutes
Description:
We invite the climate remote sensing community at all career stages to meet and build connections across organisations. This is a unique opportunity for early career researchers at LPS to meet their peers, many of whom have spent significant parts of their careers working remotely during the pandemic.
Following lightning introductions from people in diverse climate EO careers, we will encourage conversations and mingling over lunch.
Description:
Meet the ice-sheet scientists and interact with ESA's animated globe.
Description:
LST_cci provides the capability to look at global temperature change over land spanning more than 20 years. Some places are experiencing significant changes, with cities at the forefront of climate change. This presentation highlights the changing temperatures and urban heat island effects of different cities.
Description:
Using Sentinel-2 time series for LAI and crop type mapping in connection with deep learning approaches over the upper part of the Danube, covering three years. Model simulations to estimate water need, biomass and yield.
Description:
This moderated roundtable will focus on the ESA CSC-4 (Copernicus Space Component) Programme, to be presented at CMIN22, and the Long Term Scenario, and will unpack the political, technological and impact priorities for programme implementation in this decade. It will feature representatives of the key stakeholders from the ESA Member States delegations.
Speakers:
Veronique Mariette (FR Delegation, CNES)
Helmut Staudenrausch (DE Delegation, DLR)
Francesco Longo (IT Delegation, ASI)
Ondrej Svab (CZ Delegation) – Remote
Beth Greenaway (UK Delegation, UKSA)
Company-Project:
OVL NG - OceanDataLab
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
The Ocean Virtual Laboratory has been actively developed over the past 7 years and now offers a virtual platform that allows oceanographers to discover the existence of, and then handle jointly, in a convenient, flexible and intuitive way, the various co-located EO and related model/in-situ datasets over dedicated regions of interest, from a multifaceted point of view.
A hands-on tutorial making use of the freely available online and standalone tools will be provided, going through a few typical case studies highlighting the potential of ESA EO data for upper ocean dynamics analysis.
The standalone tool tutorial will include the installation of SEAScope and the Python bindings, to exploit the analysis capabilities using Jupyter notebooks.
https://seascope.oceandatalab.com/
https://ovl.oceandatalab.com/
Company-Project:
Cropix - SARMAP
Description:
Introduction of the functionality and the map products.
Description:
Purpose: This Agora will be a forum for discussion at the critical time of the first Global Stocktake (GST) process. It will provide a status (the "Sprint" to GST 1) and a vision (the "Marathon") of both European (through Copernicus) and international efforts. It will be an opportunity for feedback on priorities for further development and research, and on the overall evolution of the contribution of Systematic Observations to the System Approach.
Context: The Paris Agreement and the Global Stocktake process provide an unprecedented opportunity for the Earth Observation community to demonstrate its added value in addressing a key societal issue: climate action. There is a clear policy need, a clear timeline with specific milestones, and a common ambition to address these challenges. We have started to put these ambitions into practice in Copernicus and through international coordination. Are we on the right track? What do we need to take into further consideration in the next decade? What are the priorities for research and the further development of the system?
Scope: The Agora will address the European and International efforts on Systematic Observation and Earth Observation support of the Paris Agreement with emphasis on supporting MVS for the Global Stocktake.
Relevance to LPS22: This proposal addresses the proposed themes on "The global climate", "Supporting national action towards Paris Goals" and "EO and the carbon cycle".
Format: The Agora will include female and male contributions from key stakeholders. Introduction: a series of high-level keynote interventions from the UNFCCC and the European Commission providing the policy context, and counterpart presentations on the overall implementation strategy from the EC and the CEOS SIT Chair (ESA). Two panels form the core of the Agora, including Copernicus partners, international organizations and research partners, followed by a 30-minute Q&A session. We would request a 2x90-minute Agora session, or 120 minutes minimum.
Speakers:
• Joanna Post, UNFCCC Secretariat
• Julia Marshall, German Aerospace Centre DLR
• Yasjka Meijer, European Space Agency/ESTEC
• Richard Engelen, European Centre for Medium-Range Weather Forecasts
• Lars Peter Riishojgaard, World Meteorological Organisation
Over the next decade, more than twenty spaceborne SAR missions are being planned or proposed by space agencies and commercial entities. Understandably, each mission is designed and optimized (orbit, crossing time, coverage, frequency, polarization, etc.) to meet the user needs and objectives of the sponsoring organization. An ideal situation would be an integrated capability providing multi-frequency, multi-polarization, well-calibrated interferometric global coverage at a very frequent rate to monitor both slowly and rapidly changing surface features. Even though this would be hard to achieve with one or two missions, it could become possible if the dozens of planned missions were coordinated as a “constellation”. By coordinating the characteristics of the different missions and properly selecting their exact orbits and nodal crossings, a very powerful integrated capability for users can be achieved that far exceeds what can be achieved by any single mission or by a combination of uncoordinated missions.
The idea of an international coordination of future SAR missions was first explored in a workshop held in May 2018 at Caltech, attended by 50 participants from all the agencies flying, or shortly to launch, spaceborne radar missions. There was a strong consensus that a coordinated effort would be of great value, and a number of specific recommendations were made. Subsequently, in May 2019, a session on the same topic was organised at the ESA Living Planet Symposium (LPS-19) in Milan, Italy, attended by more than 250 persons [1], indicating the strong interest in this type of activity. The second “International Coordination of Future Spaceborne SAR Missions” workshop was supposed to take place in 2020/21 but has been postponed due to COVID-19 to September 2022, and will be organised by ESA at ESRIN, Frascati, Italy.
The work has been organised in three international working groups (WG) covering:
• WG-1: Present and future data‐visibility and access
• WG-2: Future imaging systems, challenges and opportunities
• WG-3: Data exploration, Cal/Val, fusion and assimilation.
The outcomes of the activities of the three WGs have been regularly reported in workshops since 2018, as indicated in the previous sections. Since 2021, three thematic areas (TA) have been added to further deepen the collaboration across the WG topics; they cover the following domains:
• TA-1: Whole-earth system science and data mining
• TA-2: Targeted science and applications
• TA-3: Programme coordination
The presentation will go into the details of this activity to coordinate future spaceborne SAR missions and present the main results obtained so far.
On May 30, 31 and June 1, 2018, a workshop was held at the California Institute of Technology to explore the interest in, and value of, a more coordinated approach between the different organizations to achieve higher value for the user community. One of the main topics of this workshop was to make recommendations to improve the data visibility and accessibility of spaceborne SARs under international coordination.
To understand the issues related to data discovery and data access, Working Group 1 (WG-1) was established and compiled two tables: Table 1 relates to the discovery/accessibility of past archive data, and Table 2 relates to discovery, tasking, and access to present and future data. WG-1 found that it is feasible to construct a search engine to discover most data, past and present. Through this exercise, WG-1 also found that all agencies flying spaceborne SAR systems provide either all of their data free of cost, or subsets of them for specific purposes or under inter-agency agreements. However, if all the data had standard geometric and radiometric formats, their value would be significantly enhanced. As a result, WG-1 recommended that archival, present and future data should be easily accessible in a standard and common format, and should be easy to request and acquire electronically, free or at minimal cost. On the other hand, because the discovery of planned acquisitions is more problematic, WG-1 recommended developing a mechanism to coordinate future data acquisition and coverage by present and planned systems, as well as ground reception and processing systems, for mutual benefit. In several cases, coordination between systems has led to significant benefits, particularly for polar ice studies (Polar Space Task Group) and rapid response to natural hazards. It would be of great value if a mechanism could be put in place to extend this coordination to a larger number of applications that can benefit from expanded coverage, shorter repeat times and multiple frequency/polarization observations.
A critical element is to develop common standards for data formatting, geodetic projection and radiometric calibration. Another important element is the ability to search for data available from all systems in a common app, to gain insight into planned future data acquisitions and, with appropriate credentials, to request new acquisitions. In addition, there is a strong need for a high-resolution, high-accuracy elevation model with an accurate time stamp for improved SAR processing and data inter-comparison.
This paper gives an overview of the WG-1 analysis and its recommendations to improve the data visibility and accessibility of spaceborne SAR under international coordination.
A number of national and international space agencies and space organisations that operate Synthetic Aperture Radar (SAR) sensors have come together to improve coordination between SAR missions with interoperable and/or complementary characteristics. In the shorter term, the coordination focuses on currently operational and near-future missions (e.g. Sentinel-1, ALOS-2/4, SAOCOM-1, NISAR, BIOMASS, TSX-NG, CSK-2G), and, for the next decade, takes into consideration missions still to be defined. Working groups are tasked to address the visibility of and access to SAR data products (WG1); opportunities and challenges of future imaging systems (WG2); and the exploration of multi-mission SAR data, including Cal/Val, data fusion and assimilation aspects (WG3).
Three Thematic Area (TA) subgroups have been established to support the working group activities with cross-cutting topics relating to science and applications:
• TA-1 – Polarimetric and Multi-frequency SAR – covers applications where polarimetric or multi-frequency backscatter intensity and/or polarimetric phase constitute the main measurements. TA-1 addresses applications such as Forestry, Agriculture, Wetland and Other Land Uses (i.e. the IPCC “AFOLU” themes), plus those relating to Ocean and Sea Ice.
• TA-2 – Interferometric SAR – covers applications where interferometric phase constitutes the main measurement. TA-2 covers the traditional InSAR-driven applications such as Solid Earth (incl. crustal deformation, volcanoes), Glaciers/Ice Caps, Geo-hazards, and Permanent Scatterers.
• TA-3 – Program and mission coordination.
The TA subgroups are to work closely with the relevant science communities to identify SAR coordination actions that could be taken in the near/mid-term (in 2020s) that would improve science/applications overall, as well as to identify gaps and missing critical elements with current and near-future missions. Looking beyond the missions currently in operation or development, the TA subgroups are to identify and prioritize the goals and objectives for SAR coordination that would vastly improve science by coordinating long-term (2030+) Earth observing SAR missions.
A first-cut gap study was undertaken in 2021 to assess the SAR information requirements for six application areas: Glaciers and Ice Caps; Solid Earth Science; Hazards; Forest and Biomass; Wetlands; and Agriculture and Soil Moisture [1]. The study resulted in a number of observations:
1. The most important requirement highlighted in all cases is the need to reduce observation revisit times to the order of days, or less. For the applications that rely on SAR interferometry, temporal decorrelation constitutes a major limiting factor, preventing, e.g., the tracking of fast-moving glacial or seismic events. Temporal decorrelation is also the main obstacle to the use of InSAR for forestry applications.
2. For the glacier and solid Earth applications, it was noted that systematic observations from at least 3 viewing angles are required to characterise the displacement field in three dimensions (see the sketch after this list), while maintaining a zero interferometric baseline for each imaging geometry in the temporal stack. While such data can be obtained by observations from ascending and descending orbits, and alternating right- and left-looking platform roll, no mission currently includes such a plan.
3. For forestry applications, a variety of interferometric baselines would enable tomographic retrieval of forest structural parameters. This can be achieved by relaxing the orbit control, which however affects other InSAR applications.
4. Polarimetric (full scattering matrix) observations constitute a critical requirement for measurements of soil moisture, and are desired also for agriculture (crop identification). It was acknowledged that PolSAR and Pol-InSAR applications are under-developed due to the lack of global polarimetric time-series data, and that basic research in this field should be stimulated.
5. Each SAR frequency contributes unique and complementary measurements, but multi-frequency applications are also under-developed due to a lack of data for research. Simultaneous or near-simultaneous multi-frequency observation campaigns, in particular for land cover related applications, are therefore strongly encouraged.
6. It was finally noted that open data policies and free public access is vital for data democracy and science development.
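To make observation 2 concrete, the sketch below shows how a 3-D displacement vector can be recovered by least squares from line-of-sight (LOS) measurements acquired under three or more viewing geometries; the unit vectors and displacement values are illustrative only:

```python
import numpy as np

# Each row is the LOS unit vector (east, north, up) of one viewing geometry,
# e.g. ascending right-looking, descending right-looking, and one
# left-looking pass; values are illustrative.
los = np.array([
    [ 0.61, -0.11, 0.78],
    [-0.62, -0.10, 0.77],
    [ 0.55,  0.15, 0.82],
])
d_los = np.array([0.012, -0.004, 0.009])  # observed LOS displacements (m)

# Solve los @ u = d_los for the 3-D displacement u; with exactly three
# well-separated geometries the system is determined, with more it becomes
# a least-squares fit.
u, *_ = np.linalg.lstsq(los, d_los, rcond=None)
print("east/north/up displacement (m):", u)
```

With only ascending and descending right-looking passes (two rows), the matrix is rank-deficient and the north component is effectively unconstrained, which is exactly why the study calls for at least three viewing angles.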
The TA activity for 2022 includes undertaking a more comprehensive SAR user survey and a public open online questionnaire is available on the International SAR Coordination group website [2]. The results will be summarised to identify information gaps associated with each thematic application area, and subsequently, provide recommendations to the working groups on how these gaps can be mitigated by coordination of current and already planned missions, and for the next decade, with a vision for a comprehensive constellation system that would address the outstanding scientific requirements.
References:
[1] Rosenqvist A., Jones C., Rignot E., Simons M., Siqueira P. and Tadono T., 2021. A Review of SAR Observation Requirements for Global and Targeted Science Applications. International Geoscience and Remote Sensing Symp. (IGARSS’21). FR3.O-5.3, Virtual, 16 July, 2021.
[2] https://nikal.eventsair.com/NikalWebsitePortal/second-workshop-on-international-coordination-for-spaceborne-synthetic-aperture-radar/esa/ExtraContent/ContentPage?page=10
International coordination and collaboration as an essential element of the Earth Observation Service Continuity
Guennadi Kroupnik, Éric Dubuc, Mays Ahmad, Patrick Plourde, Geneviève Houde, Daniel De Lisle.
The Earth Observation Service Continuity (EOSC) initiative aims to identify successor solutions for Canada’s next generation RADARSAT program. Due to the wide range of user needs, the EOSC initiative is considering a diversified portfolio of access to imagery: free and open data, commercial purchase of data, international cooperation, and a dedicated Synthetic Aperture Radar (SAR) system. This paper will present an overview of the options analysis that led to the identification of international partners and their respective Earth Observation (EO) programs, highlight the current status, and outline the path forward.
It is anticipated that certain user department needs can be met through existing or planned international space-based data assets. Potential collaboration scenarios to meet Canada’s Harmonized User Needs (HUN) include, but are not limited to, bartering of data, harmonization of requirements/technical solutions for missions, harmonization of data products, prototyping with sample Areas of Interest (AOI) to address downlink feasibility, access to infrastructure, etc.
In leveraging international capabilities to augment Canada’s Earth observation capabilities, the following key benefits arise:
• Strengthened international relationships with key partners
• Facilitation of harmonization between nations and their respective EO programs
• Stimulation of social and environmental benefits from the use of EO data
• Increased resilience through access to multiple sources of data
• Improved compliance with end-user requirements by providing the optimal mix of multi-frequency data
• Increased resilience through access to multi-frequency capabilities
The economic potential of space-based data has grown significantly in recent years. The Canadian Space Agency (CSA) is committed to continuing to build strong partnerships with international stakeholders to best deliver data that meets the needs of the community and government priorities such as climate change.
Since the 1980s, Germany has built up considerable expertise in spaceborne SAR missions. The Shuttle Imaging Radar missions SIR-C/X-SAR, in cooperation with NASA/JPL, consisted of two flights in April and September 1994 aiming to demonstrate the potential of fully polarimetric radar systems in three different frequency bands for a variety of applications. Germany developed the X-SAR radar system in cooperation with Italy; the USA developed the radar systems in C- and L-band. The combination of L-, C- and X-band data takes acquired over selected test sites, with their different polarizations, remains unique to this day. SRTM, the Shuttle Radar Topography Mission, was a highlight of Germany’s radar activities in cooperation with NASA/JPL. Space Shuttle Endeavour took off on February 11, 2000, with the goal of mapping the topography of the Earth’s surface using two radar systems. Secondary antennas mounted at the end of a 60 m long boom allowed a topographic mapping of 80% of the Earth’s land surface with a height accuracy of 10 m. Germany participated in the mission with an interferometric X-band radar system (X-SAR) that acquired approximately 40% of the land surface with an increased accuracy of approximately 6 m.
The actual highlight of the German spaceborne radar program is the successful implementation of the TanDEM-X mission. The first formation-flying radar system was built by extending the TerraSAR-X mission with a second, almost identical satellite, TanDEM-X. The resulting large single-pass SAR interferometer features flexible baseline selection, enabling the acquisition of highly accurate cross-track interferograms unaffected by temporal decorrelation and atmospheric disturbances. The so-called Helix formation combines an out-of-plane (horizontal) orbital displacement, achieved through small differences in the right ascension of the ascending nodes, with a radial (vertical) separation achieved through different eccentricity vectors, resulting in a helix-like relative movement of the satellites along the orbit. The primary objective of the mission, the generation of a global Digital Elevation Model (DEM) with unprecedented accuracy, was achieved back in 2016. The obtained results confirm the outstanding capabilities of the system, with an overall absolute height accuracy of just 3.49 m, well below the 10 m mission specification. Excluding highly vegetated and snow-/ice-covered regions, which are characterized by radar wave penetration and consequently strongly affected by volume decorrelation, it improves to 0.88 m. The relative height accuracy, which quantifies the random noise contribution within the final DEM, is also well within specifications. Finally, the product is virtually complete, with 99.89% coverage. Comparisons of the TanDEM-X DEM with SRTM, or among multi-temporal TanDEM-X data, revealed dramatic changes and the high dynamics of the Earth’s topography, especially over ice and forests. It has therefore been decided to acquire data for a global change layer that will become available in 2022. Despite being well beyond their design lifetime, both satellites are still fully functional and have enough consumables for several additional years. Therefore, bistatic operations continue with a focus on changes in the cryosphere and biosphere. The TanDEM-X mission was, and remains, the first distributed SAR system in space. It demonstrates DLR’s capabilities in the development of highly innovative mission concepts in response to demanding mission objectives, in leading the project realization in the face of numerous challenges, and in directing and monitoring the entire generation process, from global data acquisition through to the final digital elevation model (DEM).
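The sensitivity of such a single-pass interferometer to topography is commonly summarised by the height of ambiguity, i.e. the height difference corresponding to one full interferometric fringe. A small worked sketch with representative, not mission-exact, X-band numbers:

```python
import numpy as np

def height_of_ambiguity(wavelength_m, slant_range_m, incidence_deg,
                        perp_baseline_m, bistatic=True):
    """HoA = lambda * r * sin(theta) / (m * B_perp),
    with m = 1 for a bistatic single-pass pair (the TanDEM-X case)
    and m = 2 for monostatic repeat-pass interferometry."""
    m = 1.0 if bistatic else 2.0
    theta = np.radians(incidence_deg)
    return wavelength_m * slant_range_m * np.sin(theta) / (m * perp_baseline_m)

# Representative X-band values: 3.1 cm wavelength, ~600 km slant range,
# 40 deg incidence, 300 m perpendicular baseline -> roughly 40 m per fringe.
print(height_of_ambiguity(0.031, 600e3, 40.0, 300.0))
```

The flexible Helix baseline selection mentioned above effectively tunes B_perp, trading height sensitivity against phase-unwrapping robustness.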
TanDEM-X can be seen as a precursor for Tandem-L, a pioneering mission for climate and environmental research. Tandem-L is built on a very strong science case developed in a joint effort by eight Helmholtz research centers and an international team of more than 100 scientists. Aiming at the observation of dynamic processes in the bio-, geo-, hydro- and cryosphere, this mission requires a novel SAR instrument concept based on digital beamforming in combination with a large reflector antenna. A swath width of up to 350 km enables weekly global coverage as a precondition to observe Earth’s system dynamics. The Tandem-L mission concept is based on the two SAR satellites operating in L band allowing for innovative imaging modes like polarimetric SAR interferometry and multi-pass coherence tomography for determining the vertical structure of vegetation and ice. A unique feature and major challenge of the Tandem-L mission is the systematic generation of higher-level products, including forest height, structure and biomass, various surface deformation and displacement products, as well as digital elevation models. Additional products for applications in the hydro- and cryosphere are expected to be developed by the scientific community in the course of the mission.
Given the great success of TanDEM-X, a novel concept for an X-band SAR mission, denoted the High-Resolution Wide-Swath (HRWS) mission, has been proposed. It consists of a powerful main satellite acting as an illuminator, as well as three much smaller receive-only relay satellites flying in formation. The main satellite features up to 1200 MHz bandwidth and a frequency-scanning functionality (F-SCAN) combined with multiple azimuth phase centers (MAPS), enabling high-resolution wide-swath imaging. The small satellites, following the MirrorSAR concept, operate as radar transponders and allow an effective, low-cost implementation of a multistatic interferometric system for high-resolution DEM generation and for secondary mission objectives such as along-track interferometry. With HRWS, nearly 40 years of successful X-band SAR development in Germany will continue. The mission will thus provide data continuity for scientific, institutional and commercial users.
Company-Project:
WWU - openEO platform
Description:
• openEO Platform allows users to process a wide variety of Earth observation datasets in a cloud platform. This large-scale data access and processing is performed on multiple infrastructures, which all support the openEO API.
• Users interact with the API through clients. This demonstration shows the usage and capabilities of the Web Editor and the R-Client.
• The Web Editor is a web tool to interactively build processing chains by connecting the openEO processes visually. This is the most intuitive way to get started with openEO Platform.
• The R-Client is the openEO Platform entry point for R users and is available through the Comprehensive R Archive Network (CRAN). It facilitates interaction with the openEO API from within the R programming language, with the advantages of the RStudio IDE and R’s well-known geospatial libraries.
• The classroom training teaches users how to accomplish their first round trip through a typical openEO Platform workflow: login to openEO Platform, data and process discovery, process graph building adapted to common use cases, processing of data, and visualization of results.
• By combining the approaches of the visually interactive Web Editor and the programming-based R-Client, users are introduced stepwise to the concepts of openEO Platform and will gradually understand the logic behind openEO.
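For orientation, the same round trip can also be expressed with the openEO Python client; the backend URL, collection ID and band names below are examples only:

```python
import openeo

# Login / connect to an openEO Platform backend
connection = openeo.connect("https://openeo.cloud").authenticate_oidc()

# Data and process discovery
print(connection.list_collection_ids()[:5])

# Build a processing chain: an NDVI cube over a small example extent
cube = connection.load_collection(
    "SENTINEL2_L2A",
    spatial_extent={"west": 7.0, "south": 51.9, "east": 7.1, "north": 52.0},
    temporal_extent=["2021-06-01", "2021-07-01"],
    bands=["B04", "B08"],
)
ndvi = cube.ndvi()

# Process the data and download the result for visualization
ndvi.download("ndvi.nc")
```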
Description:
The aim of this demo is to present to the audience the features and functionalities of the SNAP software that can be used for classification. During the demo, participants will learn how to classify S2 data using the classifiers available in SNAP, supported by in-situ measurements.
Description:
Platforms for the Exploitation of EO data have been developed by public and private companies to foster the usage of EO data and expand the market of Earth Observation-derived information. All platforms have their user communities, but we are still in the pioneering phase rather than at the mainstream usage level. This session will discuss which obstacles need to be tackled to boost platform usage at the federation level, across platforms. The federation perspective is crucial, as many major challenges require the integration of data or services from several platforms. Disasters are linked to climate change, climate change impacts infrastructure, infrastructure again is linked to land use, and land use is linked to public health. Approaches such as the Network of Resources and common design principles such as the Earth Observation Exploitation Platform Common Architecture (EOEPCA) have great potential to help grow user communities, as they promise relevant resources at hand, interoperability between platforms, and hidden complexity, allowing existing and new users to focus on their challenges rather than the technology. So how do we make the most of our platforms? How do we grow them towards mainstream use?
Panelists:
Ingo Simonis (OGC) - Panel lead
Günther Landgraf (ESA)
Jeroen Dries (Vito)
Pedro Goncalves (terradue)
Wendy Carrara (Airbus)
Tiago Quintino (tbc) (ECMWF)
Company-Project:
EUMETSAT/ECMWF/ESA
Description:
Experts will guide the audience on a journey during which the monitoring and forecasting of recent intense pollution events, wildfires and dust storms will be demonstrated with Copernicus data and services. Focus will be given to recent events which have posed environmental threats and have had an impact in the media. The demo will address key steps including access, discovery, data handling, visualization and animation of satellite and model-based data. The demo will make use of Jupyter notebooks, which will allow for an effective data-driven storytelling. The demo material will be accessible live to participants and will be freely available.
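A minimal sketch of the notebook pattern behind such data-driven storytelling, assuming a generic NetCDF file of an aerosol variable (the file and variable names are placeholders, not the actual demo material):

```python
import xarray as xr
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

ds = xr.open_dataset("cams_aod_example.nc")      # placeholder CAMS-like file
aod = ds["aod550"]                               # placeholder variable name

fig, ax = plt.subplots()
im = aod.isel(time=0).plot.imshow(ax=ax, vmin=0, vmax=2, cmap="magma")

def update(t):
    im.set_array(aod.isel(time=t).values)        # swap in the next time step
    ax.set_title(str(aod.time.values[t])[:16])   # timestamp as frame title

anim = FuncAnimation(fig, update, frames=aod.sizes["time"], interval=200)
anim.save("aod_event.gif", writer="pillow")      # embeds nicely in a notebook
```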
Winds from Aeolus lidar SEA surface reFLECTance (SEA-FLECT) is an ESA-funded project with the aim of exploring the potential of Aeolus observations to monitor sea surface winds (https://aeolus-surface-wind.aer.com/index.html).
This project aims to explore Aeolus observations beyond the main mission objectives: where the main mission objective exploits the Doppler shift of the lidar return signal, here the relation between the intensity of the lidar return signal and surface wind speed is explored.
The physical basis of the SEA-FLECT project is the high reflectance of ocean white caps in the UV part of the EM spectrum. The fraction of the surface covered by white caps is strongly dependent on the surface wind speed: the larger the wind speed, the larger the white cap fraction.
Regions with larger surface wind speeds can therefore be expected to be more reflective than regions with lower wind speeds.
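This dependence is steep: a widely used empirical parameterization (Monahan and O'Muircheartaigh, 1980), shown below for illustration only and not necessarily the relation adopted in SEA-FLECT, scales the whitecap fraction with roughly the 3.4th power of the 10 m wind speed.

    # Whitecap coverage vs. wind speed, Monahan & O'Muircheartaigh (1980):
    # W = 3.84e-6 * U10**3.41 (illustrative; not necessarily the SEA-FLECT relation)
    for u10 in (5.0, 10.0, 15.0, 20.0):      # 10-m wind speed in m/s
        w = 3.84e-6 * u10 ** 3.41            # whitecap fraction (dimensionless)
        print(f"U10 = {u10:4.1f} m/s -> whitecap coverage = {100 * w:.2f}%")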
Exploring the potential of the Aeolus surface return to monitor surface wind speed over the open ocean requires A) a well radiometrically calibrated Aeolus return from the range bin which contains the ocean surface, B) a good characterisation of the atmospheric contribution to this return, and C) a good characterisation of its sub-surface contribution.
To investigate this potential, suitable Fields of View first need to be identified. This is done through a decision tree which classifies Aeolus observations according to the presence of clouds, aerosols, and the signal-to-noise ratio.
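A minimal sketch of such a screening step is given below; the field names and thresholds are hypothetical illustrations, not the project's actual decision tree.

    # Hypothetical screening of an Aeolus field of view for surface-wind work;
    # criteria names and thresholds are illustrative assumptions.
    def suitable_for_surface_wind(obs: dict) -> bool:
        if obs["cloud_flag"]:             # reject cloud-contaminated profiles
            return False
        if obs["aerosol_load"] > 0.1:     # reject strong aerosol contamination
            return False
        if obs["snr"] < 5.0:              # require sufficient signal-to-noise
            return False
        return True

    print(suitable_for_surface_wind({"cloud_flag": False, "aerosol_load": 0.03, "snr": 8.2}))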
Aeolus surface returns are being analysed over regions at various places in the global oceans which are characterised by low chlorophyll concentrations (so-called oligotrophic regions).
In addition, the return signal over different land surfaces is being analysed to foster the understanding of the information content of the signal, working towards a basic understanding of the absolute calibration of the observations.
During the proposed oral presentation, the SEA-FLECT project will be briefly introduced, together with the latest results of the analysis.
Dust modelling faces a series of challenges that should be addressed adequately towards improving the performance of numerical simulations in terms of reproducing dust life cycle components. Among several factors, well documented in the literature but not yet well resolved, winds constitute a critical aspect since they act as the determinant force on dust emission and transport. In the framework of the NEWTON (ImproviNg dust monitoring and forEcasting through Aeolus Wind daTa assimilatiON) ESA project, emphasis is given to the potential improvements in regional dust simulations attributed to the assimilation of Aeolus wind profiles. More specifically, we are performing short-term dust forecasts for two regions of interest (ROI): the broader Mediterranean basin (ROI1) and the W. Sahara-Tropical Atlantic Ocean (ROI2). The WRF initial and boundary conditions are derived from the ECMWF IFS outputs produced with (hel4) and without (hel1) the assimilation of Aeolus HLOS winds. By contrasting the hel4 and hel1 experiments, we are assessing the impact of Aeolus assimilation on key meteorological parameters affecting dust emission over sources and dust transport over downwind areas. Dust numerical outputs, from both model configurations, are evaluated against ground-based (AERONET, PollyXT and EMEP) and spaceborne (LIVAS, MIDAS) observations in order to objectively assess the positive impact of Aeolus winds assimilation on dust forecasts and monitoring. For ROI1, in October 2020, there is strong evidence of a better representation of the spatiotemporal patterns of Mediterranean desert dust outbreaks in the hel4 experiment. Such improvements are driven by “corrections” of the wind fields throughout the atmosphere. An identical analysis for ROI2 is under preparation, taking advantage of the wealth of data acquired in the framework of the Joint Aeolus Tropical Atlantic Campaign (JATAC), which took place in Cape Verde (September 2021). Finally, NEWTON progress, activities and achievements are disseminated via the official website (https://newton.space.noa.gr) and the EO4Society portal (https://eo4society.esa.int/).
Global aerosol monitoring infrastructure is regularly complemented by new instrumentation to deepen our understanding of aerosols, which are key agents of the global radiative budget. Here, we introduce LARISSA (Lidar Aerosol Retrieval based on Information from Surface Signal of Aeolus) as a complementary and independent retrieval of AOD (Aerosol Optical Depth) for the Aeolus mission. LARISSA relies on the combination of lidar surface returns (LSR) from Aeolus and collocated near-surface wind speeds over oceans to retrieve AOD. The proposed AOD retrieval is based on the parametrization of sea surface reflectance for non-nadir incidences (~37.5° for Aeolus) and is applied to the intensive observation period (IOP) of Aeolus in autumn 2019. First, we identified abundant LSR signals over oceans, where 19-34% (depending on the orbit) markedly exceeded the noise estimates during the IOP. Notably, even this one week of observations revealed distinct LSR gradients not only between land (strong signal) and ocean (weak signal), but also palpable gradients between very bright (sea ice, fresh snow), bright (arid ecosystems, bare land) and dark land surfaces (productive ecosystems). Second, we discerned a reasonable agreement between the AODLARISSA estimates and the reference AOD from AEL-PRO (an optimal-estimation-based retrieval from extinction profiles of Aeolus) in the 14-30 m/s wind speed range for the entire IOP. This finding indicates that the sensitivity of LSR to near-surface wind speed is appreciable only in this mid-to-high wind speed range. Overall, the findings of reasonable agreement between AODLARISSA and AODAELPRO are promising because the final LARISSA algorithm will be valuable for aerosol studies, as it does not require assumptions about aerosol type or microphysics. Moreover, we identified the notable ability of Aeolus to distinguish the surface type, depending on the strength of the LSR signal, even for a very short period of observations. The latter finding potentially opens a window toward the auxiliary use of lidar data for land cover classification research.
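As a rough illustration of the principle behind such a retrieval: if the aerosol-free surface reflectance expected for the observed wind speed is known, the AOD follows from the two-way atmospheric transmission along the slant path. The sketch below assumes this simple inversion with a hypothetical expected-reflectance value; it is not the actual LARISSA algorithm.

    import math

    INCIDENCE_DEG = 37.5                      # Aeolus off-nadir incidence

    def retrieve_aod(lsr_measured: float, lsr_expected: float) -> float:
        """Invert the two-way transmission T^2 = exp(-2 * AOD / cos(theta))."""
        mu = math.cos(math.radians(INCIDENCE_DEG))
        return -0.5 * mu * math.log(lsr_measured / lsr_expected)

    # Example: the measured surface return is 70% of the aerosol-free expectation
    print(f"AOD = {retrieve_aod(0.7, 1.0):.3f}")   # ~0.14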
Besides the primary objective of Aeolus to measure horizontal wind profiles on a global scale, Aeolus can also provide profiles of aerosol and cloud properties as spin-off products. With its high-spectral-resolution lidar ALADIN onboard, it is the first space mission able to directly and independently measure vertical profiles of the particle extinction and backscatter coefficients with the so-called HSRL (high spectral resolution lidar) technique. The power of this technique is that, in contrast to elastic backscatter lidars like the one on CALIPSO, no aerosol/particle type has to be assumed prior to the retrieval of the particle optical property profiles. It is therefore the first time that the so-called lidar ratio (extinction-to-backscatter ratio) can be directly retrieved from space, thus allowing particle typing.
However, Aeolus has the drawback that circularly polarized light is emitted but only the co-polar component is detected. This leads to a loss of signal in the case of polarizing particles like mineral dust, volcanic ash, or ice crystals. Due to this loss in signal, the backscatter coefficient is underestimated while the particle-specific lidar ratio is overestimated. This effect makes particle typing challenging.
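The size of this bias can be illustrated with a minimal worked example. Writing the circular depolarization ratio delta_c as the ratio of cross- to co-polar backscatter, the detected co-polar backscatter is the total backscatter divided by (1 + delta_c), so the retrieved lidar ratio is overestimated by the same factor. The delta_c values below are illustrative assumptions, not Aeolus products.

    # Co-polar detection bias: beta_co = beta_total / (1 + delta_c), hence the
    # lidar ratio (extinction / co-polar backscatter) is high by (1 + delta_c).
    # The depolarization ratios below are illustrative, not measured values.
    for particle, delta_c in [("urban haze", 0.05), ("smoke", 0.15), ("dust", 0.6)]:
        factor = 1.0 + delta_c
        print(f"{particle:10s}: backscatter low by {100 * (1 - 1 / factor):.0f}%, "
              f"lidar ratio high by a factor of {factor:.2f}")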
In this presentation, we will discuss the potential and the limitations of the current Aeolus setup for the determination of particle types on the basis of three example measurements for which sophisticated ground-based multiwavelength-Raman-polarization lidar (i.e., PollyXT) observations are available as ground truth.
These three cases comprise an urban haze observation made over Leipzig, Germany, smoke from the Australian wildfires in January 2020 measured over Punta Arenas, Chile, and Saharan dust observations at Leipzig or on Cabo Verde.
We will present the comparison of the intensive particle properties measured from the ground and from space and show to what extent Aeolus can be used for particle typing. In this context, we will also discuss the perspective for a potential Aeolus follow-on, which might have enhanced polarization capabilities.
ESA’s Aeolus satellite observations are expected to have the biggest impact on the improvement of numerical weather prediction in the Tropics. An especially important case relating to the evolution, dynamics, and predictability of tropical weather systems is the outflow of Saharan dust, its interaction with cloud microphysics and its impact on the development of tropical storms over the Atlantic Ocean. The Atlantic Ocean off the coast of West Africa and the eastern Caribbean uniquely allows the study of the Saharan Aerosol layer, the African Easterly Waves and Jet, the Tropical Easterly Jet, as well as the deep convection in the Intertropical Convergence Zone and their relation to the formation of convective systems, and the long-range transport of dust and its impact on air quality.
The Joint Aeolus Tropical Atlantic Campaign (JATAC) deployed on Cabo Verde and the US Virgin Islands is addressing the validation and preparation of the ESA missions Aeolus, EarthCARE and WIVERN, as well as supporting the related science objectives raised above.
The JATAC campaign started in July 2021 with the deployment of ground-based instruments at the Ocean Science Center Mindelo (OSCM, Cabo Verde), including the EVE lidar, the PollyXT lidar, a W-band Doppler cloud radar and a sunphotometer. By mid-August, the CPEX-AW campaign started their operations from the US Virgin Islands with NASA’s DC-8 flying laboratory in the Western Tropical Atlantic and Caribbean with the Doppler Aerosol Wind Lidar (DAWN), Airborne Precipitation and Cloud Radar (APR-3), the Water Vapor DIAL and HSRL (HALO), a microwave sounder (HAMSR) and dropsondes. In September, a European aircraft fleet was deployed to Sal (Cabo Verde) with the DLR Falcon-20 carrying the ALADIN Airborne Demonstrator (A2D) and the 2-µm Doppler wind lidar, and the Safire Falcon-20 carrying the high-spectral-resolution Doppler lidar (LNG), the RASTA Doppler cloud radar, in-situ cloud and aerosol instruments among others. The Aerovizija Advantic WT-10 light aircraft with filter-photometers and nephelometers for in-situ aerosol characterisation was operating in close coordination with the ground-based observations from Mindelo.
More than 35 flights of the four aircraft were performed. 17 Aeolus orbits were underflown, four of which were complemented by simultaneous observations from three aircraft, with perfect collocation of Aeolus and the ground-based observations in two cases. Several flights by the NASA DC-8 and the Safire Falcon-20 were dedicated to cloud microphysics and dust events. The EVE lidar has been operating on a regular basis, while the PollyXT and several other ground-based instruments operated continuously during the campaign period. For further characterisation of the atmosphere, radiosondes were launched up to twice daily from Sal airport. Additionally, there were radiosonde launches from western Puerto Rico and northern St Croix, US Virgin Islands. JATAC was supported by dedicated numerical weather and dust simulations supporting the forecasting efforts needed for successful planning of the flights and addressing open science questions. While the airborne activities were completed by the end of September, the ground-based observations are continuing into 2022.
The paper will present a JATAC overview.
The NASA CPEX-AW is part of the Joint Aeolus Tropical Atlantic Campaign (JATAC) in 2021. Specific science objectives of CPEX-AW include: 1) better understanding interactions of convective cloud systems and tropospheric winds as part of the joint NASA-ESA Aeolus Cal/Val effort over the tropical Atlantic, 2) observing the vertical structure and variability of the marine boundary layer in relation to the initiation and lifecycle of convective cloud systems, convective processes (e.g., cold pools), and environmental conditions within and across the ITCZ, 3) investigating how African easterly waves and the dry air and dust associated with the Saharan Air Layer control the convectively suppressed and active periods of the ITCZ, and 4) investigating interactions of wind, aerosol, clouds, and precipitation and their effects on long-range dust transport and air quality over the western Atlantic.
The CPEX-AW science team and the NASA DC-8 aircraft were deployed to St. Croix, US Virgin Islands, from 18 August to 10 September 2021 to address the science objectives. The DC-8 is equipped with the Doppler Aerosol Wind Lidar (DAWN), the Airborne Precipitation and Cloud Radar 3rd Generation (APR-3), the High Altitude Lidar Observatory (HALO) Water Vapor DIAL and HSRL, the High Altitude Microwave Sounding Radiometer (HAMSR), and GPS dropsondes. During the field campaign, the CPEX-AW team launched soundings from the north coast of St. Croix and the west coast of Puerto Rico. It also took collocated measurements over the saildrones that measure ocean surface and ocean current data, in collaboration with the NOAA field campaign. In this overview, we will present highlights from CPEX-AW:
• More than 120 researchers including graduate students and postdocs participated in CPEX-AW in St. Croix, Puerto Rico, and remotely.
• We flew seven research missions that collected unprecedented data from DAWN, HALO, APR-3, HAMSR, and dropsondes in a wide range of conditions, from strong dust outbreak events to tropical storms.
• We underflew six Aeolus overpasses for a total of 5,836 km, providing valuable data sets for Aeolus Cal/Val and for studies of the impact on weather forecasting.
• We observed complex wind and convection in pre-Tropical Storm (TS) Ida and in Ida over the Gulf of Mexico before the major Hurricane Ida made landfall, the long-lasting TS Kate and its interaction with dust and dry air over the central Atlantic, and dry air intrusion in Hurricane Larry.
We will also present updates on post-field campaign data analysis and modeling studies on Aeolus Cal/Val, new insights on interactions of wind, convective clouds, dry air and dust over the tropical Atlantic, and impacts of Aeolus and airborne data on numerical modeling and prediction of high-impact weather such as tropical cyclones.
MedEOS is an application development project, funded by the European Space Agency as part of the Mediterranean Regional Initiative. Its main objective is to develop and produce high-resolution, gap-free water quality products based on Earth observation (EO) data. This is achieved by combining the high temporal resolution of the Sentinel-3 Ocean and Land Colour Imager (S-3 OLCI) and the high spatial resolution of the Sentinel-2 Multispectral Instrument (S-2 MSI) in a process of data fusion.
The project is organized in two distinct phases that make up the 2-year project lifetime (2021-2023). At the end of the first year, a full set of demonstration products shall be delivered, comprising 1-year datasets over 5 different test areas around the Mediterranean coasts. In the second year, those products shall be derived and validated over the entire Mediterranean Sea coast for a 3.5-year period, from March 2019 to September 2022.
Five pilot areas in Egypt, France, Greece, Spain, and Tunisia were selected following specific criteria to maximize the benefit and impact of the project in the scope of the Mediterranean area. They are spread across the Mediterranean exhibiting a variety of biophysical, environmental, and socioeconomic conditions, which are formed under a varying geopolitical and cultural context. This diversity allows the consortium to test the validity of the provided services at different levels:
- scientific and technical - by providing very distinct regions with requirements that cover all services and unique validation conditions;
- technical - service operations and delivery will need to adapt to the different needs and contexts of the end-users, providing extra flexibility and resilience to the overall system;
- user uptake - institutional and societal realities in the different countries will provide an in-depth analysis of the capacity of user uptake at various prevailing conditions.
Three different sets of products are developed in scope of the MedEOS project. The first group consists of five EO directly derived water quality products: Total Suspended Matter (TSM), Turbidity, Chlorophyll-a concentration (Chl-a), Secchi Depth and Colored Dissolved Organic Matter (CDOM). TSM is a key element of water quality in coastal areas. It is a well-known parameter in ocean color applications. A method using a single band, deemed robust for coastal turbid waters, is considered for this project’s application. Turbidity is a water quality feature closely related to TSM. The Turbidity Product is designed according to the ISO 7027 definition: a quantitative measurement of “diffuse radiation” expressed in Formazin Nephelometric Units (FNU). Chl-a concentration is the main pigment in phytoplankton, and a key element in ocean-color applications. Phytoplankton is responsible for primary production through photosynthesis, and an indicator of the natural processes in the water environment. Secchi Depth is used to measure water transparency in the ocean. The Secchi depth is the depth at which the Secchi disk - being lowered in the water column - is no longer detectable by a human observer from the water surface. CDOM is a key element of water quality in coastal areas, and a well-known feature in ocean color applications. Its presence is mostly determined by freshwater outflow, and it can be used as a proxy for dissolved oxygen. All these products are originally generated independently using Sentinel-2 and Sentinel-3 data.
In the next step, data fusion methods are used to generate daily products with a high spatial resolution, closer to that of Sentinel-2. When needed, this step is preceded by gap-filling to reduce the impact of cloud cover, which can affect the fully operational application of the data fusion methods. The data fusion techniques employed in MedEOS are based on calibrating a model between a pair of fine and coarse resolution images acquired at the same date in the past and applying this model to a coarse resolution image acquired in the present to derive a synthetic fine resolution image of the current situation. In the context of land applications, data fusion is usually applied at the ground reflectance level. However, spatiotemporal data fusion of Sentinel-2 and Sentinel-3 data in ocean color applications has to adapt existing approaches, tested and validated in land applications, to the much more dynamic context of water. Applying data fusion to EO direct products, instead of water reflectance, ensures a better comparability between both satellite sensors and guarantees the use of the best state-of-the-art EO direct product algorithm for each of them.
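The calibrate-then-apply principle can be sketched as follows; a single global linear model and synthetic arrays stand in here for the considerably more sophisticated operational fusion method.

    import numpy as np

    rng = np.random.default_rng(0)

    # Past date: a coarse (S-3-like) product and a collocated fine (S-2-like)
    # product; the fine grid is 3x finer (purely illustrative factor).
    coarse_past = rng.uniform(0.5, 5.0, (60, 60))
    fine_past = np.kron(coarse_past, np.ones((3, 3))) + rng.normal(0, 0.1, (180, 180))

    # Calibrate: regress fine values on the upsampled coarse values
    x = np.kron(coarse_past, np.ones((3, 3))).ravel()
    slope, intercept = np.polyfit(x, fine_past.ravel(), 1)

    # Apply to today's coarse image to synthesize a fine-resolution product
    coarse_today = rng.uniform(0.5, 5.0, (60, 60))
    fine_today = slope * np.kron(coarse_today, np.ones((3, 3))) + intercept
    print(fine_today.shape)    # (180, 180)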
The resultant daily high resolution EO direct products are then used to generate the second set of MedEOS products - EO indirectly derived water quality products. This group contains the following products: Faecal Bacterial Contamination Indicators, Eutrophication Indicators, Harmful Algal Blooms, and Global Environmental Anomaly Detection. Faecal bacterial contamination indicators depend on continental inputs (e.g., river discharges, sewer system outflows) and on bacteria resilience capacity in the coastal ocean. Three parameters are used to monitor the risk of coastal waters contamination: Faecal Bacteria Decay Rate (T90); Faecal Bacteria Vulnerability Index; and Local Bacterial Concentrations. Eutrophication indicators are highly conditioned by nutrient abundance, therefore phosphorus and nitrogen concentrations are extracted from the Copernicus biogeochemical model for the Mediterranean Sea. Concentrations derived using multiple regression models based on image spectral bands and other EO direct products (Secchi Depth and Chl-a), and/or provided by in-situ measurements if available, will also be considered. These concentration estimates are to be used together with Chl-a to compute the Eutrophication Index. Chlorophyll-a concentration derived from EO datasets and/or the Eutrophication Index are used to narrow down areas where Harmful Algal Blooms (HABs) could be detected. Spectral indices designed for specific HAB communities are further employed to derive either qualitative or, when possible, quantitative information about the intensity of the blooms. Based on the multi-feature datasets obtained from both EO direct and indirect products, a global model will be developed to automatically detect anomalies in coastal water quality, which shall indicate probable anthropogenic pollution.
The third MedEOS set of products is dedicated to river plume monitoring. Rivers are one of the main conveyors of land-based pollutants into nearshore and coastal waters. To support the monitoring and assessment of river plume impact on coastal water quality, a river plume monitoring dataset will provide a systematic detection of plumes related to major rivers discharging freshwater into the Mediterranean basin. The methodology is also applicable to large plumes that can be generated by urban sewer system outflows (e.g., major marine outfalls), which can largely affect coastal water quality. The detection includes the plume spatial delineation, from which a set of characteristic features is evaluated, e.g., plume extension, orientation, extent and concentration threshold. The MedEOS algorithm produces this information by using multiple inputs, each of which should deliver a signature related to the river or sewer system plume. The main input EO products considered for now are Sea Surface Temperature, turbidity, Total Suspended Matter, Chl-a and sea surface roughness. Synthetic Aperture Radar (SAR) imagery can also be used to provide plume extension, because, for example, runoff plumes are associated with specific surface slick characteristics. Moreover, the algorithm is also applicable to numerical model outputs, thus offering perspectives to complement the daily tracking obtained from EO products with higher frequency (1 h to 1 day) tracking from ocean models. User requirements related to the river plume monitoring service are populated following dedicated activities of the project and as a result of the dissemination and communication campaign of MedEOS.
All the services described above will be implemented in services4EO, the Deimos EO Exploitation Platform solution. This platform will host all service development, integration, deployment, delivery and operation activities. Its design and deployment will be driven by the need to come up with services that are easily tailored to real operational conditions, are accepted by the users, and become a constituent element of the users’ business-as-usual working scheme.
For more information please visit the MedEOS website: https://medeos.deimos.pt/. This project has received funding from the European Space Agency Contract No. 4000134062/21/I-EF.
Sea surface salinity (SSS) represents one of the Essential Climate Variables (ECVs) defined by the Global Climate Observing System (GCOS). Ocean circulation, climate variability and the water cycle are deeply impacted by salinity variations. Moreover, salinity is strongly affected by freshwater input from rivers and land run-off as well as by atmospheric forcings such as precipitation and evaporation. Being a concentration basin, the Mediterranean Sea represents a hot spot for the characterization of salinity variability, requiring dedicated efforts to monitor and understand ongoing changes and their potential impacts at regional and global scales.
Nowadays, an increasing number of moorings and floating buoys provide accurate SSS measurements in the Mediterranean Sea. However, in situ data have poor coverage in time and space, hindering the monitoring of SSS patterns and their space-time variations and trends. Conversely, satellite remote sensing provides SSS data at high spatial and temporal resolution, complementing the sparse in situ datasets.
Here, we describe an improved configuration of the multidimensional optimal interpolation algorithm originally proposed by Buongiorno Nardelli et al. (2012; 2016) and Droghei et al. (2016; 2018), specifically designed to provide a new daily SSS dataset at 1/16° grid resolution covering the entire Mediterranean Sea (hereafter Med L4 SSS). Two main improvements have been introduced in this regional algorithm: the inclusion of remotely sensed salinity observations from multiple satellite missions [i.e. NASA’s Soil Moisture Active Passive (SMAP) and ESA’s Soil Moisture and Ocean Salinity (SMOS) satellites] and a new background (first guess) field built by blending a Mediterranean in situ monthly climatology, close to the main rivers’ outflows, with the upsized and temporally interpolated weekly global SSS product distributed by the Copernicus Marine Environment Monitoring Service (CMEMS).
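The analysis step common to such optimal-interpolation schemes can be sketched as below; the Gaussian covariance model, its length scale and the noise level are illustrative assumptions, not the actual multidimensional covariances of the Med L4 SSS algorithm.

    import numpy as np

    def oi_analysis(bg, residuals, obs_xy, grid_xy, L=100.0, noise=0.1):
        """One analysis point: bg + weights @ residuals, Gaussian covariances.
        residuals: obs minus background; obs_xy: (n, 2) obs positions [km];
        grid_xy: (2,) analysis point; L: correlation length scale [km]."""
        d_oo = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=-1)
        B = np.exp(-0.5 * (d_oo / L) ** 2) + noise * np.eye(len(obs_xy))
        d_go = np.linalg.norm(obs_xy - grid_xy, axis=-1)
        c = np.exp(-0.5 * (d_go / L) ** 2)
        return bg + np.linalg.solve(B, c) @ residuals

    # Two residuals (obs minus background, in PSU) near the analysis point
    print(oi_analysis(38.0, np.array([0.3, 0.1]),
                      np.array([[20.0, 0.0], [80.0, 10.0]]),
                      np.array([0.0, 0.0])))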
The multi-sensor regional SSS dataset has been validated against independent in situ SSS observations collected in the Mediterranean Sea between 2010 and 2018, and also compared with the global weekly CMEMS product and the Barcelona Expert Centre (BEC) regional product. The validation statistics highlighted an improved performance of the new Med L4 SSS. The results also demonstrated that the use of a background blending the in situ monthly climatology and the CMEMS SSS L4 weekly product determines a reduction of the SSS errors along the coast. A power spectral density analysis highlighted that, among all the datasets, the Med L4 SSS field achieves the highest effective spatial resolution.
Mediterranean hurricanes (Medicanes) are synoptic-scale cyclones typical of the Mediterranean area which share some dynamical features with the well-known tropical cyclones during their mature stage, even if they are smaller in size: the presence of a quasi-cloud-free calm eye, spiral-like cloud bands elongated from the center, strong winds close to the vortex center and a warm core. The most intense Medicane on record, named Ianos, swept across the Ionian Sea between 14 and 18 September 2020, affecting Southern Italy and especially Greece and its Ionian islands, where torrential rainfall and severe wind gusts caused widespread disruption, landslides, and casualties. In this study, satellite measurements from the NASA/JAXA Global Precipitation Measurement Core Observatory (GPM-CO) active and passive microwave (MW) sensors are used to analyse the precipitation structure of Ianos. Two GPM-CO overpasses, available during Ianos’ development and tropical-like cyclone (mature) phases, are analysed in detail. GPM Microwave Imager (GMI) measurements are used to carry out a comparative analysis of the medicane precipitation structure and microphysics processes between the two phases. The GPM-CO Dual-frequency Precipitation Radar (DPR) overpass, available for the first time during a medicane mature phase, provided key measurements and products to analyse the 3D precipitation structure in the eyewall and in the rainbands, offering further evidence of the main precipitation microphysics processes inferred from the passive MW measurement analysis. A substantial difference in the rainband precipitation structure and microphysics processes is observed between the development and the mature phase. The inferred deep convection features (updraft strength, graupel growth, presence of supercooled droplets) are related to the observed change in lightning activity between the two phases. During Ianos’ mature phase, the GPM-CO measurements provide evidence of weaker updrafts and limited graupel size which, combined with the strong horizontal wind, inhibit cloud electrification processes, while shallow convection/warm rain processes are observed in the inner region of the eyewall. The study demonstrates the value of the GPM-CO not only in characterising medicane Ianos’ precipitation structure and microphysics processes with unprecedented detail, but also in providing evidence of its similarities with tropical cyclones.
The Mediterranean region has been referred to as one of the most responsive regions to climate change. The latest report of the Intergovernmental Panel on Climate Change (IPCC) and the First Mediterranean Assessment Report of the Mediterranean Experts on Climate and environmental Change (MedECC) highlight the Mediterranean as one of the most vulnerable regions in the world to the impacts of global warming. Climate change impacts, already evident in the region, are bringing with them a number of challenges in terms of agricultural sustainability and food security. Climate events are taking on an increasingly extreme character: extreme agro-meteorological events such as droughts, heatwaves or rainstorms are expected to occur with higher frequency in the coming years, making agricultural systems more fragile and increasing the interannual variability of crop production and grain quality.
In this context, enhanced cooperation and information sharing in the field of agriculture has become an asset among Mediterranean countries, not only to prevent food insecurity and social instability, but also to deal with commodity market distortions such as unjustified price volatility.
Earth observation (EO) data are key inputs into all the analyses and decision-making processes that are critical to guaranteeing food security and cropping system resilience to recurrent agro-meteorological stress. Moreover, the combination and integration of EO-derived indicators with other information layers, such as agro-meteorological indicators or specific agronomical expertise, provides an increasing potential to monitor the seasonal response of agricultural systems to climatic stressors.
CIHEAM is an intergovernmental organization devoted to the sustainable development of agriculture and food and nutrition security around the Mediterranean Basin. It is composed of 13 Member States (Albania, Algeria, Egypt, France, Greece, Italy, Lebanon, Malta, Morocco, Portugal, Spain, Tunisia and Turkey). In September 2012, the Agriculture Ministers of the 13 CIHEAM members recommended that CIHEAM countries should "contribute, in close collaboration with G20 follow-up group, to the development of an information system on Mediterranean markets linked to AMIS (Agricultural Market Information System) and to share information so as to fight prices volatility within agricultural markets". This led to the creation of MED-Amin, the MEDiterranean Agricultural Market Information Network (http://www.med-amin.org/en/home/), which was officially endorsed by the agriculture ministers of the 13 countries in February 2014. Hosted at CIHEAM Montpellier, the MED-Amin network gathers representatives of Agricultural Ministries, statistical services and Cereals Offices from the CIHEAM member states, with the aim of cooperating and sharing information among the national information systems on cereal markets.
Since 2017, in collaboration with the Joint Research Center of the European Commission, the MED-Amin network has developed a pilot action for monitoring crop conditions of cereals (common wheat, durum wheat and barley) taking into consideration the exploitation of remote-sensing indicators, agro-meteorological variables and quality statistical data, to provide a robust indication of the shocks (positive or negative) to be expected on future harvests.
The present contribution introduces the qualitative crop forecasting activities carried out in the frame of MED-Amin across the 2021-2022 winter crop campaign. Emphasis will be put on (i) the synergy in the use of feedback from national contact points and Earth observation information to derive seasonal hot spots of winter crop production, and (ii) the participative approach the partners are adopting, which consists of seasonal information exchanges between the MED-Amin Secretariat and its focal points.
The MED-Amin forecasting exercise consists of four main stages:
- A statistical data collection (MED-Amin baseline) in February, gathering area, yield and production records crop-wise and region-wise for all the thirteen partners.
- A first pre-screening analysis in March of EO derived biophysical indicators, highlighting the main negative and positive biomass accumulation hot spots on wheat and barley crops. This information is provided to the focal points for a first feedback from the field.
- A second round of pre-screening analysis takes place in May, where the previous remote sensing analyses are updated and shared again to be cross-validated.
- Results are consolidated in June based on crop conditions reports from the focal points to capture the final crop evolutions before harvest.
Three bulletins are jointly developed and disseminated through the network and the media along the crop season.
Results are discussed in view of improving information on cereal markets (production, utilization, stocks, prices, trade) within the Mediterranean region and of moving towards real-time transmission of early warnings in a context of fragility due to climate change stressors (e.g. lack of hydro-meteorological events) and global price volatility. This illustrates the necessity of mobilizing human expertise in the field for ground observations. Obtaining data on precise development stage calendars and/or calibrating indicators and models require cooperative actions between ground experts and technical services specialized in the analysis of satellite and meteorological data, using reliable and easy-to-use indicators.
The project
Soil sealing – also called imperviousness – is defined as a change in the nature of the soil leading to its impermeability. Soil sealing has several impacts on the environment, especially in urban areas and on the local climate, influencing heat exchange and soil permeability; soil sealing monitoring is crucial for the Mediterranean coastal areas, where soil degradation combined with drought and fires contributes to desertification.
Some artificial features like buildings, paved roads, paved parking lots, and other artifacts can be considered to have a long duration. In general, these land cover types are referred to as permanent soil sealing, because the probability of the land returning to natural use is low. Other land cover features included in the definition of soil sealing can be considered reversible: for them, the probability of returning to natural use is higher. The land cover classes included in reversible soil sealing have been defined with the users of the project, and include solar panels, construction sites in an early stage, mines and quarries, and long-term plastic-covered soil in agricultural areas (e.g., non-paved greenhouses).
The Mediterranean Soil Sealing project, promoted by the European Space Agency (ESA) in the frame of the EO Science for Society – Mediterranean Regional Initiative, aims to provide specific products related to soil sealing, its degree, and reversible soil sealing over the Mediterranean coastal areas by exploiting EO data with an innovative methodology capable of optimising and scaling up their use with other non-EO data. Such products have to be designed to allow – compared with current practices and existing services – a better characterisation, quantification and monitoring over time of soil sealing over the Mediterranean basin, supporting users and stakeholders involved in monitoring and preventing land degradation. The project started in March 2021, will produce its first results in March 2022 and its final products in March 2023.
The targeted products are high-resolution maps of the degree of soil sealing and of reversible soil sealing over the Mediterranean coastal areas (within 20 km of the coast) for the 2015-2020 time period, at yearly temporal resolution and with a targeted spatial resolution of 10 m.
The team
The project team is led by Planetek Italia and includes ISPRA and CLS.
Planetek Italia is in charge of the development of the infrastructure, the engineering of the algorithms, and the communication activities. CLS is in charge of the soil sealing mask and of the experimental reversible soil sealing processing algorithms, and ISPRA of the soil sealing degree processing algorithms. The interaction with the users is led by ISPRA, which is institutionally involved in the land degradation theme in international and regional organisations and is the national body responsible for the theme in Italy.
Methodology
Introduction
The general workflow for the production of Ulysses products is shown in Figure 1.
The processing chain is split into four parts: Pre-processing; Soil Sealed Masks Production; Permanent/reversible Soil Sealing Production; Computation of the Soil Sealing Degree.
In the pre-processing, L2A and L3 images are derived from the L1C Sentinel-2 data, while backscatter and coherence images are derived from the Sentinel-1 acquisitions. The core of the processing chain is the second step, in which AI algorithms are applied to derive the soil sealing mask: a binary image in which artificially covered pixels are identified. A refinement step is required to improve the quality of the mask. In parallel to the production of the sealing mask, the identification of reversible soil sealing is performed. In the final step, the soil sealing degree is computed.
The soil sealing mask and reversible sealing processing workflows
The project uses Sentinel-2 Level 1C data as its optical source. All the L1C images are corrected to Level 2A, corresponding to Top Of Canopy calibration, using the MAJA processor. A final pre-processing step produces the Level 3A cloud-free composite for each month from the previous Level 2A images. From the Level 3A composite images, several indices are then computed: NDVI, NDTI and NDBI. From the Level 2A images, the PANTEX band is derived for each cloudless acquisition date. PANTEX is a very powerful texture descriptor, especially for describing urban areas. The processing of Sentinel-1 SLC data consists of producing coherence data as well as calibrated and orthorectified sigma naught products, from which monthly mean maps are derived. Texture elements, e.g. mean and variance, are then derived from each mean map.
Pre-processed data are supplied to a machine learning tool called “Broceliande”, along with the set of in-situ data, to extract the general soil sealing mask or to discriminate permanent from reversible impervious objects. “Broceliande” is mainly based on two pieces of software: Triskele, used to produce hierarchical representations of images, and the Shark library, used for machine learning. From the set of selected bands, we build multiscale representations through the model of morphological trees, from which we derive multiscale features called attribute profiles (the “AP/DAP generator” stage). Such trees can also be seen as a stack of nested segmentations and thus as a generalization of the concept of a mono-scale segmentation layer in GEOBIA tools. For each hierarchical representation, we measure specific attributes (e.g., area, weight, compactness) for all objects appearing at different scales for a given pixel. These features are then assigned to each pixel. The next step is the use of the Shark random forest classifier: due to its relatively simple parameterization, computational efficiency, and high accuracy, we use the Random Forest (RF) classification algorithm as a basis, given its proven track record.
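The final classification stage can be sketched as follows, with scikit-learn standing in for the Shark library and a synthetic feature stack standing in for the attribute profiles.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(42)
    X_train = rng.normal(size=(1000, 24))    # 24 attribute-profile features per pixel
    y_train = rng.integers(0, 2, 1000)       # in-situ labels: sealed (1) or not (0)

    clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
    clf.fit(X_train, y_train)

    X_scene = rng.normal(size=(5000, 24))    # all pixels of a tile
    sealing_mask = clf.predict(X_scene)      # binary soil sealing mask
    print(sealing_mask.mean())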
The soil sealing degree
The objective of the developed methodology is to compute, for each pixel of the soil sealing mask, the degree of soil sealing as the fraction of the pixel area covered by artificial surfaces. Estimating the soil sealing degree (in the range 1-100%) at subpixel level (10 m spatial resolution) is challenging because of mixed pixels and the spectral similarity between natural soil and artificial surfaces. Therefore, the methodology was designed to be suitable for the Mediterranean coastal areas through automatic processing of Sentinel data.
The methodology is based on the NDVI calculation using Sentinel-2 L2A time series and exploits the correlation between the derived quasi-maximum NDVI and the soil sealing degree. The aim of using a long time series is to remove fluctuations due to the seasonality of vegetation that can partially cover the sealed areas. After several tests, Sentinel-1 GRD data were excluded from the processing because of the lower spatial resolution (compared to Sentinel-2) that makes the computation of soil sealing degree at subpixel level difficult and less reliable.
The method computes the soil sealing degree by performing an NDVI calibration assuming a linear relation with the soil sealing degree, similar to the method described by Gangkofner et al. (2010); the main advantage of this approach is that it does not require any training input, but only the definition of a minimum and maximum NDVI value, related respectively to 100% and 1% soil sealing degree. Moreover, a method for the automatic estimation of the calibration parameters from the quasi-maximum NDVI raster was developed, in order to adapt the correlation to the different vegetation types and phenology characteristics of the Mediterranean coastal areas.
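A minimal sketch of this linear calibration is given below; the NDVI bounds are illustrative placeholders for the automatically estimated parameters.

    import numpy as np

    def sealing_degree(ndvi_qmax, ndvi_min=0.15, ndvi_max=0.75):
        """Map quasi-maximum NDVI linearly to a 1-100% soil sealing degree;
        ndvi_min/ndvi_max are illustrative calibration parameters."""
        deg = 100.0 * (ndvi_max - ndvi_qmax) / (ndvi_max - ndvi_min)
        return np.clip(deg, 1.0, 100.0)

    print(sealing_degree(np.array([0.10, 0.45, 0.75])))   # [100.  50.   1.]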
The innovation
The project’s innovations reside in multiple aspects. First, products are generated over the whole Mediterranean basin for a period of 5 years; considering the update frequency, the extent of the production area and the thousands of images involved, this is a major innovation. Second, innovation resides in the technique applied for the soil sealing extraction using the “Broceliande” tool, in which the hierarchical tree approach is nested ahead of the classic classification operation. This innovation was the subject of two published papers: Merciol et al., GEOBIA at the Terapixel Scale: Toward Efficient Mapping of Small Woody Features from Heterogeneous VHR Scenes, 2019, and Merciol et al., Broceliande: a comparative study of attribute profiles and features profiles from different attributes, 2020. Third, the project aims at producing the soil sealing degree in an efficient and automatic fashion. The innovation of the developed methodology is related to the annual update of the map, fostering the consistency of the NDVI time series at pixel level. Moreover, the processing related to the calculation of the quasi-maximum NDVI and the definition of the calibration parameters is a major result of the development phase.
Olive trees are a drought-resistant species traditionally grown in the Mediterranean Basin, where 95 percent of global olive cultivation is located. In the last three decades, super high density (SHD) planting systems have been taking over the olive sector worldwide, since they allow mechanization of the pruning and harvest processes and thus reduce costs. Furthermore, when irrigation practices are adopted, these so-called hedge-row plantations can reach the highest yields compared to other planting systems. However, the limited water resources of many drought-prone areas dedicated to olive cultivation will thereby be exploited and depleted faster, presenting not only an environmental risk but also affecting the local population dependent on such water resources. At the same time, a surge in olive oil production may lead to higher amounts of residues, which, if wrongly disposed of, can put additional pressure on the limited water resources.
In the last ten years, Moroccan olive production has more than doubled, placing the country among the five main producers worldwide. The leading producing region in the country is the Fès-Meknès Region. Here, the Saïss plain has been subject to several sociological and environmental studies, due to an ever-increasing risk of groundwater depletion. Against this background, our study aimed to assess, with the help of remote sensing data and methods, whether an intensification process has occurred in the Saïss area, as well as to evaluate the extent and dimensions of the potential impact on water resources.
In our study we worked with high-resolution (HR) RapidEye and PlanetScope imagery of 5 and 3 metres resolution, respectively, acquired via Planet Labs’ Education and Research Program. Due to quota limitations, only two seasonally representative HR images per year were used to extract SHD olive plantations in 2010 and 2020. For this, an unsupervised approach was developed, consisting in applying an adapted form of hierarchical clustering using a two-cluster k-Means algorithm. This allowed discriminating SHD olive plantations without requiring any labelled data. Furthermore, incorporating cloud-based geospatial computing in Google Earth Engine allowed a very low computational cost using HR data over an extent of 140,000 ha.
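The recursive two-cluster split can be sketched as follows; the stopping rule and the cluster-selection test are hypothetical stand-ins for the study's actual criteria.

    import numpy as np
    from sklearn.cluster import KMeans

    def hierarchical_two_means(X, depth=0, max_depth=4):
        """Recursively split pixels with a two-cluster k-Means, keeping the
        branch assumed to resemble SHD olive rows. The selection rule below
        (lower mean in the last band) is an illustrative assumption."""
        if depth == max_depth or len(X) < 20:
            return X                               # candidate olive pixels
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        keep = int(X[labels == 0, -1].mean() > X[labels == 1, -1].mean())
        return hierarchical_two_means(X[labels == keep], depth + 1, max_depth)

    X = np.random.default_rng(1).normal(size=(500, 4))   # pixels x spectral bands
    print(hierarchical_two_means(X).shape)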
For accuracy assessment, a supervised land use land cover (LULC) classification approach with 2020 Sentinel-1 and -2 imagery was used, adopting methods based on recent findings from the literature on tree crop and land use mapping in semi-arid regions. Besides evaluating the unsupervised approach, this second step reached an overall accuracy of 89.9%, identified other land use classes with high crop water requirements in the study area, and allowed addressing the main land use conversions from SHD olive plantations between 2010 and 2020.
The main findings of this study highlight the importance of using multispectral information over vegetation indices and the good performance of high-resolution imagery for olive mapping despite its lower temporal resolution. The results of the analysis also confirmed that, despite a considerable number of new SHD plantations in the study area, there was a comparable number of abandoned orchards and land use conversions within the last ten years. Thus, the intensification process in the study area has been counterbalanced by the degradation of old plantations and by land use change to annual crops and urbanization.
The net carbon flux from anthropogenic land use and land cover change (fLULCC) comprises about 12% of total anthropogenic CO2 emissions. Its reduction is considered essential in future pathways towards net-zero emissions, necessary to reach the climate mitigation goals of the Paris Agreement. The fLULCC thus constitutes a key component of the global carbon cycle.
Despite its importance, net fLULCC remains highly uncertain in national, regional, and global assessments, mainly because various techniques and definitions are used for its estimation. Techniques comprise modeling approaches, such as semi-empirical bookkeeping models or process-based dynamic global vegetation models, that were used in previous large-scale assessments (such as the IPCC assessment reports or the yearly budgets of the Global Carbon Project). In contrast, countries regularly report their emissions to the United Nations Framework Convention on Climate Change (UNFCCC) based on inventory-based approaches. These data also form the basis for the estimates provided by the Food and Agriculture Organization Corporate Statistical Database (FAOSTAT). These varying approaches often define net fLULCC very differently. For instance, some include all fluxes on managed land while others exclude natural fluxes on managed land, such as those related to climate variability or increasing CO2 concentration. Further, carbon cycle models are inherently uncertain because of the difficulty of simulating complex natural and human processes, while direct Earth observations have difficulty separating anthropogenic from natural CO2 fluxes due to their temporal and spatial co-occurrence.
Additionally, the reduction of net fLULCC needed for pathways towards net-zero emissions can be achieved through a combination of reducing gross emissions (e.g., by decreasing deforestation) and increasing gross removals (e.g., by fostering afforestation and reforestation). Thus, the quantification of the underlying gross components of net fLULCC, which are currently 2-4 times larger than net fLULCC on the global scale, is essential but has gained little attention in the past. Recent improvements in model resolution now enable this comparison to be performed at the country level. Here we compile and analyze country-level net fLULCC as well as its gross components, combining data from bookkeeping models, dynamic global vegetation models, and inventory-based approaches.
Modeled and inventory-based estimates generally show a fair agreement when the effects of diverging definitions are accounted for, though estimates differ strongly for some countries. For example, fLULCC estimates from different bookkeeping models differ strongly in China and the United States of America. Analysis of the gross fluxes reveals that these discrepancies result from strongly differing gross sources in China, and strongly differing gross sinks in the USA, related to the varying capabilities of the underlying LULCC forcing data in capturing afforestation and to differences in model comprehensiveness in capturing specific land management practices, such as fire suppression. An additional analysis of the ratio of net to summed gross fLULCC underlines the importance of the gross components for the carbon cycle, albeit spatially heterogeneously. For example, in the USA the net fLULCC represents only about 8% of the gross fluxes, while in Indonesia the net flux comprises approximately 50% of the gross fluxes, to name the most extreme ratios among countries with high cumulative emissions. Strikingly, strong discrepancies between bookkeeping estimates and the emission estimates provided by FAOSTAT and UNFCCC are evident for Russia and China. In China, bookkeeping models likely underestimate the increased carbon stock due to large-scale afforestation programs, while FAOSTAT and UNFCCC estimates potentially overestimate the afforestation effects reported from China, which assume high CO2 sequestration in afforested regions despite partly contradictory observational studies.
The newly aggregated data bridges the scales between inventory-based estimates and those from models. By comparing country-wise fLULCC estimates from varying approaches, this study reveals the general suitability of modeling approaches to assess fLULCC even on country-level. This provides the basis for independently validating emissions reported by countries, which is a legal/policy requirement e.g., for the Global Stocktake. Remaining uncertainties highlight the need for systematic evaluation by Earth observation data and their incorporation in fLULCC modeling approaches.
The Forest and Land use Declaration negotiated at the 26th Conference of the Parties (COP26) in Glasgow, November 2021, confirmed that Tropical Moist Forests (TMFs) are a vital nature-based solution for addressing the climate and ecological emergencies. TMFs are estimated to be a net sink of carbon, storing approximately 0.8 Pg C yr⁻¹ [1]. However, the size of this sink is declining due to human activities such as deforestation and forest degradation through logging and fire, as well as climate variability and change [1]. Tropical forests are therefore a patchwork of undisturbed, degraded, and secondary forests, creating regionally complex patterns of growth and carbon storage.
While there have been numerous studies exploring and quantifying the recovery rates of secondary forests, quantifying the recovery rate of degraded forests has been largely unexplored on a pan-tropical scale. As many tropical countries are participating in results-based payments frameworks such as REDD+, which includes reducing emissions from degradation and forest recovery, it is essential to be able to quantify the carbon accumulation in recovering degraded forests as well as secondary forests, which collectively, we have termed “Recovering Forests”.
Recent advances in remote sensing products have made it possible to (i) distinguish degraded forests from undisturbed and secondary forests [2]; and (ii) estimate the carbon sequestration rates within these forests [3,4].
Here we use a combination of remote sensing derived products in a space-for-time substitution approach to quantify the carbon accumulation rates in recovering forests. This includes recovering degraded forests and secondary forests in the three major tropical biomes: the Amazon Basin, Island of Borneo and Congo Basin.
Our initial results show growth rates to be highest in Borneo, in recovering degraded forests [5]. We attribute these inter-biome and inter-forest variations in growth to differences in disturbance and in environmental variables such as water deficit and temperature.
Across the three biomes, we find that the recovering degraded forests have a large carbon sink potential, owing largely to their vast areal extent (10% of forest area). Secondary forests regrow across a smaller land area (2%) but have faster growth rates (up to 30% faster in the Amazon basin) than recovering degraded forests. Additionally, we find that 35% of degraded forests are subject to subsequent deforestation [2,5]. Ending this cycle of forest degradation and deforestation and protecting all recovering forests wherever possible is key to safeguarding their current and future carbon sink potential.
References:
1. Hubau, W. et al. Asynchronous carbon sink saturation in African and Amazonian tropical forests. Nature 579, 80–87 (2020).
2. Vancutsem, C. et al. Long-term (1990–2019) monitoring of forest cover changes in the humid tropics. Sci. Adv. 7, eabe1603 (2021).
3. Santoro, M. & Cartus, O. ESA Biomass Climate Change Initiative (Biomass_cci): Global datasets of forest above-ground biomass for the years 2010, 2017 and 2018, v2. Centre for Environmental Data Analysis http://dx.doi.org/10.5285/84403d09cef3485883158f4df2989b0c (2021) doi:10.5285/84403d09cef3485883158f4df2989b0c.
4. Heinrich, V. H. A. et al. Large carbon sink potential of secondary forests in the Brazilian Amazon to mitigate climate change. Nat. Commun. 12, 1785 (2021).
5. Heinrich, V. H. A. et al. One quarter of humid tropical forest loss offset by recovery (in review).
Land-based mitigation plays a significant role in reducing carbon emissions and thus in meeting the goals of the Paris Agreement. However, the attribution of measured carbon fluxes to its sinks and sources and defining the land carbon uptake potential remains highly uncertain. Despite the ever-increasing availability of data, in Europe, there is still no consensus on how much carbon is taken up by the land surface. This is particularly true for Eastern Europe, which is rich in forests with potentially high carbon uptake but poor in ground-based measurement data. This area consists of 13 countries: Belarus, Bulgaria, Czech Republic, Estonia, Hungary, Latvia, Lithuania, Moldova, Poland, Romania, Slovakia, Ukraine, and western Russia (to Ural).
Here, we explore the carbon uptake potential of Eastern Europe in 2010-2019 by comparing multiple methods and datasets, in particular satellite-based L-VOD and XCO2 inversions, bottom-up estimates of biomass carbon (Xu et al., 2021), inventories, a set of Dynamic Global Vegetation Models (TRENDY) and the data-driven Bookkeeping of Land-use Emissions Model (BLUE). By combining and analysing these different datasets, we aim to (1) quantify the Eastern European land-based carbon flux of the last decade, (2) identify the spatial patterns of land carbon sinks and sources and (3) attribute their underlying drivers from land use, land management or climate change.
We find that Eastern Europe accounted for an annual carbon uptake of 0.49 Gt C a⁻¹ in 2010-2019, which is around 75% of the entire carbon uptake of Europe. However, the Eastern European land carbon sink is declining slightly. Datasets differ in the extent of the carbon sink due to their various spatial resolutions, methodologies, influencing factors, or included land carbon components. Further, we map and discuss regional hot spots of carbon sinks and sources and their underlying causes, from land use change and management (e.g. cropland abandonment, deforestation, forest harvest) to climate/environmental factors (e.g. fire, temperature, precipitation, soil moisture).
This study sheds light on the formerly underexplored terrestrial carbon sink in Eastern Europe. It shows that divergent land use and management dynamics are linked to changes in biomass carbon. Finally, it provides a new data basis for better understanding the underlying causes of biomass carbon fluxes in Eastern Europe, which is and will be essential for mitigating climate change in the future.
The Paris Agreement set the international objective “…to reach global peaking of greenhouse gas emissions as soon as possible … and to undertake rapid reductions thereafter in accordance with best available science...to achieve a balance between anthropogenic emissions by sources and removals by sinks of greenhouse gases in the second half of this century”, in order to keep global warming below two degrees by the end of the century. The Global Stocktake Process that was implemented following the Paris Agreement aims to define clear national targets compatible with this goal and to regularly track progress of individual countries in that direction.
Such effort requires considerable improvement of current scientific capabilities to quantify greenhouse gas (GHG) budgets and their trends at region and up to national scale. It further requires accurate attribution of budgets to natural and anthropogenic processes and the ability to link country-level GHG budgets to global budgets, including their trends, since the global budgets are the ones relevant to evaluate progress towards given temperature targets. Such an effort is currently being undertaken by the GHG community in the Regional Carbon Cycle Assessment and Processes phase-2 project (RECCAP-2), a research initiative coordinated by the Global Carbon Project [1].
Global datasets and models used in Global Carbon Budgets still show large discrepancies at regional scale, especially for small regions [2]. There are multiple reasons for such discrepancies. Among those are (i) large uncertainty in the spatial distribution of fluxes between atmospheric inversions, especially at the scale of small regions; (ii) poor agreement between atmospheric inversions and dynamic global vegetation models (DGVMs) in the sensitivity of carbon fluxes to climate variability and long-term trends; (iii) possible errors in the land-cover datasets used in the global runs; and (iv) poor representation of disturbances such as fire. In the ESA-CCI RECCAP2 project, we evaluated in more detail some of these sources of uncertainty, namely the mismatch between atmospheric inversions and DGVMs (Ciais et al., submitted to this session) and the impacts of the land-cover datasets used to force DGVMs on estimated budgets and their variability, including the impact on fire dynamics.
Here, we perform a set of simulations designed to quantify the uncertainty in carbon budgets as well as their variability and trends at the scale of the European RECCAP2 region and of four countries used as case studies: France, Germany, Italy and the UK. To constrain uncertainties from process representation and parameterization in DGVMs, we use three DGVMs: JULES, OCN and ORCHIDEE-MICT. The simulations were forced with two different land-use datasets: the Land-Use Harmonization v2 dataset used in the Global Carbon Budget 2020 [3] and the HIstoric Land Dynamics Assessment+ (HILDA+) land-use reconstruction [4]. HILDA+ reconstructs annual land use/cover change between 1960 and 2019 at 1 km spatial resolution, a much finer resolution than the typical scale of the land-use change (LUC) forcings used in global carbon budgets (0.25 degree for LUH2). Furthermore, HILDA+ integrates multiple data streams, from high-resolution remote sensing to long-term land-use reconstructions and statistics. These features are expected to improve DGVM estimates of fluxes from land-use and land-cover changes, but can also affect long-term trends and variability in the net biospheric fluxes simulated by the models.
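A minimal sketch of this factorial design (three DGVMs crossed with two LUC forcings, evaluated per region) is shown below; the run_simulation helper and its outputs are illustrative placeholders, not the actual RECCAP2 workflow.

```python
# Sketch of the factorial simulation design described above: three DGVMs
# crossed with two land-use forcings. All names below are placeholders.
from itertools import product

DGVMS = ["JULES", "OCN", "ORCHIDEE-MICT"]
LUC_FORCINGS = ["LUH2", "HILDA+"]
REGIONS = ["Europe", "France", "Germany", "Italy", "UK"]

def run_simulation(model: str, forcing: str) -> dict:
    """Placeholder for one DGVM run forced with one LUC dataset."""
    return {"model": model, "forcing": forcing, "net_flux": None}

# Six runs in total; budgets are then aggregated per region so that
# between-model spread can be compared with between-forcing spread.
runs = [run_simulation(m, f) for m, f in product(DGVMS, LUC_FORCINGS)]
```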
Our results show that the LUC forcing contributes only small differences to the net carbon budgets and their trends at European scale compared with between-model uncertainty, while for land-use change fluxes (FLUC) it contributes differences comparable to between-model uncertainty. The impact of the LUC forcing on the carbon budget varies between countries and models, and given the large uncertainty of the reference datasets used to evaluate simulated fluxes (atmospheric inversions for the net budgets and bookkeeping models for FLUC), no clear pattern emerges as to whether one particular forcing leads to considerable improvements. In fire-prone regions such as Italy, the underlying land-use maps have a strong influence on simulated fire emissions, but uncertainties are also dominated by between-model differences.
[1] “Global Carbon Project (GCP).” https://www.globalcarbonproject.org/reccap/index.htm (accessed Nov. 26, 2021).
[2] A. Bastos et al., “Sources of uncertainty in regional and global terrestrial CO2‐exchange estimates,” Glob. Biogeochem. Cycles, 2020.
[3] P. Friedlingstein et al., “Global Carbon Budget 2020,” Earth Syst. Sci. Data, vol. 12, no. 4, pp. 3269–3340, Dec. 2020, doi: 10.5194/essd-12-3269-2020.
[4] K. Winkler, R. Fuchs, M. Rounsevell, and M. Herold, “Global land use changes are four times greater than previously estimated,” Nat. Commun., vol. 12, no. 1, p. 2501, May 2021, doi: 10.1038/s41467-021-22702-2.
The Amazon forest plays an important role in the global carbon cycle due to its large contribution to the land carbon (C) sink. However, it has historically been impacted by land use and land cover changes (LULCC) and forest fires, which are exacerbated by the interaction of deforestation and extreme drought events. Studies based on field measurements and top-down estimates (inversions and remote sensing data) show a decreasing trend in the Amazon land C sink. Still, there are large uncertainties between top-down and bottom-up estimates, i.e. Dynamic Global Vegetation Models (DGVMs), in terms of the spatio-temporal land C sink over the Amazon. DGVMs have improved their representation of LULCC processes; however, further improvements and a better understanding of fire dynamics are needed to better estimate and represent fire emissions in DGVMs. Remote sensing-based estimates of fire emissions, such as those from the Global Fire Emissions Database (GFED4), are based on net fire emissions and assume burned forest area to be carbon neutral, i.e. with no post-fire legacy effects on the C balance of burned forests. However, a recent study shows that net annual emissions peak 4 years after the occurrence of forest fires. It is therefore paramount to account for these long-term impacts of forest fires on Amazon C emissions in order to improve Amazon carbon budget estimates. Spatio-temporal assessments of forest fire emission estimates are possible using the range of satellite products available. In this study, a remote sensing approach is used to estimate these long-term net C emissions from forest fires in the Amazon forests. The burned area product (Fire CCI 5.1.1) and the static biomass maps from ESA CCI are used with the FATE bookkeeping model to estimate the net emissions from forest fires in the Amazon. Finally, we synthesize these results with top-down and bottom-up estimates to deliver a better understanding of the current trends of the Amazon natural C sink.
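The abstract does not detail the FATE model itself, so the following is only a generic bookkeeping-style sketch of the idea it describes: each year's burned biomass commits an immediate combustion flux plus a delayed legacy flux, with the legacy curve peaking about four years after the fire. All numbers are illustrative assumptions.

```python
import numpy as np

# Generic bookkeeping sketch (not the actual FATE model): direct combustion
# plus a legacy flux from delayed mortality/decomposition after fire.
years = 20
burned_biomass = np.zeros(years)      # biomass in burned area per year (Mg C)
burned_biomass[2] = 100.0             # a single fire event in year 2

combustion_fraction = 0.2             # fraction emitted immediately (assumed)
legacy_curve = np.array([0.05, 0.10, 0.15, 0.20, 0.15, 0.10, 0.05])
# Assume 80% of the non-combusted killed biomass is eventually emitted,
# spread over 7 post-fire years with a peak at lag 4.
legacy_curve *= (1 - combustion_fraction) * 0.8 / legacy_curve.sum()

emissions = combustion_fraction * burned_biomass.copy()
for t, b in enumerate(burned_biomass):
    for lag, frac in enumerate(legacy_curve, start=1):
        if t + lag < years:
            emissions[t + lag] += b * frac
print(emissions.round(2))  # net emissions keep rising for years after the fire
```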
Introduction:
Forested ecosystems provide a significant carbon sink, absorbing roughly 3 billion tons of anthropogenic carbon annually (Canadell & Raupach, 2008). The boreal represents the largest biome in the world, 552 Mha of which is found in Canada, accounting for 28% of the boreal ecosystem globally (Brandt et al., 2013). Understanding the Earth System processes that drive land cover change and vegetation productivity in Canadian boreal ecosystems is therefore critical for accurate assessments of carbon dynamics and accumulation. Although many studies have been undertaken to understand the productivity and carbon cycle in managed forests in southern Canada (Kurz et al., 2009), less is known about carbon dynamics and land cover change transitions in other key ecosystems. In the Canadian boreal, three main sources of uncertainty stand out from the literature: the impact of the warming climate on the northern treeline, carbon estimates in wetland landscapes, and the implications of permafrost thaw. Understanding changes in carbon dynamics and land cover transitions in these environments is of paramount concern, yet our carbon balance estimates for these environments are limited for several key reasons. Lack of accessibility and significant cost are key drivers behind the scarcity of field studies in the remote environments in question, which has led to a lack of temporally and spatially dense ground-based datasets of carbon dynamics (Lees et al., 2019; Srinet et al., 2020).
Current models of ecosystem carbon exchange driven by remote sensing still require input of ground-based meteorological measurements and utilize look-up tables based on plant functional type, which limits their utility in remote areas where ground-based observations do not exist (Jones et al., 2017). In addition, there is often a scale mismatch between ground-based observations and remote sensing drivers, introducing possible errors or limitations when using models to make carbon exchange estimates in highly heterogeneous landscapes such as the Canadian boreal. Exclusively remote-sensing-based methods represent an approach by which we can more directly assess changes in carbon dynamics and land cover transitions without the need for ground-based inputs, and they offer a significant opportunity for addressing these sources of uncertainty and improving our predictions of future changes in such heterogeneous environments (Lees et al., 2020; Schimel et al., 2015; Sims et al., 2008).
Methods:
In this paper we present some key components of a new analytical model for mapping and monitoring Canadian terrestrial vegetation carbon productivity. We exploit well-established links between vegetation greenness and land surface temperature (Sims et al., 2008), apply these to data acquired by the European Space Agency (ESA) Sentinel-2 and Sentinel-3 satellites, and compare the results to longer time series from the National Aeronautics and Space Administration's (NASA) Moderate Resolution Imaging Spectroradiometer (MODIS). We also examine the information conveyed by microwave remote sensing data from the ESA Soil Moisture and Ocean Salinity (SMOS) mission in regulating these productivity estimates under changing freeze/thaw conditions, using a Hidden Markov Model (HMM) algorithm applied to the SMOS freeze/thaw data product. The SMOS data are used to inform the modelling of freeze/thaw transitions, estimates of growing season length, and the impacts of permafrost thaw on boreal vegetation dynamics.
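As one plausible reading of the HMM step, a two-state model (frozen/thawed) can be decoded with the Viterbi algorithm to smooth a noisy daily freeze/thaw flag series. The transition and emission probabilities below are made-up placeholders, not values from the study.

```python
import numpy as np

# Illustrative two-state HMM (0 = frozen, 1 = thawed) decoded with Viterbi.
log_pi = np.log([0.5, 0.5])                      # initial state probabilities
log_A = np.log([[0.95, 0.05],                    # frozen -> frozen / thawed
                [0.05, 0.95]])                   # thawed -> frozen / thawed
log_B = np.log([[0.9, 0.1],                      # P(observed flag | frozen)
                [0.2, 0.8]])                     # P(observed flag | thawed)

def viterbi(obs: np.ndarray) -> np.ndarray:
    """Most likely freeze/thaw state sequence for a 0/1 observation series."""
    T = len(obs)
    delta = np.zeros((T, 2))
    psi = np.zeros((T, 2), dtype=int)
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A    # (from_state, to_state)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

noisy_flags = np.array([0, 0, 1, 0, 1, 1, 1, 0, 1, 1])  # toy daily flags
print(viterbi(noisy_flags))  # smoothed freeze/thaw transition sequence
```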
Results:
Using existing land cover information to stratify key wetland and tree line focus sites across the boreal regions of the Yukon, Quebec, and Ontario, we applied our model at a continuous 7-day time-step at 30m spatial resolution from 2016-2020. Seasonal and annual photosynthetic terrestrial carbon sequestration and freeze / thaw dynamics by land cover class were then examined to improve our understanding of land cover transitions and their implications regarding vegetation productivity. We end with a discussion on the future integration of other recently acquired remote sensing datasets which can inform on the influence of soil moisture and other processes on terrestrial carbon sequestration and accumulation in the Canadian boreal.
References:
Brandt, J. P., Flannigan, M. D., Maynard, D. G., Thompson, I. D., & Volney, W. J. A. (2013). An introduction to Canada’s boreal zone: ecosystem processes, health, sustainability, and environmental issues. Environmental Reviews, 21(4), 207–226. https://doi.org/10.1139/er-2013-0040
Canadell, J. G., & Raupach, M. R. (2008). Managing forests for climate change mitigation. Science, 320(5882), 1456–1457. https://doi.org/10.1126/science.1155458
Jones, L. A., Kimball, J. S., Reichle, R. H., Madani, N., Glassy, J., Ardizzone, J. V., Colliander, A., Cleverly, J., Desai, A. R., Eamus, D., Euskirchen, E. S., Hutley, L., Macfarlane, C., & Scott, R. L. (2017). The SMAP Level 4 Carbon Product for monitoring ecosystem land-atmosphere CO2 exchange. IEEE Transactions on Geoscience and Remote Sensing, 55(11), 6517–6532. https://doi.org/10.1109/TGRS.2017.2729343
Kurz, W. A., Dymond, C. C., White, T. M., Stinson, G., Shaw, C. H., Rampley, G. J., Smyth, C., Simpson, B. N., Neilson, E. T., Trofymow, J. A., Metsaranta, J., & Apps, M. J. (2009). CBM-CFS3: A model of carbon-dynamics in forestry and land-use change implementing IPCC standards. Ecological Modelling, 220(4), 480–504. https://doi.org/10.1016/j.ecolmodel.2008.10.018
Lees, K., Khomik, M., Quaife, T., Clark, J., Hill, T., Klein, D., Ritson, J., & Artz, R. (2020). Assessing the reliability of peatland GPP measurements by remote sensing: From plot to landscape scale. Science of the Total Environment, 766, 142613. https://doi.org/10.1016/j.scitotenv.2020.142613
Lees, K., Quaife, T., Artz, R. R. E., Khomik, M., Sottocornola, M., Kiely, G., Hambley, G., Hill, T., Saunders, M., Cowie, N. R., Ritson, J., & Clark, J. M. (2019). A model of gross primary productivity based on satellite data suggests formerly afforested peatlands undergoing restoration regain full photosynthesis capacity after five to ten years. Journal of Environmental Management, 246, 594–604. https://doi.org/10.1016/j.jenvman.2019.03.040
Schimel, D., Pavlick, R., Fisher, J. B., Asner, G. P., Saatchi, S., Townsend, P., Miller, C., Frankenberg, C., Hibbard, K., & Cox, P. (2015). Observing terrestrial ecosystems and the carbon cycle from space. Global Change Biology, 21(5), 1762–1776. https://doi.org/10.1111/gcb.12822
Sims, D. A., Rahman, A. F., Cordova, V. D., El-Masri, B. Z., Baldocchi, D. D., Bolstad, P. V., Flanagan, L. B., Goldstein, A. H., Hollinger, D. Y., Misson, L., Monson, R. K., Oechel, W. C., Schmid, H. P., Wofsy, S. C., & Xu, L. (2008). A new model of gross primary productivity for North American ecosystems based solely on the enhanced vegetation index and land surface temperature from MODIS. Remote Sensing of Environment, 112(4), 1633–1646. https://doi.org/10.1016/j.rse.2007.08.004
Srinet, R., Nandy, S., Watham, T., Padalia, H., Patel, N. R., & Chauhan, P. (2020). Spatio-temporal variability of gross primary productivity in moist and dry deciduous plant functional types of Northwest Himalayan foothills of India using temperature-greenness model. Geocarto International, 1–13. https://doi.org/10.1080/10106049.2020.1801855
Geospatial services for security are traditionally focused on analysing isolated scenarios with potential risks for citizens’ safety. Geospatial Intelligence (GEOINT) applications such as those for Critical Infrastructure Monitoring, Disaster Monitoring or Humanitarian Aid are widely adopted by security stakeholders, and the use of EO and collateral data is already operationalized in the associated decision chains.
In a more global context, security is an intricate subject and potential scenarios can be triggered by causes of different natures. The links between events happening in the domain of security and other domains (e.g. climate, hazards, health) are highlighted in the most relevant global policies (e.g. Sustainable Development Agenda, Sendai Framework, Paris Agreement, EU Green Deal) as well as in the work programmes of several key entities such as the Group on Earth Observations (GEO). Thus, the change from the traditional paradigm of security as an isolated domain to a broader new concept of security is ongoing. One of the most relevant examples of this new security paradigm is the so-called Climate Security, which refers to how climate change related events amplify existing risks in society, endangering the safety of citizens, key infrastructures, economies or ecosystems.
To make the right decisions, undertake the appropriate measures and work towards a sustainable future on Earth, it is essential to understand the link between climate and other domains, as well as its consequences, and that is where EO data can provide high-value information. Several initiatives carried out within R&I projects or cooperative frameworks by the European Union Satellite Centre (SatCen) Research, Technology Development and Innovation (RTDI) Unit are described below as examples of Geospatial Intelligence applications addressing Climate Security issues.
- Flood events are increasing worldwide, having an impact on the safety of citizens and related activities. In the frame of the H2020 E-SHAPE project [1], the FRIEND pilot of the Disaster Showcase on Flood Risk Assessment is being implemented. The aim is to develop services based on Sentinel-1 / Sentinel-2 data and other ancillary data (e.g. climate and statistical data) to generate time-series and automatically detect changes. The output will provide both citizens and experts with a Flood Risk & Impact Assessment tool based on indicators, time-series charts and forecast maps.
- Climate related events are having an impact on population displacement and conflicts. In the frame of the H2020 GEM project [2], a Conflict Pre-Warning Map (CPW) is being developed with a dashboard considering: 1) meteorological data; 2) statistical data on demographics, socio-economic variables and political stability; and 3) relevant ancillary datasets on conflicts and migration. The outcome will show in a single entry point the correlation between these different datasets.
- The increasing scarcity of fresh water can lead to security issues. In the frame of the GEO SPACE-SECURITY Community Activity (SSCA) [3], a pilot is being developed addressing the vulnerability of an identified region and the effects on the safety and security of the population (e.g. water access and crop yield, conflict and migration). The pilot is based on Sentinel-1 interferometric products to address underground water extraction. The output will consist of maps describing the vulnerability of such areas and the risks for population and infrastructures.
[1] https://e-shape.eu/
[2] https://www.globalearthmonitor.eu/
[3] https://earthobservations.org/
Session E2.02 Climate Security: The key role of Research and Innovation, Earth Observation and co-operation to address global threats
Climate change as a direct and indirect multiplier of international crisis and conflict, and the role of Earth observation
Prof. S.A. Briggs & Prof. P. N. Cornish
At all levels – individual, local, national and international – the consequences of climate change for human security are often presented in the language of ‘risk’ and ‘threat’, framed in a mindset that is traditionally state-centric in character and instinctively defensive, reactive and military in response. As Paul Collier has noted, politics is essentially spatial and territorial – the politics of the consequences of climate change are unlikely to contradict Collier’s definition entirely.
Rather than imagine a different politics altogether, one that might allow a more perfect response to the challenges posed by climate change, here we make a grounded and pragmatic argument for doing more, and better, with the limited means already available.
Using terms derived from military strategy and operations, the consequences of climate change are often described as ‘crisis multipliers’ or ‘threat multipliers’. Thus, when climate change results in desertification and famine, conditions can be created in which known violent extremist groups, such as Boko Haram, might flourish. Similarly, the retreat of the Arctic ice cap, enabling competition for oil and gas resources and for control of navigable sea routes, might result in increased tension between NATO members on the one hand and their competitors in Russia and China on the other. These two examples suggest that the consequences of climate change can have an indirect multiplying effect on existing tensions over territory and resources.
More direct effects should also be considered, particularly when it comes to access to fresh water. Human security in northeast Syria is being affected by declining water flow in the Euphrates from Turkey, and the previous drought in Eastern Syria (Voss et al., 2013) has already been strongly argued to have exacerbated if not directly caused increased tension in population centres in Syria, leading to further unrest, civil war and the migration of over 1.5 million refugees to Europe. There is hence a credible link between Syrian drought and subsequent discontent in Europe, leading to the rise of authoritarian leaderships in several countries and to the conditions and sentiments that propelled the UK via a narrow majority into Brexit. The possibility of conflict between Egypt and Ethiopia over the effects of the Grand Ethiopian Renaissance Dam is openly discussed; with Egypt’s population growing by c.1.5 million per annum, by some accounts Egypt’s supply of fresh water could fall below the ‘absolute scarcity’ limit of 500 m³ per capita by 2025. Rising sea levels threaten the lives and livelihoods of c.150 million people around the world and might result in resource-competitive population displacements. It is sometimes said that these and similar scenarios could have a direct multiplying effect on crisis and conflict in the context of Collier’s sparse description of politics as spatial and territorial.
Furthermore, the Brookings Institution listing of failed states shows that sub-Saharan Africa, across the continent from east to west, is populated almost exclusively by countries with the least robust political systems. There we have a confluence of potentially the most severe impacts of food insecurity coupled with an absence of political resilience. This could lead to migration one or two orders of magnitude greater than that seen from Syria into Europe in the decade after 2010.
We do not argue that the state-centric and military mindsets should (or could) be discarded altogether. Neither is likely to disappear in the near future, and both can contribute to the management of the consequences of climate change. Instead, we argue for a more informed and effective response from politics, both national and international. We argue that, within the means available, the most effective response to crisis and threat multipliers is to ‘multiply’ our response to these challenges. A strategic response to the challenges of climate change would see the adoption of cross-governmental approaches in which the security and defence branches are but one component, working with others including development, health, transport, policing etc. A new area of policy might also be developed, known as ‘resource diplomacy’ – a hybrid of foreign, economic, development and security policies. The next step would be to internationalise this cross-governmental, integrated approach to ensure that best use is made of the inputs available, including EO data, information and analysis, and to ensure that the response is more anticipatory than reactive.
The UK could provide a test case for the super-informed, agency-multiplying approach of the sort we envisage. The UK Government’s Integrated Review published in March 2021 speaks of a need for ‘global resilience’ and, where climate change is concerned specifically, ‘to increase our collective resilience to the damage that has already been done, in particular supporting the most vulnerable worldwide in adapting to climate effects and nature loss.’ There is hence an increasing recognition among governments of the need for a more integrated approach to societal security and resilience, capitalising on the wealth of geospatial information now becoming available through satellite data sources.
The European Union (EU) strategy on adaptation to climate change recently adopted by the European Commission (EC) addresses the importance of improved adaptation and resilience to climate change. The impact of climate change is already visible on economies, communities and ecosystems worldwide, with consequences for citizens’ wellbeing as well as the natural and built environment. In this context, the EU Green Deal sets achieving climate neutrality by 2050 as a key goal. Thus, the need for climate change adaptation measures and processes has become urgent and necessary. In particular, a sustainable and environmentally-friendly economy requires changes in decision making processes, practices and systems to enable the prevention of hazards and natural disasters, as well as to limit potential damages. To this aim, the EC has developed a roadmap based on steps such as: i) the development of national adaptation strategies; ii) the mobilisation of regions towards the identification of adaptation measures and their deployment; and iii) the identification of financing and economic models to enlarge the potential solutions. The roadmap emphasizes the need to reduce the gap between what can be achieved using proven adaptation solutions and what is needed to achieve a rapid and far-reaching change. This is indeed a challenging issue, as it entails the implementation of transformative processes encompassing societal, environmental, administrative and financial policies and actions. To achieve successful climate adaptation and mitigation, alignment between climate adaptation measures and economic recovery is crucial, and for that societal consensus is a sine qua non. Climate adaptation and mitigation rely on innovative physical solutions, as well as deep societal changes that can only be achieved by increasing social participation in decisions related to climate adaptation and the creation of innovative region-specific regulatory, governance and bio-physical strategies.
Remote sensing can play a key role in this context. Multimodal remote sensing in particular shows significant potential, leveraging an ever-increasing diversification of available datasets at ever-increasing spatial, spectral, temporal, and radiometric resolutions. Multimodal remote sensing enhances our understanding of physical phenomena by combining records acquired by different remote sensing devices and platforms, allowing a higher granularity of information to be extracted on the physical-chemical processes occurring on the ground. The ability to provide a synoptic view of large areas at regular intervals makes multimodal remote sensing fundamental for precisely characterizing the nature and extent of dynamic phenomena, such as those affected by climate change, and for analyzing their evolution. The proliferation of remote sensing data also gives rise to larger diversity and higher dimensionality of related datasets. This development offers the opportunity for better monitoring and more precise characterization of key environmental parameters, such as: biophysical parameters assessment; natural resources use, potentials and limits; water quality assessment; atmospheric pollution estimation; monitoring natural disasters and catastrophic events.
In this context, the H2020 Green Deal project IMPETUS aims to integrate remote sensing datasets and products into a coherent multi-scale, multi-level, cross-sectoral climate change adaptation framework to accelerate the transition towards a climate-neutral and sustainable economy. This goal will be achieved through the development and validation of Resilience Knowledge Boosters (RKBs). The RKBs are open knowledge spaces customized for different regions, within which stakeholders will be able to design, monitor and evaluate climate adaptation measures using available data on climate change impacts on the environment, society (including traditions and cultural values), economy and infrastructure. These architectures can be implemented at multiple scales, e.g., at different governance levels (local-local, region-region) or at multiple administrative levels (local-regional-international). In the context of RKBs, remote sensing data will be complemented by additional data collected on the ground and assessment methods to support decision and policy making within a process of co-creation with local stakeholder communities. The result of this approach will be the co-creation of regional Adaptation Pathways, based on the exploration and sequencing of sets of possible actions aiming at optimizing adaptation and mitigation approaches to climate change in a specific region based on the needs of local communities. This is expected to lead to increased community empowerment in terms of adopting and deploying Innovation Packages as part of coherent Adaptation Pathways. By integrating remote sensing with societal, financial, administrative, economic, and environmental inputs, the RKBs will become open federated spaces for sharing data, knowledge and experiences. The RKB solution will be deployed and validated in all seven EU biogeographical regions (Continental, Coastal, Mediterranean, Atlantic, Arctic, Boreal, Mountainous) covering key community systems, climate threats, and multi-level governance regimes.
As the climate system warms, the frequency, duration, and intensity of different types of extreme weather events have been increasing. For example, climate change leads to more evaporation that may exacerbate droughts and increase the frequency of heavy rainfall and snowfall events. That directly impacts various sectors such as agriculture, water management, energy, and logistics, which traditionally rely on seasonal forecasts of climate conditions for planning their operations.
In this context, stochastic weather generators are often used to provide a set of plausible climatic scenarios, which are then fed into impact models for resilience planning and risk mitigation. A variety of weather generation techniques have been developed over the last decades. However, they are often unable to generate realistic extreme weather scenarios, including severe rainfall, wind storms, and droughts.
Recently, several works have explored deep generative models for weather generation, most of them based on generative adversarial networks (GANs). [1] proposed using generative adversarial networks to learn single-site precipitation patterns from different locations. [2] proposed a GAN-based approach to generate realistic extreme precipitation samples using extreme value theory for modeling the extreme tails of distributions. [3] presented an approach to reconstruct the missing information in passive microwave precipitation data with conditional information. [4] proposed a GAN-based approach for generating spatio-temporal weather patterns conditioned on detected extreme events.
While GANs are very popular for synthesis in different applications, they do not explicitly learn the training data distribution and therefore depend on auxiliary variables for conditioning and controlling the synthesis. Variational Autoencoders (VAEs) are an encoder-decoder generative alternative that explicitly learns the training set distribution and enables stochastic synthesis by regularizing the latent space to a known distribution. While one can also trivially control VAE synthesis using conditioning variables, such models additionally enable synthesis control by merely inspecting the latent space distribution to map where to sample in order to achieve synthesis with known characteristics.
This work explores VAEs for controlling weather field data synthesis towards more extreme scenarios. We propose to train a VAE model using a normal distribution for the latent space regularization. Then, assuming extreme events in historical data are rare, we control the synthesis towards more extreme events by sampling from the normal distribution tails, which should hold the less common data samples. We report compelling results, showing that controlling where the sampling happens in the normal latent distribution provides an effective tool for steering weather field data synthesis towards more extreme weather scenarios.
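A minimal sketch of the tail-sampling idea follows: with a VAE regularized towards a standard normal latent, rarer samples can be drawn by restricting the latent coordinates to the distribution tails via the inverse CDF. The quantile threshold q, the latent dimension, and the decoder reference are illustrative assumptions, not the paper's exact values.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
latent_dim = 64   # assumed latent size
q = 0.90          # sample only beyond the 90th percentile of N(0, 1)

def sample_tail(n: int) -> np.ndarray:
    """Draw n latent vectors whose coordinates lie in the upper tail."""
    u = rng.uniform(q, 1.0, size=(n, latent_dim))  # uniform in [q, 1)
    return norm.ppf(u)                             # inverse-CDF transform

z = sample_tail(8)
# fields = decoder(z)  # hypothetical decoder mapping latents to weather fields
```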
We used the Climate Hazards Group InfraRed Precipitation with Stations v2.0 (CHIRPS) dataset (Funk et al., 2015), a global interpolated dataset of daily precipitation with a spatial resolution of 0.05 degrees. The data range from 1981 to the present. We experimented with one-degree by one-degree bounding boxes around the spatial region delimited by the latitude and longitude coordinates (20, 75) and (21, 76), which geographically corresponds to Palghar, India. We used daily precipitation data from 01/01/1981 to 31/12/2009 as the training set and data from 01/01/2010 to 31/12/2019 as the test set.
A histogram of our dataset shows that India's monsoon period begins around day 150 of the year and continues until around day 300. We considered sequences of 32 days in this time range and 16 bounding boxes randomly picked around the coordinate centre, and selected 18,000 random samples to compose our final training (14,400) and testing (3,600) sets.
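A hypothetical data-preparation fragment matching these splits is sketched below; the file name, the "precip" variable name, and the single-box windowing are assumptions about the local data layout, not details from the original text.

```python
import numpy as np
import xarray as xr

# Assumed local copy of daily CHIRPS at 0.05 degrees; names are placeholders.
ds = xr.open_dataset("chirps-v2.0.days_p05.nc")
box = ds["precip"].sel(latitude=slice(20, 21), longitude=slice(75, 76))

train = box.sel(time=slice("1981-01-01", "2009-12-31"))
test = box.sel(time=slice("2010-01-01", "2019-12-31"))

def monsoon_windows(da: xr.DataArray, length: int = 32) -> np.ndarray:
    """Cut non-overlapping 32-day windows inside the ~day-150..300 season."""
    season = da.where(
        (da["time"].dt.dayofyear >= 150) & (da["time"].dt.dayofyear <= 300),
        drop=True,
    )
    values = season.values  # (time, lat, lon); windows may cross year edges
    starts = range(0, len(values) - length + 1, length)
    return np.stack([values[s:s + length] for s in starts])
```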
Our architecture consists of encoder and decoder networks. The encoder comprises two convolution blocks followed by a bottleneck dense layer and two dense layers that optimize the means and standard deviations defining the latent space distribution, which is sampled to derive a latent array following a standard normal distribution. We employed two down-sampling stages (one per convolutional block) to reduce the spatial input dimension by a factor of four before feeding the outcome to the bottleneck dense layer. ReLU activation functions follow the convolutional and dense layers. The decoder receives a latent array of the latent space dimension as input, which is passed through a dense layer and reshaped into 256 activation maps of size 8x8x8. These maps feed consecutive transposed convolution layers that up-sample the data to the original size. A final convolution with a single filter delivers the output.
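A hedged PyTorch sketch of this encoder/decoder follows, assuming input volumes of 1 x 32 x 32 x 32 (channel x days x lat x lon) so that two stride-2 stages yield the 8x8x8 maps mentioned above; the channel counts, bottleneck width, and latent size are illustrative choices consistent with, but not taken from, the description.

```python
import torch
import torch.nn as nn

class WeatherVAE(nn.Module):
    """Sketch of the described VAE; dimensions are assumptions."""
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 64, 3, stride=2, padding=1), nn.ReLU(),    # 32 -> 16
            nn.Conv3d(64, 128, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8 * 8, 512), nn.ReLU(),             # bottleneck
        )
        self.fc_mu = nn.Linear(512, latent_dim)
        self.fc_logvar = nn.Linear(512, latent_dim)
        self.decoder_fc = nn.Linear(latent_dim, 256 * 8 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(64, 1, 3, padding=1),            # final single-filter map
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparam. trick
        out = self.decoder(self.decoder_fc(z).view(-1, 256, 8, 8, 8))
        return out, mu, logvar
```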
For running the experiments, we used the Adam optimizer with a learning rate of 0.0001, beta1 of 0.9, and beta2 of 0.999. We implemented a warm-up period of 10 epochs before considering the regularization term in the loss, which was weighted using the beta-VAE criterion. We trained the models for 100 epochs, with 32 data samples per batch, and monitored the total loss to apply early stopping. All experiments were carried out using V100 GPUs.
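A training-loop fragment reflecting these quoted settings is sketched below (reusing the WeatherVAE sketch above); the beta value and the random stand-in batches are assumptions, as is the use of an MSE reconstruction term.

```python
import torch
import torch.nn.functional as F

model = WeatherVAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
warmup_epochs, beta = 10, 0.5   # beta value is illustrative

# Stand-in batches of 32 volumes; real training uses the CHIRPS windows.
loader = [torch.randn(32, 1, 32, 32, 32) for _ in range(4)]

for epoch in range(100):
    kl_weight = beta if epoch >= warmup_epochs else 0.0  # 10-epoch warm-up
    for x in loader:
        recon, mu, logvar = model(x)
        rec_loss = F.mse_loss(recon, x)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        loss = rec_loss + kl_weight * kl
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```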
We evaluated our results using quantile-quantile (QQ) plots, a probability plot used to compare two probability distributions. In QQ plots, quantiles of two distributions are plotted against each other; therefore, a point on the plot corresponds to one of the quantiles of a given distribution plotted against the same quantile of another distribution. In our case, one distribution is computed from input sample pixel values and the other from reconstructed sample pixel values. If the two distributions are similar, the points will approximately lie on the line where axis x equals axis y. If the distributions are linearly related, the points will approximately lie on a line, but not necessarily the identity line. We also plot the historical data distribution as a reference in the test set plots.
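A minimal construction of such a QQ plot is sketched below; the gamma-distributed arrays are stand-ins for the real and synthesized pixel-value distributions, which are not reproduced here.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-ins for test-set and VAE-output pixel values (mm/day).
rng = np.random.default_rng(0)
real = rng.gamma(0.3, 10.0, 100_000)
synthetic = rng.gamma(0.28, 10.0, 100_000)

quantiles = np.linspace(0.01, 0.999, 200)
q_real = np.quantile(real, quantiles)
q_synth = np.quantile(synthetic, quantiles)

plt.plot(q_real, q_synth, "o", markersize=3, label="synthesized vs. test")
plt.plot(q_real, q_real, "k--", label="x = y (identical distributions)")
plt.xlabel("test-set quantiles (mm/day)")
plt.ylabel("synthesized quantiles (mm/day)")
plt.legend()
plt.show()
```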
First, we observed that the historical data do not precisely define the test data distribution. This indicates that the distribution of precipitation values shifted from the training interval (1981 to 2009) to the testing interval (2010 to 2019), especially for higher quantiles, meaning that high precipitation values became more common than in the historical data. We also observed that our trained vanilla VAE model synthesized data that matched the testing distribution up to around 70 mm/day but then failed to match the quantiles for higher precipitation values (which is somewhat expected, considering we trained the model on historical data).
Concerning synthesis control, we created reference extreme weather data by selecting the 10% and 30% of samples with the greatest and lowest average precipitation values, and then evaluated whether our sampling scheme can steer the synthesis towards those samples by simply varying the quantile threshold. We synthesized samples whose distributions are coherent with those selected as references for more and less extreme weather field data.
We also observed that samples from the average standard deviation sampling are similar to those drawn from real data, as expected since they are more likely to happen. The samples synthesized using smaller standard deviation values depict weather fields with lower precipitation values, and the ones using larger standard deviation seem to show higher precipitation patterns.
Our work explored the use of variational autoencoders as a tool for controlling the synthesis of weather fields towards more extreme scenarios. An essential aspect of weather generators is controlling the synthesis for different weather scenarios under climate change. We reported that controlling where sampling happens in the known latent distribution effectively steers the synthesis towards more extreme scenarios in the precipitation dataset used in our tests. Future work will explore models that enable multiple distributions for more refined synthesis control and tackle data with multiple weather-system distributions.
[1] Zadrozny, B., Watson, C. D., Szwarcman, D., Civitarese, D., Oliveira, D., Rodrigues, E., and Guevara, J. A modular framework for extreme weather generation, arXiv:2102.04534, 2021.
[2] Bhatia, S., Jain, A., and Hooi, B. ExGAN: Adversarial generation of extreme samples, arXiv:2009.08454, 2020.
[3] Wang, C., Tang, G., and Gentine, P. PrecipGAN: Merging microwave and infrared data for satellite precipitation estimation using generative adversarial network. Geophysical Research Letters, 2021.
[4] Klemmer, K., Saha, S., Kahl, M., Xu, T., and Zhu, X. X. Generative modeling of spatio-temporal weather patterns with extreme event conditioning, arXiv:2104.12469, 2021.
In recent years, climate change has been posing increasing threats at the global level: developed and developing countries alike are affected by natural hazards, drought, floods and sea level rise, to name just a few. The more developed a country is, the better it can react to climate pressure; thus, even though climate change is a worldwide challenge, less developed countries are those paying the highest price. On the other hand, strong efforts are devoted to monitoring the Earth-atmosphere system: new satellites, new models and environmental services (among which the role of the Copernicus programme is prominent) provide continuous and accurate measurements of climate status and trends.
In this work we focus on prominent climate security issues, namely natural hazards, factors affecting food availability and climate-related health threats.
One of the main effects of the evolving climate is changing precipitation and temperature regimes, causing floods, droughts, and the evolution of the natural and anthropogenic landscape. There is a need for indicators that quantify climate risk related to different threats, contextualised on a geographic domain (specific site, region, country). In the framework of the Earth Observation for Sustainable Development (EO4SD) Climate Resilience cluster (https://eo4sd-climate.gmv.com/), a set of climate variables and indicators has been computed, using as a baseline data from the Copernicus Climate Change Service (C3S, https://climate.copernicus.eu/), the ESA Climate Change Initiative (ESA CCI, https://climate.esa.int/en/) and various other data sources, making available more than 30 indicators for climate screening, climate risk assessment and climate adaptation activities. The indicators have been made available to various entities: a notable consumer is the World Bank Climate Change Knowledge Portal (CCKP, https://climateknowledgeportal.worldbank.org/), which integrates spatially and temporally aggregated climate indicators within its country profiles. Another relevant example is the STRENgthening resilience of Cultural Heritage at risk in a changing environment through proactive transnational cooperation project (STRENCH, https://www.interreg-central.eu/Content.Node/STRENCH.html), which allows managers of natural and cultural heritage sites to assess climate risk and define mitigation actions through a dedicated webGIS tool fed by a large pool of climate indicators computed from models and satellite data.
The correlation between climate conditions and effects on animal and human health is well known, as demonstrated in several studies, while the quantification of this correlation is still under investigation, especially in remote areas where meteorological and climate information is hard to collect. EO data, associated with epidemiological data on diseases, outbreaks and other kinds of socio-health data, are relevant for analysing the impact and the cause-effect dynamics linking meteo-climate parameters to human and animal health. Furthermore, thanks to the large amount of climate data made available by EO systems, it is possible to predict the evolving risk for regions where climate conditions become favourable to the development and diffusion of disease vectors. In this work we show the results of a study that combines meteorological and climate parameters from heterogeneous data sources (satellite, in-situ, model) with health information on the distribution of Plasmodium falciparum as a proxy for Anopheles mosquito diffusion in Tanzania. Open-source data on the Plasmodium falciparum parasite rate are available from the Malaria Atlas Project database (https://malariaatlas.org/), containing monthly geo-referenced data on the number of positives and the number of tested subjects. The statistical model chosen for the analysis is based on the Bayesian approach, where the outcome variable represents the posterior probability for a human subject to be affected by Plasmodium falciparum and the likelihood is assumed to follow a binomial distribution: longitude, latitude and altitude are assumed to have a linear effect, while rainfall and temperature are assumed to have a non-linear effect (i.e. a second-order random walk).
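In one plausible formalization of this model (the abstract does not name the link function, so the logit link below is an assumption), for location-month i:

```latex
\begin{align*}
  y_i &\sim \mathrm{Binomial}(n_i,\ p_i) \\
  \mathrm{logit}(p_i) &= \beta_0 + \beta_1\,\mathrm{lon}_i + \beta_2\,\mathrm{lat}_i
      + \beta_3\,\mathrm{alt}_i + f_{\mathrm{RW2}}(\mathrm{rain}_i)
      + f_{\mathrm{RW2}}(\mathrm{temp}_i)
\end{align*}
```

where y_i is the number of positives among n_i tested subjects and the f_RW2 terms are non-linear effects with second-order random-walk priors for rainfall and temperature.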
Food security in certain areas of the world is strongly correlated with insect infestation: locusts have always represented a plague for populations living mainly in Africa and Asia. Despite the large use of pesticides to try to keep their diffusion under control, the extension of agriculture and climate change have driven an increasing impact of locusts on human life. NGOs face challenges in planning support activities in areas where locusts are destroying most of the crops and causing famine. Locust swarms move every day over large areas, and estimating their precise movement can be difficult. Within the ESA EO4YEMEN project, long climatological series, satellite observations and locations reported by the FAO Locust Watch portal (http://www.fao.org/ag/locusts/en/info/info/index.html) have been used to develop and train AI-based modules with the aim of forecasting the occurrence of new swarms. To demonstrate the impact of climate change on locust behaviour, the presence of hoppers recorded in the FAO database has been aggregated by latitude (10-degree intervals from 20W to 80E) over four periods: 1985-2004, 2004-2018, 2019 and 2020-2021 (see Figure 1): the shift of the maximum occurrence from west to east demonstrates how the pattern has changed over time. The correlation of locust occurrences with climate data and indicators has been modelled with the final aim of forecasting hopper presence. Four environmental parameters are used: temperature, precipitation, soil moisture and vegetation cover. The model is trained over the 10 days before the event occurrence, and provides a warning for the length of the forecast data used to run the model. Four machine learning approaches have been implemented: Fully Convolutional Neural Network (FCNN), Long Short-Term Memory (LSTM), Convolutional LSTM (ConvLSTM), and Support Vector Machine (SVM). ConvLSTM provided the best performance among the different experiments (accuracy: 0.96, macro average: 0.49). The main issue faced was the unbalanced database: while the FAO Locust Watch database provides locust detections, a dedicated strategy had to be implemented to obtain a similar amount of points with “no detection”.
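A hedged sketch of such a ConvLSTM classifier follows: ten daily time steps of the four environmental channels over a small spatial patch, predicting hopper presence/absence. The patch size, layer widths, and single-layer design are illustrative assumptions, not the project's actual configuration.

```python
import tensorflow as tf

# Input: (days, lat, lon, channels) = (10, 16, 16, 4), i.e. the 10 days
# before the event with temperature, precipitation, soil moisture and
# vegetation channels; output: probability of hopper presence.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10, 16, 16, 4)),
    tf.keras.layers.ConvLSTM2D(32, kernel_size=3, padding="same",
                               return_sequences=False),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(hoppers present)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

With the heavy class imbalance noted above, class weights or resampled "no detection" points would be passed to model.fit; accuracy alone is optimistic, which is consistent with the low macro average reported.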
Climate service centers translate scientific data generated by the scientific community into locally relevant information that helps with decision-making and policy setting. These translations are important across all disciplines and require various levels of external data integration and fusion. Climate service centers combine future climate projections from modelling centers, remotely sensed data from satellite instruments, and ground measurements from observation networks to create climate products and services tailored to decision-makers' needs. Although the tailoring process is specific to local contexts, climate service centers face similar challenges, and with more data and more platforms serving information for climate security, the integration challenges are growing. This calls for extending the concept of FAIR data to FAIR climate services, with FAIR meaning Findable, Accessible, Interoperable and Reusable. Thus, not only data but full climate service information systems should adhere to the FAIR principles, to be realized through R&I and cooperation to address the global threats of climate change. This requires agreements on metadata aspects for discovery, application programming interfaces (APIs) and resource models for interaction with the climate service information system. Here, integration experts such as OGC can help because they combine standards setting with best-practice development and experimentation. It starts with discovery, where many open issues are already being addressed by R&I, and with accessibility, which can improve quickly thanks to RESTful APIs. With the GEMET and INSPIRE registries there are promising starting points for achieving semantic interoperability, though linked-data principles still need to be further explored to document how data were captured, produced, processed, and fused, so that we stay on top of data complexity and integration.
On an operational level, two possible approaches can be considered in responding to the global threats of climate change: reducing and stabilizing the levels of heat-trapping greenhouse gases in the atmosphere (“mitigation”) and/or adapting to the climate change already in the pipeline (“adaptation”). Climate services that fuse climate change scenario data with local data such as topography, economy, infrastructure, or demographics provide a solid base for adaptation modeling. Thus, these services allow us to understand how to live with and adapt to the unavoidable climate change. At the same time, there is an urgent need for actions to address the mitigation side. Here, monitoring of essential variables based on Earth Observation data still has to catch up with the many approaches based on assumptions and rough estimates.
Implemented in the UNFCCC policy frames, research and systematic observation play an important role in fostering climate information production and provision. International initiatives like the ESA Climate Change Initiative or the European atmospheric monitoring services are built on satellite data production. For example, a critical issue is to enhance the certainty of atmospheric trace gas monitoring, e.g. of CO2 or methane. Thanks to the increasing quality of affordable sensors, ground-truthing analytics can be implemented in Climate Service Information Systems and related Spatial Data Infrastructures. Data fusion of such IoT and sensor data with satellite imagery is an upcoming opportunity to enhance the precision of Earth Observation based climate services. Furthermore, in bridging EO to the numerical model data of climate change future projection scenarios, or shorter time scales like the six-month seasonal forecasts, there is a key role for R&I and cooperation in addressing global threats, such as hurricane prediction, where new data and information products can be provided.
We will present ongoing efforts to meet these objectives, discuss experiences with operational infrastructures, and propose actions to establish climate information systems capable of answering the leading questions of a broad user community to achieve climate security and keep climate change in check.
The Copernicus Sentinel-1 mission is the first of the Sentinel dedicated missions, developed and operated by the European Space Agency (ESA) in the frame of the Copernicus Earth Observation programme led by the European Union (EU). Sentinel-1 is based on a constellation of two SAR satellites that ensure continuity of C-band SAR observations for Europe. Sentinel-1A and Sentinel-1B were launched from Kourou on Soyuz rockets on 3 April 2014 and 25 April 2016, respectively.
The routine operations of the constellation are on-going and performed at full mission capacity. The mission is characterized by large-scale and repetitive observations, systematic production and a free and open data distribution policy. Sentinel-1 data are routinely used by Copernicus and many operational services, as well as in the scientific and commercial domain. The mission has allowed the development of many applications in various areas over land and seas, for routine monitoring or support to emergency management actions.
The presentation will address the mission status including an outlook on the foreseen evolutions, for instance in terms of constellation evolution (including the new concept of a satellite in stand-by in orbit), observation scenario, and products/services provided to users.
The Copernicus Sentinel-2 is an Earth Observation mission developed and operated by the European Space Agency (ESA) in the frame of the Copernicus Earth observation programme of the European Union (EU).
The mission features a Multi-Spectral Instrument (MSI) on board a constellation of two satellites: Sentinel-2A, launched in June 2015, and Sentinel-2B, launched in March 2017. Two additional satellites (Sentinel-2C and Sentinel-2D) are in production and will allow the extension of the mission, to reach typically 2 decades of seamless observations.
The Sentinel-2 mission offers an unprecedented combination of systematic global coverage of land and coastal areas, a high revisit of five days under the same viewing conditions, high spatial resolution (10, 20 and 60 m depending on the spectral band), and a wide field of view for multispectral observations from 13 bands in the visible, near infra-red and short wave infra-red regions of the electromagnetic spectrum.
The Copernicus Sentinel-2 mission provides data to a large range of services and applications in many domains (e.g. land monitoring, marine environment monitoring, atmosphere monitoring, emergencies management and security) relying on multi-spectral high spatial resolution optical observations over global terrestrial and coastal regions.
Since the launch of the first satellite unit in 2015, the Sentinel-2 mission has been a major asset of the Copernicus programme, enabling a large amount and range of downstream applications, services and scientific publications/results. The Copernicus Sentinel-2 mission has become the most cited European mission in peer-reviewed scientific journals. Sentinel-2 is also the Sentinel mission with the largest data volumes being distributed. This has been achieved thanks to its data coverage, its excellent radiometric and geometric data quality, and its free and open data access policy.
This presentation provides an update on the Copernicus Sentinel-2 mission operations status, including an outlook on the foreseen evolutions (for instance in terms of constellation evolution, observation scenario, and products/services provided to users).
Copernicus Sentinel-3 is an Earth Observation mission operated jointly by EUMETSAT and the European Space Agency (ESA) in the frame of the Copernicus Earth Observation programme of the European Union (EU).
The mission features an optical suite of instruments (OLCI and SLSTR) and an altimeter (SRAL) plus a set of support instruments for precise orbit determination and wet tropospheric correction (DORIS, LRR, GNSS, MWR), each on board a constellation of two satellites: Sentinel-3A, launched in February 2016, and Sentinel-3B, launched in April 2018.
The main objective of the Sentinel-3 mission is to measure sea surface topography, sea and land surface temperature, and ocean and land surface colour with high accuracy and reliability to support ocean forecasting systems, environmental and climate monitoring. In addition a set of atmospheric products has been developed.
Since the launch of the first satellite unit in 2016, the Sentinel-3 mission has been a major asset of the Copernicus programme, delivering a large range of user data and services as an operational mission as well as extensive scientific publications/results.
This presentation provides an update on the Copernicus Sentinel-3 mission operations status, including an outlook of the foreseen evolutions.
The Copernicus Sentinel-5 Precursor mission, launched on 13 October 2017, is the first atmospheric Sentinel and supports Copernicus services, in particular for atmospheric applications, including activities such as air quality, ozone and climate monitoring and forecasting. The TROPOMI instrument (TROPOspheric Monitoring Instrument) is the single payload of the Sentinel-5 Precursor satellite and was co-funded by ESA and The Netherlands. Sentinel-5 Precursor ensures on the one hand continuity of atmospheric satellite data provision from the ESA ERS (GOME), ENVISAT (SCIAMACHY), and the USA EOS-AURA (OMI) missions in the various application and scientific domains, and prepares on the other hand for the future atmospheric Sentinel-4 and Sentinel-5 instruments hosted on EUMETSAT platforms. Key features of the TROPOMI instrument are global coverage within one day and a spatial resolution of about 5.5 x 3.5 km (directly below the satellite). All 13 Sentinel-5P products have been released to the public in a staggered approach starting in July 2018. The latest release took place in November 2021 and included the Ozone Profile product. All data are provided to the public through the Copernicus Open Access Hub at https://scihub.copernicus.eu/. This presentation provides information about the Sentinel-5 Precursor mission status and latest results using TROPOMI measurements.
Copernicus Sentinel-6 Michael Freilich (Sentinel-6 MF) is an Earth Observation mission operated jointly by EUMETSAT, ESA, NASA/JPL, NOAA, and CNES, with EUMETSAT assuming the overall system responsibility. The roles and responsibilities of each of the partners capitalise on the expertise of the various programme partners, optimising the mission’s operations efficiency and financial investment. The mission is part of the Copernicus component of the EU Space Programme of the European Union (EU).
The primary mission objective is to provide continuity of ocean topography measurements beyond the TOPEX/Poseidon, Jason, OSTM/Jason-2, and Jason-3 cooperative missions, for determining sea surface height, ocean circulation, and sea level. The Sentinel-6 MF mission is designed to ensure the continuity of the nearly 30-year Global Mean Sea Level (GMSL) record, taking the baton from the aging Jason-3 satellite as the reference altimetry mission in the Ocean Surface Topography Virtual Constellation of the Committee on Earth Observation Satellites (CEOS OST VC).
An innovative aspect of the Sentinel-6 MF is the on-board Poseidon-4 altimeter instrument, capable of providing at the same time the traditional low-resolution measurements (comparable to the Jason-3 LRM or Low Resolution Mode) and high-resolution SAR measurements (Synthetic Aperture Radar). The simultaneous provision of this data makes the Sentinel-6 MF mission the first of its kind in operations.
The on-board instrument portfolio supporting the altimetry mission is complemented by European GNSS-POD and DORIS receivers, as well as by an Advanced Microwave Radiometer with Climate capability (AMR-C) and a Laser Retroreflector Array (LRA) provided by NASA/JPL.
A secondary mission objective is the collection of high-resolution vertical profiles of atmospheric temperature using the GNSS Radio Occultation sounding technique, to assess changes in the troposphere and the stratosphere and to support numerical weather prediction. The GNSS-RO receiver is provided by NASA/JPL.
The paper will focus on the Copernicus Sentinel-6 Michael Freilich mission operations status, providing also an outlook of the foreseen evolutions.
The operations of the Sentinel missions rely on the Copernicus Space Component Sentinels Ground Segment (CSC GS). The CSC GS plans the instrument observations and ensures the acquisition of satellite data and all operations required to generate the user-level data and make them available for user exploitation. By the end of 2021, more than 45 million products had been made available for open and free user access, with more than 400 PB of data downloaded by users. A major transformation of the Ground Segment architecture and operations concept is being achieved, further enhancing the operations flexibility and robustness and increasing the capability to adapt to user demand and needs. The new Copernicus Space Component Sentinels Ground Segment relies on an open architecture favouring the development and operations of industrial services reusable in different environments, including in a commercial context. By doing so, ESA intends to foster the development of a long-term European ecosystem at the edge of innovation, while preventing industrial and technical lock-in and preserving user data sovereignty in a secure and safe environment.
The availability of such an open ecosystem will foster the development and exploitation of user applications in a federated and unified environment ready to be reused in the frame of private or public initiatives, maximising the potential of Copernicus and optimising the federation with other European initiatives such as DestinE.
K.R. Miner (1), C.E. Miller (1), Rachel Mackelprang (2), Arwyn Edwards (3)
1. Jet Propulsion Laboratory, California Institute of Technology
2. California State University at Northridge, California USA
3. Aberystwyth University, Wales, UK
Climate change accelerates permafrost degradation throughout the Arctic, introducing known and unknown biotoxicological hazards previously sequestered in permafrost [1].
While infrastructure failures due to permafrost thaw are well documented, the biological, chemical and radiological hazards released as permafrost thaws are less understood. Both point-source and diffuse biotoxicological hazards present a diversity of potentially overlapping risks as the Arctic thaws. Only minimally characterized, emerging methanogenic bacteria, unclassified viruses, and pathogens bring unknown paleo-ecosystem dynamics into the modern age [2]. These species join various anthropogenic materials, including banned organic pesticides, mercury, oil, and nuclear remnants. However, variability within Arctic permafrost environments results in additional uncertainty in the location, timing, and rate of emergence of these hazards. Known point-source dispersal locations include mining areas (arsenic (As), cadmium (Cd), nickel (Ni)), Camp Century (~240 km from Thule, Greenland), and nuclear test and submarine scuttle sites in the Russian Arctic (Kara and Barents Seas) [1]. Secondary, or non-point-source, emissions include the release of organochlorine pollutants from glaciers [3], atmospheric mercury deposition, and environmental transport of microbes.
In order to understand the dynamics of this emergent risk from the new Arctic, there is an urgent need to quantify the hazards before they emerge. To do this, a combination of remote sensing, in situ measurements, and modeling is needed to better integrate micro-scale dynamics (including permafrost thaw) into Earth system models. Paralleling the growing impacts of carbon transformation and release as methane, biotoxicological hazards are poised to become a new source of toxins in the environment as the New Arctic continues to change.
1. Miner, K. R. et al. Emergent biogeochemical risks from Arctic permafrost degradation. Nat. Clim. Chang. (2021). https://doi.org/10.1038/s41558-021-01162-y
2. Edwards, A. et al. Microbial genomics amidst the arctic crisis. Microb. Genomics 6, 1–20 (2020).
3. Miner, K. R. et al. A screening-level approach to quantifying risk from glacial release of organochlorine pollutants in the Alaskan Arctic. J. Expo. Sci. Environ. Epidemiol. 29, (2018).
Carbon dioxide (CO2) and methane (CH4) have been recognized by the Intergovernmental Panel on Climate Change (IPCC) as the most important of the Earth's greenhouse gases that are directly modified by human activities and that are the main contributors to global warming.
In order to reliably predict the climate of our planet, and to help inform political conventions on greenhouse gas emissions such as the Paris Agreement of 2015, adequate knowledge of both natural and anthropogenic sources of these greenhouse gases (GHG) and their feedbacks is needed. Despite the recognized importance of this issue, our current understanding of the sources and sinks of CO2 and CH4 is still inadequate. This is particularly true for the Arctic, where large wetlands and permafrost areas constitute the most relevant but least quantified ecosystems for the global carbon budget.
The Arctic is warming twice as fast as the global average, making climate change's effects more intense at the poles than anywhere else in the world. Arctic soils account for half of the organic carbon stored in soils globally, and rising temperatures and thawing permafrost threaten its stability. The release of CO2 and CH4 from thawing permafrost will amplify global warming and further accelerate permafrost degradation. Fires in boreal forests and tundra peatlands are direct sources of CH4 and CO2 and also accelerate the thawing of permafrost, leading to the release of carbon. There is increasing, but divergent, evidence that a changing climate in the modern period has already shifted these ecosystems from net sinks of carbon to net sources, or will do so in the near future. The high-latitude natural sources also overlap with geologic CH4 sources (e.g. natural gas seeps in the Mackenzie Delta), as well as anthropogenic sources from fossil fuel extraction in e.g. Alaska or Alberta, making the separation of natural and anthropogenic signals difficult.
Two methodologies are used to infer GHG emissions and understand the global carbon budget. The bottom-up approach assesses emissions by aggregating inventories fed by fuel consumption data, local activity data, and vegetation models. In contrast, the top-down approach is based on atmospheric measurements and inverse modelling. While the latter offers the potential to verify reported emissions with independent measurements, the two approaches still disagree to a degree that prevents accurate budgeting of the major greenhouse gases and fails to fully explain recent atmospheric trends.
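To make the top-down step concrete: in its simplest linear-Gaussian form (a textbook formulation, not the specific CoMet setup), the inversion updates a prior (bottom-up) flux estimate $x_a$ using concentration observations $y$ and an atmospheric transport operator $\mathbf{H}$:

$$\hat{x} = x_a + \mathbf{B}\,\mathbf{H}^{T}\!\left(\mathbf{H}\,\mathbf{B}\,\mathbf{H}^{T} + \mathbf{R}\right)^{-1}\!\left(y - \mathbf{H}\,x_a\right),$$

where $\mathbf{B}$ and $\mathbf{R}$ are the prior and observation error covariances. The disagreement between the two approaches enters both through the prior $x_a$ and through the sparsity of $y$ at high latitudes.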
Three prerequisites are required to optimally apply the top-down methodology: First, the atmosphere must be measured at high spatial and temporal resolution via networks of ground-based stations and aircraft. Second, remote sensing is necessary, from satellites to give global coverage, from the ground to calibrate the satellite data, and from aircraft to bridge the scales. Third, modelling is needed, to synthesize the results and convert the concentration measurements to surface fluxes.
The CoMet 2.0 mission intends to address this objective with a multi-disciplinary approach providing relevant measurements from Arctic regions using a sophisticated suite of scientific instrumentation onboard the German research aircraft HALO to support state-of-the-art Earth system models. At the same time, CoMet intends to support and improve current and future satellite missions, which struggle to make high-quality measurements given the low sun elevation, low albedo, and adverse cloud conditions in the Arctic.
CoMet 2.0 Arctic is foreseen for a six-week intensive operation period from August to mid-September 2022 targeting boreal wetlands and permafrost areas in the Alaskan and Canadian Arctic and potentially embedded oil and gas extraction sites. A total of 120 flight hours, including transfer flights from Germany, are planned, enabling approximately 11-13 scientific flights on site.
CoMet 2.0 is a sequel to CoMet 1.0, which was successfully carried out in Europe in 2018 and concentrated on anthropogenic emissions and instrument tests. CoMet 2.0 shall now transfer the methodologies developed during the first mission to the Arctic region (and, at a later stage, into the tropics).
The High Altitude and LOng Range Research Aircraft (HALO) is a research platform for atmospheric and Earth system research, operated by the German Aerospace Center (DLR) on behalf of a consortium of major German research centers and universities. Based on a standard Gulfstream G550 twin-engine jet, the aircraft has been significantly modified to make it suitable for scientific use. HALO has a maximum range of about 10 000 km or > 10-h endurance, a ceiling altitude of 14.5 km, and is able to carry a scientific payload of up to 3000 kg.
For the CoMet 2.0 mission, HALO will be equipped with a suite of sophisticated instruments measuring the CO2 and CH4 columns between the aircraft and the ground using remote sensing, as well as in-situ instruments. The remote sensing package comprises the CH4 and CO2 lidar CHARM-F (Amediek et al., 2017) and the imaging spectrometer MAMAP2DL. Both instruments act as demonstrators for upcoming greenhouse gas missions. CHARM-F is operated by DLR and designed as the airborne demonstrator for the upcoming German-French MERLIN mission (Ehret et al., 2017). MAMAP2DL is operated by the University of Bremen and has significant similarities to the spectrometer foreseen for the Copernicus CO2 Monitoring Mission (CO2M) (Janssens-Maenhout et al., 2020). The remote sensors are supported by several in-situ instruments to measure the main greenhouse gases and related trace species, as well as an air sampler that collects air samples at flight level for later analysis in the laboratory. Furthermore, instruments to provide detailed information about the standard meteorological parameters (pressure, wind, humidity) will also be on board. In order to link those in-flight data to profiles, the launch of small meteorological sondes is foreseen.
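For orientation, an integrated-path differential-absorption (IPDA) lidar such as CHARM-F infers the column from the ratio of ground returns at a strongly absorbed (online) and a weakly absorbed (offline) wavelength; the standard retrieval relations (see, e.g., Amediek et al., 2017; shown here only to illustrate the measurement principle) are

$$\mathrm{DAOD} = \frac{1}{2}\,\ln\!\left(\frac{P_{\mathrm{off}}\,E_{\mathrm{on}}}{P_{\mathrm{on}}\,E_{\mathrm{off}}}\right), \qquad X_{\mathrm{CH_4}} = \frac{\mathrm{DAOD}}{\mathrm{IWF}},$$

where $P_{\mathrm{on/off}}$ are the received ground-return powers, $E_{\mathrm{on/off}}$ the transmitted pulse energies, and IWF the integrated weighting function computed from meteorological profiles along the column.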
CoMet 2.0 Arctic will be coordinated in conjunction with the Arctic-Boreal Vulnerability Experiment (ABoVE) which is a NASA Terrestrial Ecology Program field campaign conducted in Alaska and Western Canada (Miller et al., 2019). ABoVE is a large-scale study of environmental change and its implications for social-ecological systems focused on gaining a better understanding of the vulnerability and resilience of Arctic and boreal ecosystems to environmental change and providing the scientific basis for informed decision-making to guide societal responses from local to international levels. Both missions, ABoVE and CoMet 2.0, are linked through the transatlantic initiative AMPAC (Arctic Methane and Permafrost Challenge) that has recently been inaugurated by the US and European Space Agencies, NASA and ESA.
References:
Amediek, A., G. Ehret, A. Fix, M. Wirth, C. Büdenbender, M. Quatrevalet, C. Kiemle, and C. Gerbig, "CHARM-F—a new airborne integrated-path differential-absorption lidar for carbon dioxide and methane observations: measurement performance and quantification of strong point source emissions," Appl. Opt. 56, 5182-5197 https://doi.org/10.1364/AO.56.005182 (2017).
Ehret, G., P. Bousquet, C. Pierangelo, M. Alpers, B. Millet, J.B. Abshire, H. Bovensmann, J.P. Burrows, F. Chevallier, P. Ciais, C. Crevoisier, A. Fix, P. Flamant, C. Frankenberg, F. Gibert, B. Heim, M. Heimann, S. Houweling, H.W. Hubberten, P. Jöckel, L. Law, A. Löw, J. Marshall, A. Agusti-Panareda, S. Payan, C. Prigent, P. Rairoux, T. Sachs, M. Scholze, M. Wirth, “MERLIN: A French-German Space Lidar Mission Dedicated to Atmospheric Methane,” Remote Sens. 9, 1052 https://doi.org/10.3390/rs9101052 (2017).
Fix, A., A. Amediek, C. Büdenbender, G. Ehret, C. Kiemle, M. Quatrevalet, M. Wirth, S. Wolff, H. Bovensmann, A. Butz, M. Gałkowski, C. Gerbig, P. Jöckel, J. Marshall, J. Nęcki, K. Pfeilsticker, A. Roiger, J. Swolkień, M. Zöger and the CoMet team, “CH4 and CO2 IPDA Lidar Measurements During the CoMet 2018 Airborne Field Campaign,” EPJ Web of Conferences 237, 03005 https://doi.org/10.1051/epjconf/202023703005 (2020).
Janssens-Maenhout, G., Pinty, B., Dowell, M., Zunker, H., Andersson, E., Balsamo, G., Bézy, J.-L., Brunhes, T., Bösch, H., Bojkov, B., Brunner, D., Buchwitz, M., Crisp, D., Ciais, P., Counet, P., Dee, D., Denier van der Gon, H., Dolman, H., Drinkwater, M. R., Dubovik, O., Engelen, R., Fehr, T., Fernandez, V., Heimann, M., Holmlund, K., Houweling, S., Husband, R., Juvyns, O., Kentarchos, A., Landgraf, J., Lang, R., Löscher, A., Marshall, J., Meijer, Y., Nakajima, M., Palmer, P. I., Peylin, P., Rayner, P., Scholze, M., Sierk, B., Tamminen, J., & Veefkind, P. (2020). Toward an Operational Anthropogenic CO2 Emissions Monitoring and Verification Support Capacity, Bulletin of the American Meteorological Society, 101(8), E1439-E1451. https://doi.org/10.1175/BAMS-D-19-0017.1 (2020).
Miller, C. E., Griffith, P., Goetz, S., Hoy, E., Pinto, N., McCubbin, I., Thorpe, A. K., Hofton, M. M., Hodkinson, D. J., Hansen, C., Woods, J., Larsen, E. K., Kasischke, E. S., and Margolis, H. A.: An overview of ABoVE airborne campaign data acquisitions and science opportunities, Environ. Res. Lett. 14(8), 080201, https://doi.org/10.1088/1748-9326/ab0d44 (2019).
The inability to accurately quantify methane (CH4) emissions across spatial scales has led to large uncertainties in the Arctic CH4 budget and its future contributions to the permafrost carbon feedback. Our analysis of AVIRIS data from the Arctic Boreal Vulnerability Experiment (ABoVE) Airborne Campaigns revealed microtopographic CH4 hotspots in diverse ecosystems across the ABoVE domain [Elder 2020]. We quantified relationships of these CH4 hotspots with extraordinary fluxes and sub-surface permafrost thaw at local scales [Elder 2021] and with geomorphological controls at regional scales [Baskaran 2021; Elder 2021]. In parallel, we developed a novel L-band SAR algorithm to measure bubbles trapped in winter ice to quantify CH4 ebullition in lakes [Engram 2020], giving us unprecedented insights into terrestrial and aquatic CH4 hotspots. The scaling analyses that we have pioneered in the ABoVE domain anticipate the extension of these methods to the pan-Arctic with the launch of the NASA-ISRO SAR mission (NISAR, LRD 2023), NASA’s Surface Biology and Geology mission (SBG, LRD 2028) as well as ESA’s Copernicus expansion missions CHIME (LRD 2028) and ROSE-L (LRD 2028). Similarly, comparisons of our AVIRIS and SAR products with the CHARM-F and MAMAP2D CH4 products to be acquired during the 2022 ABoVE-CoMet 2.0 Arctic campaign in Alaska and NW Canada [Fix 2018] will accelerate science return from the MERLIN (LRD 2027) and CO2-M (LRD 2026) missions. These analyses will provide critical insights into the CH4 component of the permafrost carbon feedback and enable the use of satellites to monitor its trajectory on interannual to decadal time scales.
References
Baskaran, L, CD Elder, DR Thompson, AA Bloom, S Ma, CE Miller, Geomorphological Patterns of Remotely Sensed Methane Hot Spots in the Mackenzie Delta, Canada, Environmental Research Letters (ABoVE Special Collection), Manuscript No.: ERL-112521, accepted
Elder, C. D., Thompson, D. R., Thorpe, A. K., Hanke, P., Walter Anthony, K. M., & Miller, C. E. (2020). Airborne mapping reveals emergent power law of Arctic methane emissions. Geophysical Research Letters, 47, e2019GL085707. https://doi.org/10.1029/2019GL085707
Elder, Clayton D., David R. Thompson, Andrew K. Thorpe, Latha Baskaran, Philip J. Hanke, Stephanie James, Burke Minsley, Neal Pastick, Katey M. Walter Anthony, Charles E. Miller, 2021. Characterizing Extreme Methane Emissions from Thermokarst Hotspots, Global Biogeochemical Cycles, Advance Online: 2 December 2021. https://doi.org/10.1029/2020GB006922
Engram, M., Anthony, K.W., Sachs, T., Kohnert, K., Serafimovich, A., Grosse, G. and Meyer, F.J., 2020. Remote sensing northern lake methane ebullition. Nature Climate Change, 10(6), pp.511-517. https://doi.org/10.1038/s41558-020-0762-8
Fix, A., Amediek, A., Bovensmann, H., Ehret, G., Gerbig, C., Gerilowski, K., Pfeilsticker, K., Roiger, A. and Zöger, M., 2018. CoMet: An airborne mission to simultaneously measure CO2 and CH4 using lidar, passive remote sensing, and in-situ techniques. In EPJ Web of Conferences (Vol. 176, p. 02003). EDP Sciences. https://doi.org/10.1051/epjconf/201817602003
Atmospheric methane concentrations have been increasing constantly over the past decades, with the growth rate in 2020 the highest since systematic measurements began in 1984. The measurements suggest a significant increase in atmospheric methane from the high latitudes in 2020, but what role Arctic methane emissions played is still unclear. One of the major challenges in resolving this question is the limited understanding of methane fluxes from natural wetlands during the non-growing season, which constitute up to 40% of annual Arctic methane emissions based on ground-based measurements. Current process-based models, however, largely underestimate methane fluxes during the non-growing season. This biases the seasonality of Arctic methane fluxes and thus affects estimates of Arctic methane budgets through the biased prior distributions used in atmospheric inversions. Satellite retrievals are ideal for providing continuous measurements in the spatio-temporal domain. However, current satellite retrievals of column methane concentration by passive instruments, i.e., GOSAT and TROPOMI, are limited by solar zenith angle at high latitudes and unable to make retrievals during the non-growing season. Here we propose an airborne remote sensing technique, the High-Altitude Lidar Observatory (HALO), based on the differential absorption lidar (DIAL) and high spectral resolution lidar (HSRL) techniques, to measure weighted atmospheric methane column concentrations, aerosol and cloud distributions, and planetary boundary layer heights in the Arctic during the non-growing season. We conduct a preliminary analysis with a dynamic global vegetation model (LPJ-wsl) and an atmospheric inversion model to demonstrate how non-growing-season fluxes are missing from the current Arctic methane budget. By comparing with TROPOMI column methane observations, we show how active remote sensing of column methane is needed to enhance our understanding and monitoring of the Arctic methane budget. The new measurement could provide the accuracy and sensitivity needed to improve the understanding of the Arctic methane budget and serves as an airborne simulator for the Atmospheric Boundary Layer Lidar Pathfinder (ABLE), a cross-cutting active trace-gas lidar mission concept aimed at measuring methane and water vapor from affordable space-based platforms.
Vast areas of the Arctic host ice-rich permafrost, and with climate warming these permafrost regions become increasingly vulnerable to thaw. This thaw manifests itself first in a slow but gradual deepening of the seasonally thawed active layer (press disturbances) and secondly in a more rapid and local way through the development of thermokarst features (pulse disturbances). Both forms of permafrost degradation have major impacts by changing ecosystem and hydrological equilibria, and impact the Earth system on a global scale by reinforcing climate change with the additional mobilization of organic carbon that was previously stored in the frozen soil. One important thermokarst feature arising from pulse disturbances is the retrogressive thaw slump (RTS). RTSs initiate with the exposure of ice-rich soils, subsequent thaw, and the formation of a steep headwall. During the summer, the ice in the headwall melts, which leads to a continuous retreat. This process can mobilize vast quantities of sediment on a time scale of years. In the context of recent climate warming, an increase in the number and size of RTSs in permafrost regions has been found. However, inter-regional differences in the rates of RTS activity in terms of their magnitude, distribution and controls remain poorly constrained, as are the implications for carbon and nutrient cycles.
For the investigation of landslides in temperate climate zones, frequency distributions and scaling laws of various forms have been used to quantify hazards and ecosystem impacts, as well as to improve the process understanding of landslide activity. The variability and similarities of these laws in terms of landslide properties and area characteristics have played an important role. The soil properties (ice content) and time scales (single event vs. polycyclic multi-year retreat) differ between RTSs and other landslides, but the methods used, and the universality of landslide characteristics, could nevertheless provide valuable insights into RTS drivers and controls. Furthermore, due to the strong spatial variability of soil organic carbon densities and of RTS activity, past model estimates of the impacts of RTSs on the carbon cycle have large uncertainties. Quantifying the induced volumetric change rates and the associated RTS frequency distributions and scaling laws, as well as their variability across regions, has the potential to greatly improve estimates of future carbon release rates.
In this presentation we will show results of an analysis in which we used digital elevation models generated from TanDEM-X observations to derive volume and area change rates for RTSs across the Arctic. In a first part we compare RTS characteristics based on elevation model differences over a 5-year period from winter 2011/12 to winter 2016/17 and contrast 10 study sites (Eurasia: 5, North America: 5), with a total area of 220,000 km² and a total of 1853 RTSs in the sample. We found inter-regional differences in mobilized volumes, scaling laws and terrain controls. The distributions of RTS area and volumetric change rates follow a probability density function known from landslides in temperate climate zones, with a distinct peak and an exponential decrease towards the largest RTSs. We found that the distributions in the high Arctic are shifted towards larger values than at the other study sites. We also analyzed the area-to-volume scaling, which can potentially be used to estimate volumetric changes when only area change measurements are available.
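As an illustration of the area-to-volume scaling analysis, a power law of the form V = alpha * A^gamma can be fitted by least squares in log-log space. The sketch below uses made-up numbers for illustration only, not our TanDEM-X results:

```python
import numpy as np

def fit_area_volume_scaling(areas, volumes):
    """Fit the power law V = alpha * A**gamma by least squares
    in log-log space.

    areas   -- RTS planimetric change areas (m^2)
    volumes -- corresponding volumetric changes (m^3)
    """
    log_a, log_v = np.log10(areas), np.log10(volumes)
    gamma, log_alpha = np.polyfit(log_a, log_v, 1)
    return 10 ** log_alpha, gamma

# Hypothetical example values, for illustration only
areas = np.array([5e2, 2e3, 8e3, 3e4, 1e5])
volumes = np.array([6e2, 3.5e3, 1.8e4, 9e4, 4e5])
alpha, gamma = fit_area_volume_scaling(areas, volumes)
print(f"V ≈ {alpha:.2f} * A^{gamma:.2f}")
```

With a fitted exponent gamma, volumetric change can then be approximated from area change alone where no elevation differences are available.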
In a second part we will show first results of a study on the northern Taymyr Peninsula, Siberia, where, in addition to the period from winter 2011/12 to winter 2016/17, observations from 2017/18 and 2020/21 are available. This allows us to investigate temporal changes in RTS activity and scaling laws. Here we found a strong increase in all quantities describing RTS activity; for example, the number of RTSs increased from 82 to 1404. With the additional use of higher-temporal-resolution Sentinel-2 images we could attribute this strong increase to the 2020 Siberian heatwave. Furthermore, using soil organic carbon maps and several model assumptions for unknowns such as the ground-ice content, we quantified for the first time the amount of soil organic carbon mobilized by RTS activity over a large region.
Our results have the potential to improve the modelling and monitoring of Arctic carbon, nutrient and sediment cycles and can guide future satellite missions and observation strategies.
Climate change is affecting the Arctic dramatically; the region is warming faster than any other place on Earth. How will melting permafrost change surface properties, and how will these changes affect potential methane emissions in the Arctic? This is the key question of the ESA-NASA joint initiative AMPAC (Arctic Methane and Permafrost Challenge), launched in 2019. Answering it requires collaboration and coordinated joint efforts between different research communities.
At present, there are large discrepancies between emission estimates based on bottom-up and top-down techniques. The AMPAC Working Group WG.1 focuses on improving the various observations that can contribute to reducing these differences. In this presentation, we discuss the status of satellite-based methane observations at high latitudes and over permafrost regions, and recent advances in the retrieval algorithms. We aim to contribute to WG.1 by supporting interaction between different observation communities in comparing observations, evaluating their spatial and seasonal variability, improving the interpretation of the data, and identifying observational gaps.
Thanks to the success of the Copernicus programme and the general awareness of satellite Earth Observation (EO) data, a growing number of cloud-based EO services are now offered on the European and global market for working on and with the available EO data. From the user perspective this currently creates confusion, due to the large number of available services, the lack of comparability between offers, and an inherent risk of vendor lock-in when selecting offers based on proprietary and/or closed-source solutions.
This situation, alongside other issues such as growing data volumes and computational requirements, led to the development of the openEO API (see https://openeo.org) starting in 2017, which already greatly reduces the risk of vendor lock-in when a sufficient number of back-ends is available. The original openEO project was successfully concluded at the end of 2020 and provided the first version of the openEO API, which has since been implemented in a growing number of cloud back-ends and three client libraries, supporting R, Python, and JavaScript users.
This work is being continued under the umbrella of ESA in the form of openEO Platform, and the concepts are further evolved by introducing new aspects of federating different cloud back-ends that go much further than just offering the same interfaces from different back-ends. openEO Platform has the goal of providing openEO as a service to EO data users, who can easily access all kinds of data and processing, share results, and potentially offer their own value-added services on top of the platform (see https://openeo.cloud).
Newly added features include single sign-on, data and process harmonization, integration of commercially offered datasets, shared accounting and billing procedures across the integrated back-ends, and marketplace offerings of user-generated applications and workflows. Driven by a number of challenging use cases, new processing capabilities are also being introduced and defined in openEO, including the generation of analysis-ready data (on demand and on the fly) following CARD4L recommendations, machine learning, regression modelling, sampling, and improved time-series modelling.
The federation on top of which openEO Platform is built includes existing and new features, allowing for a much more seamless user experience than previously possible. A comprehensive library of standardized, well-documented processes has been defined and a set of core processes to be supported by each federation member is currently in development. Alignments in the implementation and availability of those pre-defined processes are key for true interoperability in the federation. The same goes for the numerous data collections that are offered in openEO platform from all the currently participating back-ends such as Terrascope, the Earth Observation Data Centre (EODC) and Sentinel Hub via the Euro Data Cube. All data providers adopted metadata defined by the SpatioTemporal Asset Catalog (STAC) and moreover naming conventions of the elements defining this metadata such as collection names and band names are harmonized in the federation. Shared user identity management allows for a single point of entry for users, implemented through EGI-Check-in, which also allows for further integration with the European Open Science Cloud (EOSC).
All the newly implemented core components of the federation now enable the development of distributed processing of workflows. Federated back-ends providing the required data and processes will then be able to collectively work on larger jobs or complement each other in case of missing data or processing capabilities.
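To illustrate what "openEO as a service" looks like from the user side, the following sketch uses the openEO Python client against the openEO Platform entry point; the collection and band names shown are typical examples and may differ per back-end:

```python
import openeo

# Connect to the openEO Platform entry point and authenticate via the
# federation's single sign-on (EGI Check-in).
connection = openeo.connect("openeo.cloud").authenticate_oidc()

# Load a harmonized collection; collection and band names follow the
# federation-wide STAC naming conventions (typical values shown here).
cube = connection.load_collection(
    "SENTINEL2_L2A",
    spatial_extent={"west": 11.2, "south": 46.4, "east": 11.5, "north": 46.6},
    temporal_extent=["2021-06-01", "2021-08-31"],
    bands=["B04", "B08"],
)

# Apply pre-defined processes: NDVI, then a temporal mean composite.
ndvi = cube.ndvi(red="B04", nir="B08")
composite = ndvi.reduce_dimension(dimension="t", reducer="mean")

# Processing runs on the back-end; only the result is transferred.
composite.download("ndvi_summer_2021.tif")
```

Because every federation member implements the same pre-defined processes, the same script can in principle be pointed at a different back-end without modification.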
The EU Copernicus programme has established itself globally as the predominant spatial data provider, through the provision of massive streams of high resolution Earth Observation (EO) data. These data are used in environmental monitoring and climate change applications supporting European policy initiatives, such as the Green Deal and others. To date, there is no single European processing back-end that serves all datasets of interest, and Europe is falling behind international developments in big data analytics and computing. This situation limits the integration of these data in science and monitoring applications, particularly when expanding the applications to regional, continental, and global scales.
The C-SCALE (Copernicus - eoSC AnaLytics Engine, https://c-scale.eu) project federates European EO infrastructure services, such as ESA’s Sentinel Collaborative Ground Segment, the Copernicus DIASes (Data and Information Access Services under the EC), and independent nationally funded Earth Observation service providers, and European Open Science Cloud (EOSC) e-infrastructure providers.
The C-SCALE federation capitalises on the EOSC's capacity and capabilities to support Copernicus research and operations with large and easily accessible European computing environments. This allows the rapid scaling and sharing of EO data among a large community of users by increasing the service offering of the EOSC Portal.
By making such scalable Big Copernicus Data Analytics federated services available through EOSC and its Portal, and by linking the problems and results with experience from other research disciplines, C-SCALE helps to support the EO sector in its development and enables the integration of EO data into other existing and future domains within EOSC and beyond, e.g. the ESA openEO Platform activity (https://openeo.cloud). By abstracting the set-up of computing and storage resources from the end-users, C-SCALE enables the deployment of custom workflows to quickly and easily generate meaningful results. Furthermore, the project will deliver a blueprint setting up an interaction model between service providers to facilitate interoperability between commercial (e.g. DIASes) and public cloud infrastructures.
Pangeo reinvents the concept of 'platforms': it is no longer a fixed infrastructure offering a certain service, but rather a floating and adaptable ecosystem of components that offer the same user interface to access, manipulate and process scientific data at different levels, adapting to the underlying resources available. This pan-scale and cross-infrastructure capability can serve as the basis for authentic open interoperability among different solutions. It leaves users free to decide where it is most convenient for them to explore, prototype and exploit data, and later, according to their needs, finalise production on the most suitable resource. As its scalability is based on modularity, it offers several advantages over fixed platform setups, in that it a) lowers the entry barrier, effectively allowing a single computer to become the platform, b) enables scalability, as the capacity of a platform depends only on the power of the incorporated components, and c) paves the way for truly open federated platforms, as any compatible component or module can join independently of others and bring its assets.
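As a minimal illustration of this modularity, the same Xarray/Dask code runs unchanged from a laptop to an HPC or cloud deployment; only the data source and the Dask scheduler change. Synthetic data is used below, since a real store URL would be deployment-specific:

```python
import numpy as np
import pandas as pd
import xarray as xr

# Synthetic stand-in for a cloud-hosted dataset; on a Pangeo deployment
# the same workflow would start from xr.open_zarr("s3://bucket/store")
# (which additionally requires the s3fs package).
ds = xr.Dataset(
    {"sst": (("time", "lat", "lon"),
             20 + np.random.rand(365, 90, 180).astype("float32"))},
    coords={"time": pd.date_range("2020-01-01", periods=365, freq="D"),
            "lat": np.linspace(-89, 89, 90),
            "lon": np.linspace(-179, 179, 180)},
).chunk({"time": 30})          # Dask-backed chunks -> lazy evaluation

# Build the computation graph (nothing is computed yet) ...
monthly = ds["sst"].groupby("time.month").mean("time")

# ... then execute it on whatever scheduler is available: local threads
# on a laptop, or a distributed cluster on HPC/cloud.
result = monthly.compute()
print(result.sizes)            # {'month': 12, 'lat': 90, 'lon': 180}
```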
Breaking the barriers of confinement that other solutions impose will create a situation in which firms have to attract users not with the most advanced interface, but with the most cost-effective solution. Moreover, using an open-source, community-driven platform will not concentrate the market around a single competitor's capabilities, but will let anyone participate in defining and adapting the platform to future needs.
As one of Pangeo's pillars is the portability of the platform, it is open to a large variety of scenarios, from running in a securely confined environment to promoting the reproducibility of analyses following the Open Science paradigm. Both of these worlds share the same approach, which then benefits from the increase in stakeholders and, consequently, in development capacity. The same strategy can be found in the openness to different geospatial domains, where the lack of an a priori focus on a specific domain opens the possibility of cross-domain cooperation.
The entire project is driven by an ensemble of components, each of which already has a community that maintains it, documents it and plans its evolution. The role of the Pangeo community is focused on acting as a coordinating body between these communities and on covering the broader concerns of scientists and engineers, software and computing infrastructure. Having an already well-organised open community is the key to not reinventing the wheel and to decentralizing decisions about the future.
To prove the value of the concept, the platform is already in production over different infrastructures (among others AWS, Google Cloud, 2i2c, the JASMIN platform, the CNES HPC and IFREMER, across almost all continents) that form a federation of Pangeo deployments, where users are able to move from one to another without any constraint. Moreover, Pangeo shares many underpinning features with a multitude of platforms appearing on the market, allowing a minimum requirement level to be defined for being considered part of the Pangeo project, with the possibility to expand capabilities according to specific necessities.
A decentralized approach, flexibility and openness are key to meeting the challenges that we cannot foresee today and to creating a platform that can be maintained even in changed scenarios that cannot yet be envisioned, but that can be influenced by what we are building.
CNES EO Data & Services platform and its integration in the French Earth System Research Infrastructure 'Data Terra'

CNES has started the development of a unified portal for all its Earth observation data with the objective of better serving its users, in particular for transdisciplinary applications. This platform will be based on a common technical base, the CNES 'Platform', already under development; the first version is expected at the end of 2022. It will propose a single point of access to CNES EO data. It will include a knowledge portal allowing users to discover datasets outside their usual theme and to access all resources (documents, software, training, publications) to facilitate their reuse. It will also offer an access portal providing advanced distribution of EO data (downloading, interactive processing, Earth Analytics Labs, e.g. Pangeo notebooks, no-code interfaces, different flavours of datacubes, ...). The implementation of this platform will be accompanied by improved data management practices (generalization of DMPs, uniform application of a CNES data policy, ...). Given the very large volume of EO data (several tens of PB), this platform will promote moving the processing to the data. Thus, in particular through Earth Data Labs, users will be able to develop their algorithms and processing chains in a context that facilitates scaling up and switching to operational mode. The processing will then no longer be carried out on the user's machine but on infrastructures that are efficient from an energy point of view: first of all the CNES computing centre, but also, as the case may be, external HPC/HPDA centres (for example national HPCs or the future EuroHPC exascale systems). There are also plans to move closer to public clouds to better serve the private sector.

This EO Data & Services platform is integrated into a bigger one: Data Terra. Data Terra is a research infrastructure dedicated to Earth system observation data. Created in 2016, it falls within the French Ministry for Higher Education, Research and Innovation (MESRI) national roadmap. It mobilizes more than 170 full-time equivalents (FTE)/year, distributed over more than 400 people from the 19 partner organizations (CNRS, CNES, IFREMER, IRD, IGN, BRGM, ...). This research infrastructure is based on four data hubs covering the major compartments of the Earth system: land surfaces (THEIA), atmosphere (AERIS), oceans (ODATIS) and solid Earth (FORM@TER). Each data hub aims to facilitate access to satellite, airborne and in-situ data acquired and managed by research laboratories or federative structures (Universe Science Observatories (OSU), research federations, ...), by national infrastructures such as National Observing Services (SNO) and Environmental Research Observation and Experimentation Systems (SOERE), and by the oceanographic fleet, aircraft, balloons and space missions. Data Terra is a distributed platform (more than 30 data and services centres). Its backbone is made up of 8 main sites (Brest, Grenoble, Lille, Montpellier, Orléans, Paris, Strasbourg, Toulouse) linked by a high-performance network (GEANT/RENATER) and grid technology (e.g. iRODS or Rucio); the CNES platform is one of them. As with the CNES platform, Data Terra's ambition is to promote transdisciplinary work, beyond the compartments of the Earth system.
This is to address complex societal demands such as climate change and adaptation to it, natural risks, and coastal area monitoring and modelling. It will offer the same type of advanced services for data access (visualization, Earth analytics labs, systematic processing of data). Data Terra also aims to be integrated into the European data ecosystem (ENVRI, EOSC, Destination Earth, GAIA-X) and into international structures (GEO, CEOS, RDA, ...).
NASA EOSDIS represents the largest open-data holdings of Earth Observation (EO) data, currently at nearly 60 PB and expected to grow to over 150 PB over the next several years. As part of NASA's Open Science, and more specifically the Open Source Science initiatives, NASA EOSDIS has sought out means of improving the discoverability, access, and use of our EO data.
Starting in 2019, NASA began extending download-oriented, on-premises data hosting to directly accessible cloud hosting options. By leveraging cloud-native data formats such as Cloud Optimized GeoTIFFs (COGs), cloud-optimized metadata and access extensions like OPeNDAP's DMR++ and Zarr, and community-standard metadata including the SpatioTemporal Asset Catalog (STAC), NASA's EOSDIS data is now more readily available and accessible than ever before.
Use of NASA data, for both research and direct application, frequently exploits multiple products, potentially from multiple organizations, in combination with locally collected data. While challenges of data analysis such as alignment (projections, grids, etc.), QA application, and format conversions are not new to this space, the direct accessibility of cloud hosted data and compute near the data provides previously unachievable patterns for data and platform interoperability. Open source tooling, such as Pangeo, Dask, and XArray, coupled with an appropriately designed geospatial data lake and supporting infrastructure, now make multiproduct, highly scalable research and analysis possible.
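A minimal sketch of this access-in-place pattern, using the community pystac-client and rioxarray libraries; the endpoint, collection id, and asset key below are illustrative assumptions, and real requests also need Earthdata Login credentials (e.g. via a ~/.netrc file):

```python
from pystac_client import Client
import rioxarray  # registers the .rio accessor; reads COGs via GDAL

# Search a STAC catalog for cloud-hosted granules over an area of interest.
catalog = Client.open("https://cmr.earthdata.nasa.gov/stac/LPCLOUD")
search = catalog.search(
    collections=["HLSS30.v2.0"],          # assumed collection id
    bbox=[-105.0, 39.5, -104.5, 40.0],
    datetime="2021-07-01/2021-07-31",
)

for item in search.items():
    href = item.assets["B04"].href        # red band; assumed asset key
    # Lazily open the cloud-optimized GeoTIFF: only the header and the
    # requested windows are range-read over HTTP, not the whole file.
    red = rioxarray.open_rasterio(href, chunks=True)
    print(item.id, red.shape)
    break
```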
This talk explores the technologies, interaction patterns, opportunities, and challenges associated with embracing cutting edge community standards and how NASA offers open, high performance access-in-place capabilities for its Earthdata Cloud petabyte-scale geospatial data lake.
We present a new statistical analysis of small-scale (sub-decameter) plasma density irregularities in the topside ionosphere (325-1500 km altitude), using the high-cadence (1000 samples/sec) plasma current data from the imaging and rapid-scanning ion mass spectrometer (IRM) on board Swarm Echo (Swarm-E) during the first seven years of the mission (2014-2020). IRM is one of the instruments in the Enhanced Polar Outflow Probe (e-POP) scientific payload on Swarm-E. It measures low-energy ions (0.1-90 eV/q; 1-60 AMU/q) in the vertical plane of the spacecraft velocity (the vertical-ram plane) and resolves the mass-per-charge (M/q), energy-per-charge (E/q) and incident direction of each detected ion using time-of-flight (TOF) and hemispherical electrostatic analysis. In addition, it simultaneously measures the incident plasma (ion and electron) current on the sensor surface. The measurements produce a two-dimensional velocity phase space distribution for each major ion species and an overall ion composition distribution every 16 msec, and the total net plasma current every 1 msec. Using the latter, we present the statistical distributions of small-scale plasma density irregularities and their spectral characteristics down to sub-100 m scale, as well as their altitude, magnetic latitude, and magnetic local time dependences and their variations with solar and geomagnetic activity.
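For illustration, the kind of spectral characterization described above can be sketched as follows, with synthetic data standing in for the IRM plasma current series (this is not the actual processing pipeline):

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0    # IRM plasma current sampling rate (samples/s)
v_sc = 7.6e3   # nominal LEO orbital speed (m/s) -> ~7.6 m per sample

# Synthetic stand-in for a 60 s plasma current segment: a random walk,
# whose power spectrum falls off as f**-2.
rng = np.random.default_rng(0)
current = np.cumsum(rng.standard_normal(int(60 * fs)))

# Welch periodogram, then map temporal frequency to along-track scale.
f, psd = welch(current, fs=fs, nperseg=4096)
scale_m = v_sc / f[1:]                       # metres, ignoring f = 0

# Spectral index over 1-100 Hz (~7.6 km down to ~76 m along track)
band = (f >= 1.0) & (f <= 100.0)
slope = np.polyfit(np.log10(f[band]), np.log10(psd[band]), 1)[0]
print(f"spectral index ≈ {slope:.1f}")       # ≈ -2 for a random walk
```

At a 1 kHz cadence and ~7.6 km/s orbital speed, the Nyquist frequency corresponds to along-track scales of roughly 15 m, consistent with the sub-100 m regime discussed above.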
The TOLEOS project will provide new thermosphere density and crosswind observations derived from the accelerometer data of the CHAMP, GRACE, and GRACE-FO missions. The accurate calibration of the accelerometer data and the upgrade of the radiation pressure model are key elements of the project, which is funded by the Swarm Data, Innovation, and Science Cluster (Swarm DISC). To improve the radiation pressure modelling, we use ray-tracing techniques in combination with high-fidelity geometry models of the satellites, augmented with the thermo-optical properties of the surfaces. This substantially reduces the uncertainty stemming from satellite geometry modelling and shadowing effects. In addition, we introduce thermal models of the satellites to account for the radiation of heat from the satellites themselves. We will elaborate on the accelerometer data calibration and briefly explain the upgraded radiation pressure modelling. Further, we will compare the new thermosphere density and crosswind observations to existing observations to highlight the differences and demonstrate the effects of the upgraded processing.
Thermospheric neutral winds play an important role in the transport of momentum and energy in the upper atmosphere and affect the composition, dynamics and morphology of the ionospheric plasma. Although the general morphology of the winds is well understood, we are only starting to understand their variability. During the last decade it has become increasingly clear that the lower atmosphere is an important driver of thermospheric variability, which can, for example, be due to direct penetration of waves from the lower atmosphere into the ionosphere/thermosphere, secondary waves generated on the way, or internal feedback mechanisms in the coupled ionosphere-thermosphere system. Therefore, an understanding of thermospheric variability and its causes is critical for an improved understanding of the coupled ionosphere-thermosphere system and the lower atmosphere. The Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) mission provided cross-track (zonal) neutral winds at an altitude of around 260 km from November 2009 to October 2013. Due to the very small precession of the satellite's orbit in local time (LT), GOCE produced a large data set of zonal winds in the dawn-dusk sectors without the LT-season ambiguity intrinsic to many satellites. We have used GOCE zonal wind observations from low- to mid-latitudes obtained during geomagnetically quiet times to investigate the inter-annual, seasonal and spatial zonal wind variability near dawn and dusk. The temporal and spatial variability is presented as a variation about the zonal mean values and decomposed into its underlying wavenumbers using a Fourier analysis. The obtained wave features are compared between different years and different seasons. It is found that a significant part of the observed variability can be explained by waves up to wavenumber 5, and a clear inter-annual progression of the individual wave components can be observed. The obtained wave features will be compared and contrasted with model results to elucidate their underlying tidal components.
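A minimal sketch of such a wavenumber decomposition, applied to a zonal wind profile sampled uniformly in longitude (illustrative only; the actual analysis operates on the gridded GOCE data set):

```python
import numpy as np

def zonal_wavenumbers(wind, kmax=5):
    """Decompose a wind profile sampled uniformly in longitude into
    wavenumber components 1..kmax via FFT, after removing the zonal
    mean. Returns one-sided amplitude and phase per wavenumber.
    """
    wind = np.asarray(wind, dtype=float)
    anomaly = wind - wind.mean()            # variation about zonal mean
    spec = np.fft.rfft(anomaly) / wind.size
    amp = 2.0 * np.abs(spec[1:kmax + 1])
    phase = np.angle(spec[1:kmax + 1])
    return amp, phase

# Synthetic example: wave-1 (60 m/s) plus wave-3 (15 m/s)
lon = np.arange(0, 360, 5.0)
u = 60 * np.cos(np.radians(lon)) + 15 * np.cos(3 * np.radians(lon) + 1.0)
amp, phase = zonal_wavenumbers(u)
print(np.round(amp, 1))   # ≈ [60.  0. 15.  0.  0.]
```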
The Soil Moisture and Ocean Salinity (SMOS) mission is a European Space Agency (ESA) Earth Explorer (EE) launched on 2 November 2009; it remains in excellent operational status, with plans to continue its operational phase beyond 2022. Originally designed to perform global observations of soil moisture over land and salinity over the oceans, SMOS has gone beyond its original scientific objectives, demonstrating its suitability for new real-time applications such as sea wind speed estimation for hurricane tracking or measuring thin sea-ice thickness in the polar seas.
The payload of SMOS consists of the Microwave Imaging Radiometer using Aperture Synthesis (MIRAS) instrument, a passive microwave 2-D interferometric full-polarization radiometer operating at 1.413 GHz (wavelength of 21 cm). Because of the SMOS Sun-synchronous orbit geometry and the size of the MIRAS antennas, the Sun appears in the field of view, and a direct solar radio observation is captured in the image and removed as a source of interference during the data ground processing.
Here we describe our work to further process the removed Sun signal in order to derive calibrated solar flux measurements from this interference. The processor is applied to the latest version of the SMOS dataset (baseline V7, available since May 2021) and is suitable for near-real-time delivery of this L-band solar flux information.
We also present the results of our inter-comparison between the derived SMOS solar flux and radio-telescope measurements from the US Air Force Radio Solar Telescope Network (RSTN) and from the intercalibrated Nobeyama Radio Polarimeters of the National Astronomical Observatory of Japan for the entire solar cycle 24.
The validation results for the SMOS solar flux show a very good correlation with the ground-based measurements and demonstrate the capability of SMOS to follow the increase in mean solar activity during solar cycle 24 through the transition from quiet to active Sun, as well as to capture the impact of solar rotation on the microwave flux.
Based on these validation results, we discuss how the SMOS mission can provide a valuable contribution by collecting and delivering, in near real time, solar flux measurements for space weather and navigation applications in the 1-2 GHz band used by Global Navigation Satellite Systems (GNSS), flight radars and wireless communications, among others. All these services are known to be affected by intense solar activity; timely information on L-band solar radio bursts (SRBs) can contribute to better modelling or monitoring of anomalies in this frequency range.
As sufficiently recognised in the literature, the quality of SAR measurements may be affected by the propagation of the radar signals through the ionosphere. The introduced propagation delays and the dispersive nature of the ionosphere may cause strong geolocation errors, defocussing in range and azimuth in the radar images, as well as the local rotation of the polarisation reference of fully-polarimetric acquisitions. The impact of the ionosphere is more critical for lower frequencies and higher bandwidths.
These effects have been assessed for ESA's Earth Explorer Biomass mission [1], which will be the first P-band SAR in space and is expected to be launched in 2023. The baseline approach for the estimation of ionospheric perturbations in Biomass consists of exploiting the Faraday rotation estimates provided by the Bickel and Bates algorithm [2]. This approach necessitates very accurate Faraday rotation estimates (e.g., typically better than one tenth of a degree) if they are to be used for correcting the phase history of the data and not the depolarisation alone. Such high accuracies typically require heavy averaging and low-pass estimates, which might be incompatible with strong scintillation scenarios.
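For reference, the Bickel and Bates estimator can be sketched in a few lines. The sign convention depends on the assumed rotation sense, and this is an illustration rather than the Biomass processor implementation:

```python
import numpy as np

def faraday_rotation_bb(shh, shv, svh, svv):
    """One-look Faraday rotation estimate (radians) from a linear-basis
    scattering matrix via the circular-basis cross terms (Bickel & Bates;
    see also the formulation in [2]). Works element-wise on arrays.
    """
    # Cross terms of Z = R S R with R = [[1, 1j], [1j, 1]]
    z12 = 1j * (shh + svv) + (shv - svh)
    z21 = 1j * (shh + svv) - (shv - svh)
    # Sign matches the rotation F = [[cos w, sin w], [-sin w, cos w]]
    return -0.25 * np.angle(z12 * np.conj(z21))

# Self-check: rotate an identity scatterer by a known angle
w = np.deg2rad(10.0)
F = np.array([[np.cos(w), np.sin(w)], [-np.sin(w), np.cos(w)]])
M = F @ np.eye(2) @ F            # measured matrix under Faraday rotation
est = faraday_rotation_bb(M[0, 0], M[0, 1], M[1, 0], M[1, 1])
print(np.rad2deg(est))           # -> 10.0
```

In practice the product z12 * conj(z21) is spatially averaged before taking the argument, which is exactly the low-pass behaviour that becomes problematic under strong scintillation.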
As part of the Ground Processor Prototype (GPP) of the mission [3], we are developing an autofocus algorithm for the recovery of ionospheric phase signatures which can handle such strong scintillation cases. To support this development, we have enhanced the Biomass end-to-end performance simulator (BEEPS) [4] with tailored ionospheric and scene generators. The scene generator of BEEPS is extended to use real spaceborne SAR reflectivity images (e.g., Sentinel-1), which provide similar coverage and realistic contrast, essential for the tuning of the autofocus. For the ionospheric generation, BEEPS is able to create thin-layer realizations including background and turbulent contributions. The incorporation of the background part (based on NeQuick2 [5]) in the development environment is essential for the characterization of integration errors in azimuth. The turbulent part is based on the well-known Rino power law [6]. The superposition of the background and turbulent components is incorporated in the simulated data as locally-variant phase and delay perturbations, as well as Faraday rotation.
The classical references on autofocus typically target the recovery of image contrast, with only minor concern for the fidelity of the image phase after the correction [7]. A phase gradient autofocus approach for Biomass was suggested in [8] to mitigate the effect of ionospheric irregularities along the synthetic aperture. This approach has the limitation of requiring the presence of point-like targets within the image, which makes it a difficult choice for operational environments. We propose in this paper a combined approach based on a map-drift kernel [7], therefore capable of delivering robust phase error estimates over extended areas in the absence of point-like targets, while at the same time integrating the information of any point-like or coherent scatterer present in the image [9] with the purpose of locally improving the estimation accuracy. Due to the similarity of the phase perturbations for all polarimetric channels, the suggested algorithm integrates the autofocus estimates of all four polarimetric channels into a single inversion step, which can also be supported by the residual Faraday rotation estimates as postulated in [10]. An assessment of the usefulness of estimates of the dispersion in the integration step of the autofocus will be provided in the final version of the paper. We will also show how the algorithm uses the residual errors introduced after each iteration to optimally generate the ionospheric phase error estimates.
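To make the map-drift kernel concrete, the following sketch estimates a quadratic azimuth phase error from the Doppler displacement between two half-aperture looks. This is a single-iteration, single-coefficient illustration; the operational algorithm estimates higher-order, locally-variant errors:

```python
import numpy as np

def map_drift_quadratic(s, fs):
    """Estimate the coefficient a of a quadratic azimuth phase error
    phi(t) = a * t**2 (rad) from the relative Doppler displacement of
    two half-aperture looks -- the classic map-drift kernel.
    """
    n = s.size
    half = n // 2
    # Intensity spectra ("looks") of the two sub-apertures
    look1 = np.abs(np.fft.fft(s[:half])) ** 2
    look2 = np.abs(np.fft.fft(s[half:2 * half])) ** 2
    # Circular cross-correlation of the looks; its peak lag is the
    # Doppler separation induced by the quadratic error
    xc = np.fft.ifft(np.fft.fft(look1) * np.conj(np.fft.fft(look2)))
    lag = int(np.argmax(np.abs(xc)))
    if lag > half // 2:
        lag -= half                       # wrap to a signed lag
    delta_f = lag * fs / half             # Hz; f(look1) - f(look2)
    T = n / fs                            # full aperture time
    return -2.0 * np.pi * delta_f / T     # a, in rad/s^2

# Self-check with a known quadratic error on a synthetic history
fs, n = 1000.0, 4096
t = (np.arange(n) - n / 2) / fs
a_true = 40.0                             # rad/s^2
print(map_drift_quadratic(np.exp(1j * a_true * t ** 2), fs))
# -> ~40 (coarse: the estimate is quantized to Doppler bins)
```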
References
[1] Rogers, Neil C., et al., "Impacts of ionospheric scintillation on the BIOMASS P-band satellite SAR," IEEE Transactions on Geoscience and Remote Sensing 52.3 (2013): 1856-1868.
[2] Kim, Jun Su, et al., "Correcting distortion of polarimetric SAR data induced by ionospheric scintillation," IEEE Transactions on Geoscience and Remote Sensing 53.12 (2015): 6319-6335.
[3] Prats-Iraola, Pau, et al., "The BIOMASS ground processor prototype: An overview," EUSAR 2018; 12th European Conference on Synthetic Aperture Radar. VDE, 2018.
[4] Sanjuan-Ferrer, Maria Jose, et al., "End-to-end performance simulator for the BIOMASS mission," EUSAR 2018; 12th European Conference on Synthetic Aperture Radar. VDE, 2018.
[5] Nava, B., P. Coisson, and S. M. Radicella, "A new version of the NeQuick ionosphere electron density model," Journal of Atmospheric and Solar-Terrestrial Physics 70.15 (2008): 1856-1862.
[6] Rino, C. L., "A power law phase screen model for ionospheric scintillation: 1. Weak scatter," Radio Science 14.6 (1979): 1135-1145.
[7] Carrara, Walter G., R. S. Goodman, and R. M. Majewski, "Spotlight Synthetic Aperture Radar: Signal Processing Algorithms," Artech House (1995).
[8] Li, Zhuo, et al., "Performance analysis of phase gradient autofocus for compensating ionospheric phase scintillation in BIOMASS P-band SAR data," IEEE Geoscience and Remote Sensing Letters 12.6 (2015): 1367-1371.
[9] Li, Dexin, "Research on the technology of signal processing and simulation of geosynchronous SAR," PhD thesis.
[10] Gracheva, Valeria, et al., "Combined Estimation of Ionospheric Effects in SAR Images Exploiting Faraday Rotation and Autofocus," IEEE Geoscience and Remote Sensing Letters (2021).
Ionized plasma in the high-latitude ionosphere-thermosphere is in constant motion. Plasma flow in the ionospheric F region is driven by magnetosphere-ionosphere coupling, and the original energy source stems from the coupling between the solar wind and the magnetosphere. Typically, two convection cells are formed in the high-latitude ionosphere, one centered on the dusk side and one on the dawn side. Several statistical models exist which estimate the flow velocities based on solar wind properties and geophysical conditions.
However, at times very large plasma flow velocities have been measured, exceeding the statistical average velocity values several times over, up to tenfold. The horizontal spatial extent of these strong flows is expected to be relatively narrow, and at the moment their duration is unclear. To study these events, we utilize the following measurements. Swarm, ESA's Earth Explorer 5 mission, is a constellation of three satellites in circular polar orbits at 450-515 km altitude. Swarm has made highly accurate measurements of Earth's magnetic field since November 2013 and carries several other instruments as well, which provide information about the local plasma environment and the field-aligned electric currents flowing into and away from the ionosphere. The EISCAT incoherent scatter radars, the UHF and VHF radars in Tromsø (69.6°N latitude) and the EISCAT Svalbard radar (ESR, 79°N latitude) near Longyearbyen, have been operational for several decades. The EISCAT radars yield electron density, electron and ion temperature, and ion drift measurements over a large altitude range in the ionosphere.
In this study, we carry out a search of conjugate Swarm satellite Thermal Ion Imager (TII) measurements over the EISCAT Tromsø and Svalbard radars to study the characteristics of high-speed plasma flows and to confirm their existence by two independent measurements. High plasma speeds produce increased ion-neutral frictional heating, which can be seen as an increase of ion temperature, measured by the EISCAT radar. Increased ion temperature affects the chemistry of the ionosphere, and the flow channels play a role in the electrodynamical magnetosphere-ionosphere coupling. By combining satellite and ground-based observations, we will be able to get information about the physical processes, spatial and temporal scales, as well as geomagnetic conditions of frictional heating events. These events are interesting also from the space weather perspective, because heating of the atmosphere may produce increased satellite drag at low-Earth orbits (LEO).
Wine grapes and almonds are some of the most important specialty crops produced in California and are largely irrigated. Acreage of these woody perennials continues to expand while, at the same time, water availability in California poses a significant challenge to meeting the competing needs of agriculture, municipalities, and the environment. The issue is already reaching critical levels given the protracted California drought of 2012-2016 and the extreme drought of 2021, which saw historic drawdown of major reservoirs in California and the western US; with climate change, it is projected to become a still greater problem in the coming years. The need for monitoring crop water use, or evapotranspiration (ET), is critical for maximizing water use efficiency in irrigated agriculture. The major goal of the GRAPEX (Grape Remote sensing Atmospheric Profile Evapotranspiration eXperiment) project has been to refine and apply a multi-scale remote sensing ET toolkit for mapping crop water use and crop stress for improved irrigation scheduling and water management in vineyards in the Central Valley of California. The plan is to provide the ET toolkit output to wineries, and eventually to orchard growers throughout the state of California, for improving water management and irrigation scheduling through the OpenET platform. Data sources, models, and technologies include Earth observations from the GOES, VIIRS, MODIS, Sentinel and Landsat satellites, together with unmanned aerial vehicles (UAVs), applied to energy balance modeling techniques utilizing land surface temperature. We have also recently been evaluating a spectral-based ET approach using Sentinel-2 data to increase the frequency of high-resolution ET mapping. Model results have been validated with biophysical, soil moisture and micrometeorological measurements of fluxes, from leaf to canopy to whole vineyard blocks, at selected experimental vineyards. The results of GRAPEX and the application of the ET toolkit for irrigation scheduling will be discussed, along with the successes and remaining issues that need to be resolved in order to have an operational Earth observation system for irrigation management of perennial crops.
The world’s population is expected to increase by 2 billion people by 2050. To feed these additional people, a commensurate increase in food and water demands is expected. However, our current hydric and agricultural systems are already under unprecedented stress, which is projected to increase due to climate change. Remote sensing technologies such as satellite observations can help to optimize a farm’s resources and maximize yield when integrated into precision agriculture strategies. These strategies necessitate accurate and precise information on crop health and water use at high spatiotemporal resolutions (i.e., daily at less than 10 m). Nevertheless, until a few years ago, using satellite observations for precision agriculture was not feasible: there was a compromise between retrieval frequency and achievable spatial resolution. That is, it was possible to get either high spatial resolution images occasionally or frequent coarse spatial resolution observations. The development and mass launching of nanosatellites (i.e., CubeSats) that use off-the-shelf components has relaxed such constraints. Yet, no single satellite system has achieved the spatiotemporal resolution and data quality required to drive precision agriculture insights. Data fusion approaches can circumvent this limitation by leveraging the synergies between existing satellite platforms. In particular, Planet’s CubeSat constellation of over 180 satellites is well suited for closing the spatiotemporal resolution gap when combined with rigorously calibrated observations from traditional satellite platforms (e.g., Sentinel-2, Landsat 8, MODIS). Planet Fusion represents a novel data fusion approach for producing daily analysis-ready surface reflectance (SR) data (i.e., data that is ready to use for quantitative applications) at 3 m spatial resolution by leveraging multi-sensor (PlanetScope, Sentinel-2, Landsat 8, MODIS, VIIRS) data products. Here, we show the potential of combining Planet Fusion SR data with traditional Earth Observation data (e.g., Landsat, Sentinel, ECOSTRESS) to produce daily evaporation maps at 3 m spatial resolution in the context of precision agriculture. The results are evaluated using latent heat flux measurements from the AmeriFlux and FLUXNET eddy covariance networks for a range of crop types (e.g., alfalfa, maize). The study demonstrates the advantages and additional information gained from the daily crop water use product compared with coarser spatiotemporal products (e.g., Landsat 8, Sentinel-2).
Earth Observation-based modeling of evapotranspiration at a daily scale requires a high revisit frequency. Although the integrated use of the Sentinel-2 and Landsat 8 sensors offers high temporal coverage (< 5 days), their utility in modeling evapotranspiration (ET) should be evaluated taking into account their spectral and spatial differences.
In this study, an ET modeling framework based on the Penman-Monteith (PM) and Shuttleworth-Wallace (SW) combination equation models has been developed, with special regard to sparse canopy conditions.
The combination equation models use spectral data in the visible, near-infrared, and shortwave infrared from the Operational Land Imager (OLI) and MultiSpectral Instrument (MSI) sensors to infer the input data required for characterizing the crop surface in the calculation of evapotranspiration, by means of a modified combination equation as proposed by [1]. This modified method is based on a modulation of the surface resistances in the combination equation for ET, incorporating SWIR data in the assessment of the water status of the canopy and soil ensemble.
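For reference, the standard Penman-Monteith combination equation reads as follows (the approach of [1] modulates the surface resistances in this type of equation using SWIR-derived canopy and soil water status):

\[
\lambda E = \frac{\Delta\,(R_n - G) + \rho_a c_p\,(e_s - e_a)/r_a}{\Delta + \gamma\left(1 + r_s/r_a\right)}
\]

where \(\lambda E\) is the latent heat flux, \(\Delta\) the slope of the saturation vapour pressure curve, \(R_n - G\) the available energy, \(\rho_a c_p\) the volumetric heat capacity of air, \(e_s - e_a\) the vapour pressure deficit, \(r_a\) the aerodynamic resistance, \(r_s\) the surface resistance, and \(\gamma\) the psychrometric constant.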
This approach is being used in the COALA project, funded by the Horizon 2020 program of the European Union with the aim of developing Copernicus Earth Observation-based information services for irrigation management in Australia, building on consolidated experience of past EU projects and existing operational irrigation advisory services (https://www.coalaproject.eu/).
To evaluate the impact of the different spectral and geometric characteristics of Sentinel-2 and Landsat-8 data, a set of coincident acquisitions has been processed from SciHub (Level-2A) and USGS EarthExplorer (C2L2 and C2L1), respectively.
The surface parameters required as input to these models (hemispherical shortwave albedo, Leaf Area Index (LAI), and soil-canopy water status) have been derived consistently from both datasets.
In particular, LAI has been derived using the latest release of the Sentinel Application Platform toolbox (SNAP 8.0), whose “Biophysical Processor” tool provides three different sets of coefficients, specific to Sentinel-2A, Sentinel-2B, and Landsat-8. More specifically, LAI is estimated through an Artificial Neural Network inversion of the PROSPECT+SAILH radiative transfer model, thus taking full advantage of the spectral resolution [2].
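As an illustration of batch LAI production, the Biophysical Processor can be invoked through SNAP's gpt command-line tool from Python. This is a sketch, not the study's actual processing chain; the operator and parameter names below follow SNAP 8.0 conventions but should be verified against the installed version (e.g. via `gpt BiophysicalOp -h`), and the file names are placeholders.

```python
# Sketch: batch LAI retrieval via SNAP's Graph Processing Tool (gpt).
import subprocess

def derive_lai(resampled_product: str, output: str) -> None:
    """Run the SNAP Biophysical Processor (ANN inversion of PROSPECT+SAILH)
    on a Sentinel-2 L2A product resampled to a single resolution."""
    subprocess.run(
        ["gpt", "BiophysicalOp",
         "-PcomputeLAI=true",    # request LAI among the biophysical outputs
         "-t", output,           # target product (BEAM-DIMAP by default)
         resampled_product],     # source product path
        check=True)

derive_lai("S2A_MSIL2A_resampled.dim", "lai_output.dim")
```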
Cross-comparison with flux tower observations acquired over irrigated California vineyards during the USDA GRAPEX (Grape Remote-sensing Atmospheric Profile and Evapotranspiration eXperiment) experiment [3] has been performed to evaluate the performance of the proposed approaches.
The current research envisages new operational perspectives in the utilization of the virtual constellation composed of the two Sentinel-2 and the two Landsat platforms, also considering the availability of the Harmonized Landsat and Sentinel-2 (HLS v2.0) dataset.
[1] D’Urso G., Falanga Bolognesi S., Kustas W.P., Knipper K.R., Anderson M.C., Alsina M.M., Hain C.R., Alfieri J.G., Prueger J.H., Gao F., McKee L.G., De Michele C., McElrone, A.J., Bambach-Ortiz, N., Sanchez L.A., & Belfiore O.R. (2021). Determining Evapotranspiration by Using Combination Equation Models with Sentinel-2 Data and Comparison with Thermal-Based Energy Balance in a California Irrigated Vineyard. Remote Sensing, 13(18), 3720.
[2] Weiss, M.; Baret, F. S2ToolBox Level 2 Products: LAI, FAPAR, FCOVER.
[3] USDA. GRAPEX. Available online: https://www.ars.usda.gov/northeast-area/beltsville-md-barc/beltsville-agricultural-research-center/hydrology-and-remote-sensing-laboratory/docs/grapex/grapex-home/
WaterSENSE: Water Use Monitoring and Assessment Services
The Murray-Darling Basin in Australia, the initial focus of the H2020 WaterSENSE project, produces 39% of the country’s agricultural output, uses as much as 66% of its water for irrigated agriculture, has over 9,200 irrigation businesses and supports an agriculture industry worth AUD 24 billion (15 billion euro) annually. The freshwater systems of the Darling were listed as drought-endangered in 2018, with a significant estimated loss in production and the loss of 6,000 jobs. Failure to resolutely address excess upstream diversions threatens the viability of the river, the fish, and the communities that depend on the river for their livelihoods and wellbeing.
The lack of capacity to proactively monitor and identify landscape and hydrological changes, including allegations of water theft, has recently made headline news across Australia. This has triggered a comprehensive set of reforms and operational improvements for water management across New South Wales, and renewed efforts to improve water availability and use management, regulation, compliance and enforcement.
Unfortunately, the massive areal coverage makes this monitoring challenging, and there is limited access to data and tools that allow for state-wide operational monitoring of water consumption. Traditional methods to monitor compliance (e.g. installation of water meters) take years to implement and involve high costs and human resources to maintain. There is therefore an urgent need for accurate, inexpensive, and rapid monitoring tools to gain improved insight into water use.
WaterSENSE addresses this challenge by developing water-monitoring capabilities able to support effective water management in areas ranging from irrigated fields and districts to entire river basins. The goal of WaterSENSE is to develop a modular, operational, water-monitoring system built on Copernicus Earth Observation data in order to provide water managers with a toolbox of reliable and actionable information on water availability and water use in support of sustainable water management and transparency across the entire water value chain.
Water Use Monitoring and Auditing Services (WUMAS)
Central to WaterSENSE is the Water Use Monitoring and Auditing Services (WUMAS) component. It provides water use monitoring information at different temporal and spatial resolutions, based on Copernicus EO data, hydrological models and local data. Its modular design and configurability to user needs and circumstances set the stage for our research and innovation action. Our research focuses on the different technology elements and their integration into a single system. The information is made available in online dashboards through the HydroNET platform, a SaaS decision support system for water managers, where the estimated irrigation water use is compared with water permits.
During the session we will present our EO-related research activities focused on providing insight into irrigated water use. Based on satellite information and (in-situ) weather data, high-resolution (10 m) remote sensing-based evapotranspiration (ET) data is calculated using the ETLook energy balance model. These data are used in a novel way to estimate irrigation water use by comparing the ET of irrigated agricultural pixels to the weighted average ET of a subset of natural Hydrological Similar Pixels (a minimal sketch of this comparison follows the list below). The accuracy of the results also depends on the accuracy of other (EO-based) information sources. We will present our research involving EO data in relation to:
• Improved rainfall and weather information
• Irrigated land use detection
• Farm dam volume change indication and estimation
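As referenced above, the per-pixel irrigation estimation step can be sketched as follows, assuming ET values in mm per time step and precomputed similarity weights; the function and variable names are illustrative, not the operational implementation described in Brombacher et al. (submitted).

```python
import numpy as np

def irrigation_estimate(et_irrigated_pixel, et_similar_pixels, weights):
    """Estimate irrigation water use of one agricultural pixel as the
    difference between its actual ET and the weighted mean ET of its natural
    Hydrological Similar Pixels (all values in mm per time step)."""
    et_natural = np.average(et_similar_pixels, weights=weights)  # weighted reference ET
    return max(et_irrigated_pixel - et_natural, 0.0)             # negative differences -> no irrigation

# Example: one irrigated pixel compared with three natural similar pixels
print(irrigation_estimate(5.2, np.array([3.0, 3.4, 2.8]), np.array([0.5, 0.3, 0.2])))
```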
We will show the results of the service demonstration, which takes place from January to December 2022 over the Namoi Catchment in New South Wales, together with our experience with parts of the service in the Netherlands and South Africa.
References:
Bastiaanssen, W.G.M., Cheema, M.J.M., Immerzeel, W.W., Miltenburg, I.J., and Pelgrum, H. (2012). Surface energy balance and actual evapotranspiration of the transboundary Indus Basin estimated from satellite measurements and the ETLook model. Water Resources Research, 48(11).
Brombacher, J., Rezende de Oliveira Silva, I., Degen, J., Pelgrum, H. (submitted) A Novel Evapotranspiration Based Irrigation Quantification Method Using the Hydrological Similar Pixels Algorithm
Einfalt, T., Lobbrecht, A. (2012). Compositing international radar data using a weight-based scheme. IAHS Publ. 351, p.20 - 25.
FAO (2018). WaPOR Database Methodology: Level 1. Remote Sensing for Water Productivity Technical Report: Methodology Series. Rome, FAO. 72 pages. Licence: CC BY-NC-SA 3.0 IGO.
FAO and IHE Delft. (2019). WaPOR quality assessment. Technical report on the data quality of the WaPOR FAO database version 1.0. Rome. 134 pp
Fuentes, I., Scalzo, R., Vervoort, R.W. (2021). Volume and uncertainty estimates of on-farm reservoirs using surface reflectance and LiDAR data. Environmental Modelling & Software Volume 143, September 2021, 105095
Hunink, J.E., Contreras, S., Soto-García, M., Martin-Gorriz, B., Martinez-Álvarez, V., Baille, A. (2015). Estimating groundwater use patterns of perennial and seasonal crops in a Mediterranean irrigation scheme, using remote sensing. Agricultural Water Management, 162, 47-56.
Lobbrecht, A., Einfalt, T., Reichard, L., Poortinga, I. (2012) Decision support for urban drainage using radar data of HydroNET-SCOUT. IAHS Publ. 351, p.626 - 631.
Pelgrum, H., Miltenburg, I.J., Cheema, M.J.M., Klaasse, A., Bastiaanssen, W.G.M. (2011). ETLook: A novel continental evapotranspiration algorithm. Remote Sensing and Hydrology, Jackson Hole, Wyoming, USA.
Strehz, A., Einfalt, T., Alderlieste, M. (2019). HydroNET-SCOUT – a web portal for access to quality-controlled precipitation data [Ein Webportal zum Zugriff auf qualitätsgeprüfte Niederschlagsdaten]. Tag der Hydrologie, 28-29 March 2019, Karlsruhe.
van Eekelen, M.W., Bastiaanssen, W.G.M., Jarmain, C., Jackson, B., Ferreira, F., van der Zaag, P., Saraiva Okello, A., Bosch, J., Dye, P., Bastidas-Obando, E., Dost, R.J.J., Luxemburg, W.M.J. (2015). A novel approach to estimate direct and indirect water withdrawals from satellite measurements: A case study from the Incomati basin. Agriculture, Ecosystems & Environment, 200, 126-142.
Yang, Y., Guan, H., Long, D., Liu, B., Qin, G., Qin, J., Batelaan, O. (2015). Estimation of Surface Soil Moisture from Thermal Infrared Remote Sensing Using an Improved Trapezoid Method. Remote Sensing, 7, 8250-8270.
Tensions between competing water uses are a common issue in many regions of the world. Agricultural use accounts for a large share of the water demand and is key for food security and socio-economic stability in rural areas. Hence, ensuring efficient use of irrigation water is a common goal of water policies. On the other hand, managing irrigation on farms is not a trivial task, since crop water requirements are site-specific and vary in time. In this context, the availability of EO data opens the opportunity to develop tools for the supervision and management of irrigation, scalable from farms to districts and basins. Time series of observed biophysical parameters of the vegetation and estimates of actual crop evapotranspiration (ETa) are promising resources for these applications. Assimilating those data into models of crop development and soil water balance would enhance their ability to assess irrigation performance and to support management decisions. Here we describe an approach based on digital twins that assimilate EO data and simulate the water balance of the soil-crop system at each individual plot. The goal is to obtain a dynamic view of irrigation performance scaling from individual plots to the basin, quantifying in real time the progress of crop growth and the seasonal water balance, including forecasts of the forthcoming water demand. This approach is being implemented in the lower Ter River basin, over an area of 675 km2 covering 41 municipalities. A separate digital twin was defined for each of more than 17,000 agricultural plots listed in the Land Parcel Identification System. For each of these digital twins, the agricultural scenario was set according to open data from the EU CAP’s Single Farm Payment and a soil map of the area. This included the list of crops declared from 2015 to 2021, the irrigation method and the soil class. From these basic categorical data, more detailed parameters of the crop, soil and irrigation method were assigned according to the description of actual agricultural scenarios in the area. The development of the crop and its soil water balance at each individual plot is simulated in real time, using a customized model based on a rationale similar to FAO’s AquaCrop, with additional adaptations for permanent crops, localized irrigation and discontinuous canopies. Simulations are updated every day, using online weather data from the Meteorological Service of Catalonia. In parallel, as soon as new Sentinel-2 images are available, fAPAR and LAI are computed with the Biophysical Processor available in the SNAP software and assimilated into the model. The outputs are maps and time series of the estimated ETa, irrigation and available soil water at each plot. The maps are updated daily. Time series cover the whole year, on a weekly basis, including forecasts for the remaining part of the year.
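A minimal sketch of such a daily plot-level update is given below, assuming a simple AquaCrop-like bucket water balance; the names and the schematic LAI rescaling rule are illustrative placeholders, not the project's actual model.

```python
from dataclasses import dataclass

@dataclass
class PlotState:
    """Minimal per-plot digital twin state (all depths in mm)."""
    soil_water: float   # current root-zone available water
    capacity: float     # maximum available water (field capacity minus wilting point)

def daily_update(state, rain, irrigation, et_actual, lai_from_s2=None):
    """One daily step of a plot-level soil water balance. When a new
    Sentinel-2-derived LAI is available it is used here to rescale ET,
    shown only schematically as a stand-in for the assimilation step."""
    if lai_from_s2 is not None:
        et_actual *= min(lai_from_s2 / 3.0, 1.2)   # schematic rescaling, not the project's scheme
    water = state.soil_water + rain + irrigation - et_actual
    drainage = max(water - state.capacity, 0.0)    # excess above capacity percolates
    state.soil_water = min(max(water, 0.0), state.capacity)
    return drainage
```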
The conflicting use of water is becoming more and more evident, also in regions that are traditionally rich in water. With the world’s population projected to increase to 8.5 billion by 2030, the simultaneous growth in income will imply a substantial increase in demand for both water and food. Climate change impacts will further stress water availability, exacerbating conflicts over its use. The agricultural sector is the biggest and least efficient water user, accounting for around 24% of total water use in Europe and peaking at 80% in the southern regions.
The objective of this study is to improve water use efficiency at farm and irrigation-district level by developing an operational procedure for parsimonious irrigation, optimizing irrigation water use and the related water productivity under different agronomic practices and supporting different levels of water users.
The SMARTIES optimized irrigation strategy, based on soil moisture (SM) and crop stress thresholds, has been implemented in the Chiese (northern Italy) and Capitanata (southern Italy) Irrigation Consortia. The system is based on the energy-water balance model FEST-EWB (Flash-flood Event-based Spatially-distributed rainfall-runoff Transformation - Energy Water Balance model), which is calibrated pixel-wise against remotely sensed land surface temperature (LST), with mean areal absolute errors of about 3 °C, and validated against locally measured SM and latent heat flux (LE), with RMSE values of about 0.07 and 40 W m−2, respectively.
Optimized irrigation volumes are assessed based on a soil moisture threshold criterion, which reduces exceedances of the field capacity threshold, thereby reducing the percolation flux and saving irrigation volume without affecting evapotranspiration and, hence, crop production. The implemented strategy has shown significant irrigation water savings, even in areas where water is traditionally used carefully.
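A minimal sketch of such a threshold rule follows; the thresholds, root-zone depth and refill fraction are illustrative placeholders, not the calibrated FEST-EWB values.

```python
def irrigation_advice(sm, stress_threshold, field_capacity,
                      root_depth_mm=600.0, refill_fraction=0.9):
    """Threshold-based irrigation rule: irrigate only when volumetric soil
    moisture (m3/m3) falls below the crop stress threshold, and refill to a
    fraction of field capacity so that exceedances of field capacity (and
    hence percolation losses) are avoided. Returns an irrigation depth in mm."""
    if sm >= stress_threshold:
        return 0.0                                    # no stress: no irrigation
    deficit = refill_fraction * field_capacity - sm   # volumetric deficit (m3/m3)
    return max(deficit, 0.0) * root_depth_mm          # depth over the root zone

print(irrigation_advice(sm=0.15, stress_threshold=0.20, field_capacity=0.30))  # ~72 mm
```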
The effect of the optimization strategy has been evaluated in terms of reductions in irrigation volumes and timing, from about 500 mm over the crop season in the Capitanata area to about 1000 mm in the Chiese district, as well as in terms of the cumulated drainage and ET fluxes. The irrigation water use efficiency (IWUE) indicator is found to be higher with the SIM strategy than with the traditional irrigation strategy: by about 35% for tomato fields in southern Italy and 80% for maize fields in northern Italy.
The activity is part of the European projects RET-SIF (www.retsif.polimi.it) and SMARTIES (www.smarties.polimi.it).
MetOp-Second Generation (MetOp-SG) is a follow-on system to the first generation series of MetOp (Meteorological Operational) satellites, which currently provide operational meteorological observations from polar orbit. It is part of the EUMETSAT Polar System Second Generation (EPS-SG). From a space segment perspective, MetOp-SG consists of two series of satellites, i.e. Satellite-A and Satellite-B, with three satellites in each series. The aim of the mission is to ensure continuity of the essential operational meteorological observations from polar orbit in the 2022-2042 timeframe, to improve the accuracy, resolution and dynamic range of the measurements, and to provide new measurements/instruments compared to EPS.
The paper will present the MicroWave Sounder (MWS) instrument. MWS is a cross-track scanning total-power microwave radiometer embarked on Satellite-A. It provides temperature and water vapour profiles and, in addition, information on cloud liquid water. MWS has 24 channels in total, covering the frequency range from 23 GHz up to 230 GHz. All MWS channels are measured with a single polarisation (QV or QH). The MWS footprint ranges from 40 km at the lowest frequencies to 17 km at the higher frequencies. The MWS instrument scans +/-49 degrees around nadir and provides contiguous or overlapping spatial sampling for all channels. MWS has a non-uniform scanning profile, which maximises the scene viewing time. A quasi-optical system is used to co-locate all channels into one “main beam”.
The MWS structural and thermal model (STM) has completed all testing at instrument level and at complete satellite level, confirming the mechanical and thermal design. The MWS engineering model (EM) test campaign is completed. The MWS Proto-Flight Model (PFM) is undergoing its environmental and calibration test campaign and will be shipped to the Satellite-A prime for integration, testing and launch preparation. For the MWS PFM and the recurrent models FM2 and FM3, a complete calibration system was developed to calibrate the MWS instrument in vacuum over the operational temperature range and over the complete field of view. Final instrument verification will be performed during the thermal vacuum test campaign and the yearly storage health check of MetOp-SG Satellite-A. For these campaigns, specific calibration targets have been developed as well.
This paper will present the MWS instrument, including the test results available. The MWS calibration approach will be described at instrument level. In addition, based on on-ground measurement results, a final performance prediction for flight will be given.
The contribution of our colleagues at RAL, Deimos, TAS Italy, DA Design, Norspace, SENER and Thomas Keating is key to meet the challenging MWS instrument mission requirements.
MetOp-SG is the follow-on of the first generation series of Meteorological Operational (MetOp) satellites, which currently provides operational meteorological observations from polar orbit. The MetOp-SG Programme is implemented in collaboration with EUMETSAT: ESA is developing the prototype satellites and associated instruments and will procure, on behalf of EUMETSAT, the recurrent satellites. MetOp-SG will consist of two series of satellites, A and B, with three satellites in each series, each designed for an in-orbit lifetime of 7.5 years to provide continuous operation for more than 20 years.
The paper will present a polarimeter, namely the Multi-viewing, Multi-channel, Multi-polarization Imager (3MI), embarked on Sat-A.
Polarimetry is considered today to be a crucial technique in atmospheric remote sensing for the characterization of airborne aerosol parameters: particle types and sizes, aerosol optical depths, refractive index, sphericity, height index and absorption. When used as constraints in climate models, such products will improve air quality indices and aerosol load masses for different particle sizes, and therefore the characterization of ambient particulate air pollution. Since aerosol properties can be fully and unambiguously derived only by measuring top-of-atmosphere polarized radiances at several wavelengths and several viewing angles, 3MI is designed precisely for this purpose. 3MI will also contribute to Numerical Weather Prediction and, as a secondary objective, to improved cloud characterization in terms of microphysics (phase and effective particle size), height and optical depth. 3MI is therefore a key player in the future of climatology, air quality and pollution characterization.
The 3MI multi-channel and multi-polarization properties are achieved with successive acquisitions of polarized and unpolarized spectral bands performed using a rotating filter wheel, which interposes and changes the band-pass filters in the optical path of both the VNIR and SWIR channels, the VNIR channel covering 410 nm to 910 nm and the SWIR channel 1370 nm to 2130 nm. The filter wheel includes two concentric coronas of filter-stack slots: the external one is dedicated to the VNIR channels (21 slots and 1 shutter) and the internal one to the SWIR channels (9 slots and 1 shutter). The VNIR slots allow the acquisition of 6 spectral bands at 3 different polarization axis orientations (+60°, 0°, -60°) plus 3 unpolarized spectral bands. The SWIR slots allow the acquisition of 3 spectral bands at 3 different polarization axis orientations (+60°, 0°, -60°). The concentricity of the two coronas allows co-registration between the VNIR and SWIR channels.
The 3MI multi-viewing property is achieved by performing forward, nadir and backward observations of the same target on Earth at different instants, using a push-broom scanning concept, a very wide-field optical design and matrix focal planes. The VNIR and SWIR objectives are designed to provide wide-angle, unvignetted images (+/-57° for VNIR and +/-53.5° for SWIR), thus allowing the acquisition of 14 views in the VNIR and 12 views in the SWIR for the same ground target.
In parallel, an end-to-end (E2E) simulator combining an Instrument Data Simulator (IDS) and a Ground Processor Prototype (GPP) has been developed. The GPP will be exploited to support instrument on-ground calibration and performance verification up to Level 1b1.
This paper will present the 3MI instrument design, main performance and industrial progress. The 3MI on-ground calibration will be presented as well, with a prediction of the final performance based on the PFM calibration campaign results.
MetOp-SG is the follow-on to the current, first generation, series of MetOp satellites, which is now established as a cornerstone of the global network of meteorological satellites. MetOp-SG is required to ensure the continuity of these essential meteorological observations, to improve the accuracy and resolution of the measurements, and also to add new measurements/missions.
The MetOp-SG Programme is being implemented by ESA in collaboration with EUMETSAT. ESA is developing the prototype MetOp-SG satellites, including most of the associated instruments, and is procuring, on behalf of EUMETSAT, the recurrent satellites. ADS is the prime contractor for the development and production of the two series of MetOp-SG satellites and leads a European industrial consortium including the entities responsible for the development of 6 of the 10 instruments that are part of the MetOp-SG Programme. RUAG Space AB is part of this consortium and is the company developing the Radio Occultation instrument.
The MetOp-SG will consist of two series of satellites (Sat-A and Sat-B), with three satellites of each series. This mission will provide continuous operation from polar orbit for more than 20 years.
The RO (Radio Occultation) instrument will be embarked on both the A and B satellites. The primary objectives of the RO mission are to provide temperature and water vapour profiles. The RO measurements will also be used to derive ionospheric information, the tropopause height, the height of planetary boundary layers and surface pressure.
The RO instrument is a GNSS receiver tracking signals from navigation satellites at the limb of the Earth. The occultation signals are measured via two occultation antennas, one facing the satellite velocity direction and the other the anti-velocity direction. A third antenna on the zenith side tracks satellites for the determination of the satellite position, velocity and time.
The RO instrument will provide 1850 occultation measurements per day, thanks to the simultaneous tracking of Galileo, GPS and BeiDou satellites. The instrument has the capacity to support a fourth constellation, e.g. GLONASS, in a possible future upgrade. GNSS signals on the L1 and L5 frequencies are tracked by means of both closed-loop and open-loop tracking. Both data and pilot signal components can be tracked simultaneously. This allows achieving a bending-angle retrieval accuracy better than 0.5 µrad at 35 km altitude.
This paper will present the RO instrument design, main performance and industrial progress. The RO Prototype Flight Model (PFM) was successfully delivered to ADS-Toulouse in 2020, and the Flight Model 2 (FM2) to ADS- Friedrichshafen in 2021.
MetOp-SG is the follow-on to the first generation series of Meteorological Operational (MetOp) satellites, currently providing operational meteorological observations from polar orbit. MetOp-SG is required to ensure the continuity of these meteorological observations, to improve the accuracy and resolution of the measurements, and also to add new measurements/missions. The MetOp-SG Programme is being implemented in collaboration with EUMETSAT. ESA is developing the prototype MetOp-SG satellites, including many of the associated instruments, and will procure, on behalf of EUMETSAT, the recurrent satellites. The MetOp-SG will consist of two series of satellites (A and B), with three satellites in each series. This mission is part of the EUMETSAT Polar System Second Generation (EPS-SG) and shall provide continuous operation for more than 20 years, with each satellite being designed for an in-orbit lifetime of 7.5 years.
The Wind Scatterometer (SCA) instrument, the successor of ASCAT, will be embarked on all B satellites. The SCA mission's primary objectives are to provide measurements of ocean surface wind vectors, soil moisture, snow water equivalent and sea-ice extent and type. Earth coverage of 99% will be reached within two days, with a nominal spatial resolution of 25 km.
The SCA is a real-aperture pulsed imaging radar operating in C-band (5.355 GHz). It comprises the SCA Electronic Subsystem (SES) and the SCA Antenna Subsystem (SAS). The SES commands and controls the SCA instrument, allowing RF pulse transmission and echo reception. The SAS comprises 6 antennas employing radiating slotted-waveguide panels (SAS Panels), internally fed by bar-line (true time delay) and waveguide Beam Forming Networks (BFN), configured in 3 antenna-pair assemblies (one MID and two SIDE antenna assemblies). After launch and release of the Hold Down and Release Mechanisms (HDRM), the two side antenna pairs (4 antennas) are deployed using the two Deployment Mechanisms (DLM). These SIDE antennas are all vertically polarised. Both MID antennas are dual-beam, with one beam vertically and the other horizontally polarised. Each of the antennas (HH, HV/VH and VV) acquires a continuous image of the normalized radar backscatter coefficient from two 650 km wide swaths, one at each side of the sub-satellite track.
In comparison to ASCAT, the required spatial resolution of the SCA is doubled, the swath width is increased and polarimetric capability is added. The main challenge of SCA is that accurate wind field measurements (wind speed and direction) require stringent radiometric stability and low bias over both orbit and lifetime. This has been addressed by very stable antennas and by a sophisticated internal calibration scheme covering the SCA electronic boxes: Digital Control Unit (DCU), RF Up-converter (RFU), High Power Amplifier (HPA) employing a Vacuum Tube Amplifier (VTA), High Power Redundancy Switch (HRS), Harmonic Filter (HFIL) and finally the SCA Front End (SFE) guiding the radar pulses and echoes to and from the relevant antenna ports.
This paper will present the SCA instrument design, calibration concept, main performance and industrial progress. The SCA Critical Design Review Close-Out was successfully achieved in March 2020. Testing of SES EM and SAS PFM was completed in 2021.
The contribution of our colleagues at CRISA (DCU), TAS Spain (RFU), Airbus Germany (HPA), CPI Canada (VTA), Honeywell UK (HRS, HFIL, SFE), SENER Spain (HDRM, DLM), Airbus Italy (BFN) and RUAG Sweden (SAS Panels) is key to meet the challenging SCA instrument mission requirements.
MetOp-Second Generation is a follow-on system to the first generation series of MetOp (Meteorological Operational) satellites, which currently provide operational meteorological observations from polar orbit. It is part of the EUMETSAT Polar System Second Generation (EPS-SG). From a space segment perspective, MetOp-SG is two series of satellites (Satellite-A and Satellite-B) with three satellites of each series. The aim of the mission is to ensure continuity of the essential operational meteorological observations from polar orbit, in the 2023-2043 timeframe, to improve the accuracy, resolution, dynamic range of the measurements and to provide new measurements/instruments compared to the first generation of EUMETSAT Polar System (EPS).
MWI (MicroWave Imager) is one of the five sensors embarked on Satellite-B and it is a conical scan total power microwave radiometer aimed at providing geolocated measurements in 26 channels ranging from 18.7 GHz up to 183.31 GHz, offering dual polarisation measurements up to 89 GHz for cloud and precipitation observations as well as water vapour and temperature gross profiles. Channels in the oxygen absorption regions between 50 and 60 GHz and at 118 GHz are one of the innovative features of MWI, enabling the retrieval of information on weak precipitation and snowfall, typically affecting the weather at high latitudes.
The MWI instrument has a direct heritage from predecessors such as the Special Sensor Microwave/Imager (SSM/I), the Advanced Microwave Scanning Radiometer-EOS (AMSR-E) and the Global Precipitation Measurement Microwave Imager (GMI). MWI will provide global microwave imaging data useful to retrieve information on precipitating and non-precipitating liquid and frozen hydrometeors, information on water vapour content and relevant surface characteristics (e.g. wind speed over ocean and sea-ice coverage).
The instrument collects the radiation coming from the Earth by means of a rotating antenna, composed of an offset parabolic reflector and a feed-horn cluster. The Earth is acquired over an angle of +/- 65° in azimuth for the fore view. On every rotation, two other angular sectors are used to calibrate the measurements, with the instrument looking at the cold sky and at a fixed calibrated microwave target.
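These two views support the standard two-point calibration of a total-power radiometer. A minimal sketch, assuming a linear receiver response (the counts and temperatures in the example are illustrative, not MWI values):

```python
def brightness_temperature(c_scene, c_cold, c_hot, t_cold, t_hot):
    """Two-point radiometric calibration: map scene counts to brightness
    temperature assuming a linear receiver response between the cold-sky
    view (t_cold) and the hot calibration target view (t_hot)."""
    gain = (t_hot - t_cold) / (c_hot - c_cold)   # K per count
    return t_cold + gain * (c_scene - c_cold)

# Example: scene counts of 5200 between cold (4000 counts, 2.7 K) and hot (9000 counts, 300 K)
print(brightness_temperature(5200, 4000, 9000, 2.7, 300.0))  # ~74 K
```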
The required radiometric sensitivity calls for having the receivers as close as possible to the horns, hence their implementation in the rotating part. The purpose of the receivers is to deliver signals whose magnitude is proportional to the incoming microwave power in the relevant band (i.e. the brightness temperature of the scene). Depending on specific channel requirements and technical constraints, direct detection or heterodyne configurations are used. Two units in the rotating part, the Front-End Electronics (FEE) and the Control and Data Processing Unit (CDPU), perform the power distribution and receiver signal digitization. A rotating joint (PDTD) allows the transfer of the electrical signals (digitized radiometry data, TM/TC, power supplies, heater lines) between the fixed part and the rotating part.
In parallel to the hardware development of the instrument, a first version of the L1B end-to-end simulator, composed of the on-board (Instrument Data Simulator, IDS) and on-ground (Ground Processor Prototype, GPP) data processors, has been developed. The GPP has already been used to verify the performance of the MWI Engineering Model on-ground and will also be exploited for a cross-validation with the operational MWI ground processor during the satellite in-orbit verification phase.
After the finalization of the instrument design, instrument qualification started with a Structural Thermal Model (STM) environmental test campaign (vibration, acoustic, shock, TVAC) and continued with the Engineering Model (EM) testing, which finished by the end of 2021. The EM testing includes radiometric performance under thermal-vacuum conditions, which will be presented at the symposium. Antenna testing of the low-frequency channels has also been performed on the refurbished STM, confirming good correlation between modelling and testing. Antenna testing of the high-frequency channels is planned on the instrument Proto-Flight Model (PFM) in summer 2022. The instrument PFM is currently under integration, with all critical PFM units delivered to the instrument prime.
The present paper will provide an overview of the MWI instrument objectives and design and present the latest development status and performance assessment.
MetOp-Second Generation is a follow-on system to the first generation series of MetOp satellites, which currently provide operational meteorological observations from polar orbit. It is part of the EUMETSAT Polar System Second Generation. From a space segment perspective, MetOp-SG is a series of two parallel satellites (Satellite-A and Satellite-B) with three satellites each. The mission will ensure continuity of the essential operational meteorological observations from polar orbit in the 2022-2042 timeframe. It aims to improve the accuracy, resolution and dynamic range of the measurements and to provide new measurements/instruments compared to EPS.
ICI (Ice Cloud Imager) is one of the five sensors embarked on Sat-B; it is a conically scanning total-power millimetre and sub-millimetre wave radiometer. It will provide unprecedented ice cloud, and in particular cirrus cloud, observations such as cloud ice effective radius and cloud ice water path, with limited altitude information.
ICI is a novel instrument. It will deliver calibrated and geolocated global data in 13 channels between 183 GHz and 664 GHz in 5 frequency bands. Observations with a 3 dB footprint of 16 km are made at an incidence angle of 53° with an overlap of 40%.
The instrument collects the radiation coming from the Earth by means of a rotating antenna (45 rpm), composed of an offset parabolic reflector and a feed-horn cluster. The scene is acquired over an angular range of +/- 65° in azimuth with respect to the flight direction. Two other angular sectors are used to calibrate the instrument, looking either at the cold sky or at a temperature-stabilized hot calibration target once per turn.
The instrument consists of a rotating part and a fixed part. The rotating part contains the receiver front end, the back ends for the spectral detection, and the control and data processing unit providing the power to the rotating part and the communication to the fixed part. The rotating part is covered by a sunshield. The fixed part contains the instrument control unit, the scan electronics and the calibration assembly composed of the hot calibration target and the reflector for the cold-sky view. A scan mechanism controlled by the scan electronics and a power and data transfer device connect both parts of the instrument.
In parallel to the hardware development of the instrument, a common end-to-end simulator for ICI and MWI is under development. It consists of the instrument data simulator and the ground processor prototype (GPP). The GPP will also be exploited to verify the performances of the ICI models on-ground and for a cross-validation with the operational ICI ground processor during the satellite in-orbit verification phase.
The test campaign of the engineering model has successfully been completed. The integration of the proto flight instrument model (PFM) is ongoing. The environmental and radiometric performance test campaign of the PFM will start in March 2022 followed by the EMC and the antenna test.
This paper will provide an overview of the ICI instrument objectives and design and present the latest development and performance status.
A prototype processor for water quality exploiting PRISMA satellite hyperspectral images has been developed in the framework of the ASI project “Sviluppo di Prodotti Iperspettrali Prototipali Evoluti” (Contract ASI N. 2021-7-I.0). The main objective of the project is the prototyping of a subset of Level 3 / Level 4 value-added products to be retrieved by processing Level 2 hyperspectral data. The Water Quality Prototype combines state-of-the-art techniques for the retrieval of the following parameters, useful for the characterization of both inland and coastal waters: phytoplankton, Total Suspended Matter (TSM) and bottom substrate. The prototype processor ingests the at-surface reflectance product and implements adaptive semi-empirical, semi-analytical and analytical methods for parameter retrieval.
An adaptive band-ratio algorithm was developed for the retrieval of the concentration of the primary phytoplankton photosynthetic pigment, chlorophyll-a (Chl-a), and of accessory pigments (e.g. phycocyanin). The processor exploits the diagnostic reflectance spectral features of Chl-a and phycocyanin. Chl-a is correlated with both the height and position of the red-edge scattering signal near 700 nm, which shifts towards longer wavelengths as biomass increases. Thanks to the spectral resolution of the PRISMA sensor, the diagnostic local maxima and minima can be identified adaptively, pixel by pixel, in the image scene. Moreover, the prototype processor implements dedicated algorithms to retrieve TSM concentration and water turbidity exploiting different wavelengths in the visible or near-infrared range. A bio-optical model inversion allows the retrieval of chlorophyll, coloured dissolved organic matter and non-algal particulate matter concentrations for optically deep water, or bottom substrate coverage abundances (e.g. macrophytes, sand, rocks) for optically shallow waters. Model parameters consider the Inherent Optical Properties specific to the case studies and different bottom spectral properties. The developed approach assumes a relative linear mixed distribution of up to three different substrates and a relaxed-constraint hypothesis for modelling the contribution of the substrates to bottom reflectance.
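As a schematic illustration of the band-ratio principle (the actual prototype locates the diagnostic maximum and minimum adaptively per pixel; the wavelengths and coefficients below are placeholders, not the calibrated values):

```python
def chl_a_band_ratio(r_rededge, r_red, a=25.0, b=-10.0):
    """Schematic red-edge band-ratio Chl-a retrieval (mg/m3): concentration is
    regressed against the ratio of the reflectance peak near 700 nm to the
    absorption minimum near 665-675 nm. a and b are illustrative coefficients."""
    return a * (r_rededge / r_red) + b

print(chl_a_band_ratio(0.030, 0.022))  # ratio ~1.36 -> ~24 mg/m3
```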
The proposed techniques have been tested on PRISMA data acquired over different Italian lakes (Garda, Mantua, Varese and Trasimeno) and coastal areas (northern Adriatic Sea). The preliminary results of the prototype product validation show good agreement with the in-situ data.
The rich spectral information captured by hyperspectral satellite sensors makes them useful for a number of real-world applications. Detection of a target with a known spectral signature, when this target may occupy only a fraction of a pixel, is an important issue in hyperspectral applications. In this work, we describe the material detection algorithm developed and validated in the framework of ASI Contract no. 2021-7-I.0, whose main objective is the prototyping of Level 3 / Level 4 value-added products based on hyperspectral satellite data.
A classical approach to the problem of sub-pixel material detection from hyperspectral data is based on the generalized likelihood ratio test (GLRT). However, this approach requires assumptions about the background distribution to be mathematically tractable, so the performance of the detector strongly depends on how well the background has been statistically characterized. The background is usually modelled with a multivariate Gaussian distribution, and the mean vector and covariance matrix needed to compute the GLRT are replaced with their sample estimates, retrieved using a subset of image pixels as the background. In this work, several approaches for computing the background statistics have been explored. The global approach uses the whole image to compute the background statistics. In the local approach, the image is divided into tiles to estimate the local background statistics. Finally, a cluster-based approach was developed, using several clustering options (e.g. K-means and Expectation Maximization), as well as several choices of cluster selection for modelling the background. In all cases, exclusion techniques were tested to remove pixels whose spectral signature is similar to the target, thereby increasing its contrast with the background.
The output of the developed prototype detector includes the GLRT values for the image, called the soft detection map or “heat map”, and the binary map created by applying a threshold on the GLRT to select pixels that likely contain the target material, named the hard detection map.
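A minimal sketch of this processing chain follows, using globally estimated background statistics and a basic matched-filter style detector; the actual prototype implements GLRT variants with local and cluster-based statistics and pixel-exclusion techniques, and all names below are illustrative.

```python
import numpy as np

def detection_maps(cube, target, threshold):
    """Matched-filter style detector with global background statistics.
    cube: (rows, cols, bands) reflectance; target: (bands,) signature.
    Returns the soft 'heat map' of scores and the thresholded hard map."""
    pixels = cube.reshape(-1, cube.shape[-1])
    mu = pixels.mean(axis=0)                               # background mean
    cov = np.cov(pixels, rowvar=False)                     # background covariance
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # regularized inverse
    d = target - mu
    scores = (pixels - mu) @ cov_inv @ d / np.sqrt(d @ cov_inv @ d)
    heat = scores.reshape(cube.shape[:2])                  # soft detection map
    return heat, heat > threshold                          # hard detection map
```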
The performance as well as the robustness of these detectors have been evaluated, with very promising results, on real hyperspectral data acquired by the Italian ASI PRISMA mission. The proposed solutions are worthy of interest for real applications.
The PRIS4VEG project aims at the development and optimization of techniques and algorithms for an innovative and quantitative monitoring of vegetation in agricultural and forest ecosystems using PRISMA data. In particular, the PRIS4VEG project focuses on the generation of functional plant trait (i.e. biophysical, biochemical and ecophysiological) products from PRISMA reflectance data and their use for the generation of higher level products related to ecosystem functional diversity and ecosystem heterogeneity.
In this contribution, we exploit hyperspectral data cubes collected by the PRISMA satellite to develop and test a hybrid retrieval workflow for forest trait mapping and to estimate spatial patterns of plant functional diversity of mixed forest ecosystems. The hybrid retrieval scheme consists in using physically based radiative transfer models for the forward simulation of a set of spectral responses as a function of the model input variables, and machine learning regression algorithms to learn the relationships between the simulated spectra and the model input variables. The model trained on the simulated dataset is then applied to the real remotely sensed spectra to estimate the traits of interest. Finally, the accuracy of the proposed retrieval scheme was evaluated against ground data collected in correspondence with PRISMA overpasses. Based on their ecological importance in terms of plant functioning, the plant traits investigated at leaf and canopy level were: leaf chlorophyll content (LCC), leaf water content (LWC), leaf mass per area (LMA) and leaf area index (LAI).
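A minimal sketch of the hybrid scheme is given below, with a placeholder forward model standing in for the radiative transfer simulations and a random forest as the regression algorithm; all names, the number of bands and the toy spectral response are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def simulate_spectrum(lai, n_bands=230):
    """Placeholder for a PROSAIL-type forward simulation (not a real RTM)."""
    base = np.linspace(0.05, 0.4, n_bands)
    return base * (1 - np.exp(-0.5 * lai)) + rng.normal(0, 0.005, n_bands)

# 1) Forward-simulate spectra for sampled values of the model input variable (here LAI)
lai_train = rng.uniform(0.1, 7.0, 500)
X_train = np.stack([simulate_spectrum(l) for l in lai_train])

# 2) Learn the inverse mapping spectra -> trait on the simulated dataset
model = RandomForestRegressor(n_estimators=100).fit(X_train, lai_train)

# 3) Apply the trained model to (here, simulated stand-ins for) real PRISMA spectra
prisma_spectra = np.stack([simulate_spectrum(3.2), simulate_spectrum(1.1)])
print(model.predict(prisma_spectra))   # estimated LAI per spectrum
```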
Strong correlations were found between measured and predicted values of LCC and LAI. Slightly worse results were achieved for LMA and LWC. Overall, the estimated LCC and LAI showed value distributions within the ranges expected from the field measurements. The LCC and LAI maps showed some similarities, but they were not fully correlated. Based on the tree species functional composition obtained from a previous classification performed using airborne data, we observed that LAI showed a larger inter-species variability. Also, LCC did not differ significantly between regeneration and mature stands, while LAI was higher in the regeneration stands.
Finally, emerging methods aimed at the estimation of ecosystem heterogeneity (representing the degree of non-uniformity in land cover, vegetation and physical factors) through spectral diversity metrics were applied to hyperspectral reflectance and vegetation indices. Information theory metrics were also computed on vegetation trait maps to characterise the ecosystem heterogeneity. The spatio-temporal patterns in the ecosystem heterogeneity maps were discussed in relation to several factors/processes expected to drive biodiversity changes in the study areas.
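One simple example of such an information-theory metric is Shannon entropy computed on a binned trait map in moving windows; a minimal sketch follows (the window size and binning are illustrative choices, not the study's settings).

```python
import numpy as np

def shannon_entropy_map(trait_map, n_bins=10, window=5):
    """Shannon entropy of a binned trait map in square moving windows,
    a basic heterogeneity metric (one of several options)."""
    binned = np.digitize(trait_map, np.histogram_bin_edges(trait_map, bins=n_bins))
    half = window // 2
    out = np.full(trait_map.shape, np.nan)
    for i in range(half, trait_map.shape[0] - half):
        for j in range(half, trait_map.shape[1] - half):
            _, counts = np.unique(binned[i-half:i+half+1, j-half:j+half+1],
                                  return_counts=True)
            p = counts / counts.sum()
            out[i, j] = -np.sum(p * np.log(p))   # higher entropy = more heterogeneous
    return out
```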
The results obtained in this study demonstrate that the retrieval of a broad set of leaf and canopy traits from space using hybrid retrieval schemes is feasible, paving the way for future operational algorithms for the routine mapping of vegetation traits from spaceborne sensors. Moreover, the use of hyperspectral reflectance can improve our ability to map biodiversity compared to state-of-the-art measures based on broad-band vegetation indices available from current platforms.
The cryosphere has a relevant role in understanding the changes on our planet. Because of its importance, the cryosphere has been investigated with different remote sensing instruments and techniques. Even though many products and studies exist based on multispectral and radar images, the exploitation of hyperspectral images is still in an early phase because of the limited availability of such sensors up to now. The study of the cryosphere by means of remote sensing hyperspectral data in the domain of reflected solar radiation (400-2500 nm) can contribute to determining key surface parameters such as albedo, grain size, liquid water content, concentration of organic and inorganic impurities, distribution of debris cover on glaciers, and proglacial lakes and their surface characteristics. In this context, PRISMA data represent a unique opportunity for the development and optimization of algorithms for the estimation of physical parameters of snow and ice, and are particularly well suited for investigations in complex morphologies such as alpine areas.
In this contribution, we present the main activities undertaken within the SCIA project funded by the Italian Space Agency. The main goal of the project is the development and optimization of algorithms to generate products useful for monitoring the cryosphere in different geographic and climatic contexts, with particular focus on mountainous alpine areas. The overall methodology is based on the joint exploitation of satellite images, in-situ measurements, and radiative transfer modeling.
Particular attention will initially be paid to the generation of surface reflectance corrected for topography and including adjacency effects. We will present simulations performed using radiative transfer models and the influence of snow parameters on surface reflectance. Preliminary algorithms for retrieving snow parameters, such as albedo, grain size and light-absorbing impurities, will also be presented at different scales. For assessing processes due to glacier-lake interaction, we will show some examples of deglaciation processes revealed by the amount of suspended solids in the lake. By exploiting state-of-the-art hypersharpening solutions (fusion of PAN and hyperspectral images), we map lakes in terms of number, size and shape, and compute the chromaticity coordinates and dominant wavelengths to support the analysis of lake water properties. Finally, some examples of debris-covered glaciers are also addressed. Despite the high level of accuracy and automation achieved in mapping ice and snow from satellite sensors, the recognition of supra-glacial debris is still an issue when the glacier snout is debris-covered; more generally, for glaciers that are partially or totally debris-covered, spectroscopic methods help detect such conditions.
In summary, imaging spectroscopy is promising for the detection of all these parameters and the possibilities offered by PRISMA to detect subtle spectral features can open new perspectives in the remote sensing of the cryosphere.
The idea of the PRIMARY (PRIsma for Monitoring AiR qualitY) project, recently co-funded by the Italian Space Agency (ASI), is to exploit the potential of the ASI PRISMA (PRecursore IperSpettrale della Missione Applicativa) mission for monitoring particulate-matter-related air quality at an urban/suburban scale. In particular, the project aims at using the information contained in PRISMA hyperspectral data to extract information on atmospheric particulate matter concentration and composition. At the present state of the art, no satellite-based product provides a ‘speciated’ aerosol load such as the one foreseen in PRIMARY, in particular in terms of spatial resolution and computational time for image processing.
Given the complexity of hyperspectral data inversion, PRIMARY will follow a physics-based machine learning approach. More specifically, we will exploit the ability of neural networks (NNs) to recognize even very weak and highly non-linear relationships between radiances derived from the PRISMA hyperspectral sensor (NN input) and aerosol chemical-physical quantities (NN output). To train the NNs, a novel dataset will be generated starting from a number of statistically significant atmospheric and aerosol profiles provided by the Copernicus Atmosphere Monitoring Service (CAMS), with the related aerosol optical properties obtained through a post-processing tool called FlexAOD (https://doi.org/10.5194/acp-19-181-2019). Finally, the corresponding synthetic measurements simulating the PRISMA acquisition in terms of spectral radiance are obtained using a radiative transfer model (e.g. libRadtran). This brand-new dataset will have global coverage in order to enable the operability of the PRIMARY algorithm on a large scale.
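The training pattern can be sketched as follows, with random placeholders standing in for the CAMS/FlexAOD/radiative-transfer simulation chain; the dimensions, the synthetic "truth" and all names are illustrative assumptions, not PRISMA or project specifications.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_samples, n_channels = 2000, 173                 # placeholder dimensions
radiances = rng.uniform(0, 1, (n_samples, n_channels))   # stand-in simulated spectra
aod = np.exp(-radiances[:, :20].mean(axis=1))            # stand-in aerosol quantity

# NN input: simulated radiances; NN output: aerosol chemical-physical quantity
nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500).fit(radiances, aod)
print(nn.predict(radiances[:3]))                  # retrieval applied to new spectra
```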
A crucial part of PRIMARY will be the validation phase and output refinements. Within the project, “ground truth” measurements will be collected during dedicated field campaigns to be held in the urban area of Rome (Italy), which has been selected as the pilot target area of the project. In this framework, ground-based and in-situ aerosol observations (e.g. from the BAQUNIN super-site) will be coupled with measurements from aerial systems (UAVs and/or helicopters) to better characterize the aerosol variability within the atmospheric column.
Thanks to the foreseen flexibility of the neural network algorithms, in the final phase of the project, the developed methodology will also be evaluated for data fusion schemes between PRISMA and other missions focusing on atmospheric research, in particular Sentinel-5P/TROPOMI.
The PRIMARY team is a multidisciplinary group with complementary expertise. It will be coordinated by the Earth Observation research group of the University of Rome “Tor Vergata”, which will also be engaged in the design and development of the NN algorithms. The University of L’Aquila, together with the Institute of Atmospheric Pollution Research (CNR-IIA), will work on model-based synthetic data generation. The Institute of Atmospheric Sciences and Climate (CNR-ISAC) will lead the validation activities in collaboration with SERCO, which will also develop an automatic procedure for PRISMA product analysis and co-location with data collected during the field campaigns. Moreover, several environmental agencies, at both regional and national level, will provide support for the definition of user requirements and the validation of the final products.
Methane emissions from fossil fuel production activities typically occur as so-called “point emissions”, namely plumes emitted from small surface elements and containing a relatively large amount of gas. The detection and elimination of these methane emissions have been identified as “low-hanging fruit” for reducing the concentration of greenhouse gases in the atmosphere.
Satellites offer a unique capability for global monitoring of methane emissions. The retrieval of methane from space typically relies on spectrally resolved measurements of solar radiation reflected by the Earth's surface in the shortwave infrared (SWIR) part of the spectrum (~1600-2500 nm). The Sentinel-5P TROPOMI mission allows methane emissions to be monitored around the globe at regional scales with daily frequency, but it lacks the spatial resolution needed to pinpoint single point emissions.
Imaging spectrometers, such as PRISMA, sample the 2300 nm region with tens of spectral channels at a typical spatial resolution of 30 m, which can be exploited for the detection of point emissions and their attribution to particular emitting elements. PRISMA is currently the only 400-2500 nm imaging spectrometer with potential for high-resolution methane mapping that is accessible to the international science community.
In this contribution, we evaluate the potential of PRISMA to map methane point emissions. Our retrieval of methane concentration enhancements is based on a matched-filter based algorithm applied to PRISMA spectra in the 2300 nm shortwave infrared spectral region. We perform a simulation-based sensitivity analysis to assess the retrieval performance for different sites. We find that surface brightness and homogeneity are major drivers for the detection and quantification of methane plumes with PRISMA, with retrieval precision errors ranging from 61 to 197 parts-per-billion in the evaluated images. The potential of PRISMA for methane mapping is further illustrated by real plume detections at different methane hotspot regions, including oil and gas extraction fields in Algeria, Turkmenistan, and the USA (Permian Basin), and coal mines in the Shanxi region in China. Our study reports several important findings regarding the potential and limitations of PRISMA for methane mapping, most of which can be extrapolated to upcoming satellite imaging spectroscopy missions.
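The matched filter at the core of such retrievals has a standard closed form; in its common formulation (the exact construction of the target signature and of the background statistics varies between implementations), the per-pixel enhancement estimate is

\[
\hat{\alpha}(\mathbf{x}) = \frac{(\mathbf{x}-\boldsymbol{\mu})^{\mathsf{T}}\,\boldsymbol{\Sigma}^{-1}\,\mathbf{t}}{\mathbf{t}^{\mathsf{T}}\,\boldsymbol{\Sigma}^{-1}\,\mathbf{t}},
\]

where \(\mathbf{x}\) is the pixel spectrum, \(\boldsymbol{\mu}\) and \(\boldsymbol{\Sigma}\) are the background mean and covariance estimated from the scene, and \(\mathbf{t}\) is the target signature, i.e. the unit methane absorption spectrum imprinted on the mean background radiance.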
Operational activities in the field of flood monitoring and prevention benefit from the availability of synthetic aperture radar (SAR) images. The main advantages of SAR data are synoptic views over wide areas, day-and-night acquisitions independent of weather conditions, and a reliable, high-frequency data acquisition schedule. The Copernicus programme, the European Union's Earth observation (EO) programme, opens the door to disruptive innovation in the domain of floodwater monitoring and, more broadly, emergency management, thanks to its Sentinel-1 SAR mission's capability to systematically, globally, and frequently acquire high-quality EO data at 20 m spatial resolution with a revisit time of 2-3 days over Europe. In order to rapidly translate the large volume of SAR data into floodwater maps and value-adding services, the European Commission's Joint Research Centre (JRC) recently added Global Flood Monitoring (GFM) products based on Sentinel-1 as a new component of its Copernicus Emergency Management Service (CEMS). The GFM products are obtained by processing all incoming Sentinel-1 SAR images within 8 hours of data acquisition to systematically monitor flood conditions at global scale. While past analyses were limited to pre-identified flood images in the framework of CEMS, the current implementation processes all incoming images fully automatically, thereby eliminating the time required for human intervention. To reach this degree of automation, the system takes advantage of the constantly updated 20 m Sentinel-1 data cube made available by the Earth Observation Data Centre (EODC) facilities.
The Sentinel-1 based retrieval algorithm, as one of the core components of GFM, must be both efficient and robust. Moreover, it is designed to balance two objectives: to detect water at high accuracy (i.e. permanent and seasonal water bodies, and floodwater), while minimizing false alarms due to water-look-alike surfaces that can be confused with floodwater. To reach a high degree of robustness, an ensemble-based mapping approach is implemented, which combines three independent floodwater mapping algorithms driven by different approaches. 1) LIST's algorithm requires three main inputs: the most recent SAR scene to be processed, a previously recorded overlapping SAR scene acquired from the same orbit, and the corresponding previously computed flood extent map. The change detection algorithm maps all increases and decreases of floodwater extent and uses this information to regularly update the flood extent maps. To do this, it uses a hierarchical split-based approach, region growing and adaptive parametric thresholding. 2) DLR's algorithm requires one scene as the main input and further exploits three ancillary raster datasets: a digital elevation model (DEM), areas not prone to flooding, and a reference water map. To map flood extent, it makes use of non-parametric hierarchical tile-based thresholding, region growing and fuzzy logic. 3) TU Wien's algorithm requires three input datasets: the SAR scene to be processed, a projected local incidence angle layer, and the corresponding parameters of a previously calibrated multi-temporal harmonic model. Based on these inputs, the probability of a pixel belonging to the flood or non-flood class is derived.
The final floodwater map is obtained by integrating the results of the three independently developed algorithms. Pixelwise flood classifications are based on majority voting, such that at least two algorithms are in agreement. To contextualize the ensemble-based observed flood extent maps, the GFM system also provides a reference water mask derived from multi-temporal Sentinel-1 data. The combination of the reference water mask with the observed flood extent product results in the observed water extent.
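The voting rule itself is simple; a minimal sketch over three co-registered binary maps (array names are illustrative, not the GFM implementation):

```python
import numpy as np

def ensemble_flood_map(flood_a, flood_b, flood_c):
    """Pixelwise majority vote over three binary flood classifications:
    a pixel is flagged as flooded when at least two algorithms agree."""
    votes = flood_a.astype(int) + flood_b.astype(int) + flood_c.astype(int)
    return votes >= 2

a = np.array([[1, 0], [1, 1]], dtype=bool)
b = np.array([[1, 1], [0, 1]], dtype=bool)
c = np.array([[0, 0], [0, 1]], dtype=bool)
print(ensemble_flood_map(a, b, c))  # [[True, False], [False, True]]
```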
The observed flood extent map is delivered with uncertainty values indicating the certainty with which a pixel is classified as flooded. Moreover, an exclusion map identifies all areas where the detection of water using Sentinel-1 data is hampered by the presence of dense vegetation, urban areas, radar shadow regions, permanently low-backscattering areas (e.g. sandy areas), and non-flood-prone areas, i.e. those that have a Height Above Nearest Drainage (HAND) value above 15 m. Finally, advisory flags are provided to make users aware of large-scale dryness and wet snow cover (both potential sources of over-detection), or of wind (a major source of under-detection). GFM is not only a system that systematically and fully automatically processes all images acquired by the Sentinel-1 mission in near real time; it also provides access to a global record of flood maps based on the processing of the entire Sentinel-1 collection since its start in 2015. This record provides valuable information to assess flood hazard and risk at 20 m resolution at a global scale. All these products are integrated in the Global Flood Awareness System (GloFAS), where end-users can visualize, analyze and download the data.
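The exclusion logic amounts to a union of boolean masks over co-registered raster layers. A minimal sketch, assuming hypothetical synthetic layers (the operational GFM masks are derived from dedicated datasets):

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (1000, 1000)

# Hypothetical ancillary layers, co-registered with the flood map.
hand = rng.uniform(0, 30, shape)            # Height Above Nearest Drainage [m]
dense_vegetation = rng.random(shape) < 0.10  # boolean masks
urban = rng.random(shape) < 0.05
radar_shadow = rng.random(shape) < 0.02
low_backscatter = rng.random(shape) < 0.03   # e.g. sandy areas

# A pixel is excluded wherever any condition that hampers Sentinel-1
# water detection holds, including non-flood-prone areas (HAND > 15 m).
exclusion = (
    dense_vegetation | urban | radar_shadow | low_backscatter | (hand > 15.0)
)
```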
The algorithm is currently being extensively tested in different regions all over the world. A first quantitative evaluation shows encouraging results regarding the accuracy of delineating the evolution of water bodies, and further improvements to increase the accuracy of the GFM product are ongoing.
In this presentation the audience will be introduced to the WorldWater project and to the results of the Round Robin inter-comparison of inland surface water detection and monitoring algorithms using Sentinel-1, Sentinel-2 and Landsat 8 imagery.
Background
Water withdrawals globally have more than doubled since the 1960s due to growing demand, and show no signs of slowing down. Population growth, socioeconomic development and urbanization are all contributing to increased water demand, while climate-change-induced impacts on precipitation patterns and temperature extremes further reduce the availability and predictability of water resources. In the future there will be an increasing need to balance the competing demands for water resources and to manage water supply more efficiently. The need for proper and timely information on water (non-)availability is probably the most important requirement for water management activities. In large, remote and inaccessible regions, in-situ monitoring of inland waters is sparse, and hydrologic monitoring can benefit from information extracted from satellite Earth observation (EO).
Earth observation is an essential source of information, which can complement national data and support countries in collecting regular information on the changes to their surface waters. Ever since the launch of the first Earth observation satellites in the early 1970s, the mapping and monitoring of surface water has attracted interest from researchers and practitioners in hydrology, environmental conservation, and water resource management. The field has gradually evolved, incentivized by the steady build-up of long-term archives of global satellite data and the computing resources for analyzing those data. A significant breakthrough in the adoption of EO solutions has been the European Commission Joint Research Centre's Global Surface Water Explorer [JRC-GSWE] (Pekel et al., 2016) and the Global Land Analysis and Discovery group's Global Surface Water Dynamics [GLAD-GSWD] (Pickens et al., 2020). Despite these developments and the large track record of successful case studies on surface water mapping, there is still a lack of clear, robust and efficient user-oriented methods and guidelines for using Earth observation data at scale and on an operational basis for surface water mapping and monitoring.
The WorldWater project
One of the main goals of the WorldWater project is to contribute to the formulation of new best practices for mapping and monitoring surface water with Earth observation (EO) data. A particular topic is to advance the monitoring of surface water extent dynamics by taking advantage of the new, enhanced capabilities of the latest generation of open and free satellite data from the European Copernicus programme. For the first time in history, the Copernicus programme provides users with access to globally and systematically acquired Synthetic Aperture Radar (SAR) data. This is a major breakthrough which can contribute to more robust monitoring in environments challenged by frequent cloud cover and periods with limited light, such as high latitudes. Yet surface water mapping with SAR data is still complicated by a number of scientific challenges (e.g., topography, wind, low-backscatter surfaces other than water), which is why the synergistic use of optical and SAR data emerges as an interesting alternative, with the potential to exploit the individual sensors' strengths while minimizing their weaknesses.
Lately there have been some promising studies showing the strength of such a sensor-fused mapping approach, yet there has been no systematic evaluation against other leading approaches for surface water mapping. Therefore, the WorldWater project organised a Round Robin exercise aiming at the inter-comparison of EO algorithms for surface water detection, using the latest generation of free and open satellite data from Sentinel-1, Sentinel-2 and Landsat 8. In total, the WorldWater Round Robin was joined by 15 organizations representing a mix of research institutions, private companies, government agencies and non-governmental organizations. All participants were asked to produce monthly maps of surface water presence at 10 m spatial resolution for 2 consecutive years over 5 challenging test sites. The outputs generated by the Round Robin participants across the test sites were evaluated individually and in cross-comparison using a harmonized independent reference dataset.
The initial Round Robin results indicate that single-sensor approaches can produce accurate and consistent water maps under ideal conditions, yet across a range of challenging environments the synergistic usage of optical and SAR data delivers more accurate and consistent outputs. By comparing the robustness of the different algorithms, the Round Robin will contribute to a better understanding of the pros and cons of EO approaches for mapping and monitoring the extent of inland open waters, and has identified shortfalls and areas for further research.
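To make the idea of optical-SAR synergy concrete, here is a deliberately simplified toy decision rule (not one of the Round Robin algorithms, whose implementations are not reproduced here): trust the optical water index where the scene is cloud-free and fall back on SAR backscatter under cloud. All inputs and thresholds are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (500, 500)

# Hypothetical, co-registered monthly composites (not Round Robin data).
ndwi = rng.uniform(-1, 1, shape)       # Sentinel-2 NDWI (green/NIR index)
vv_db = rng.uniform(-25, -5, shape)    # Sentinel-1 VV backscatter [dB]
cloud = rng.random(shape) < 0.4        # Sentinel-2 cloud mask

# Toy fusion rule: optical index where cloud-free, SAR low-backscatter
# criterion under clouds. Thresholds are illustrative only.
water_optical = ndwi > 0.0
water_sar = vv_db < -18.0
water = np.where(cloud, water_sar, water_optical)
```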
Dams are strategic tools for countries and their management of water resources. Within the Space Climate Observatory (SCO) initiative and the SWOT Downstream programme, the StockWater project aims to put in place a system for monitoring water volume in dams. It relies on satellite data and a dedicated processing system, thereby facilitating the work of the public authorities in this area.
Water resources monitoring, including surface and ground water, is a vital issue for governments and public institutions. Water resources are essential for society and economic activity (drinking water, irrigation, hydroelectricity, industry, flood control) and for natural and aquatic ecosystems.
Generally, reservoir stock information is collected and held by the local reservoir managers (public or private). Regional and national authorities might access this information with a certain latency, which depends on national water policies. Central authorities are then confronted with two issues: long latencies in retrieving water stock information, and sparse or non-existent information about small reservoirs.
The project proposes a global solution for monitoring reservoir stock volumes based on frequent satellite measurements. This solution rests on monitoring reservoir area with imaging satellites (Sentinel-1 and -2) using the Surfwater processing chain (Pena-Luque et al., 2021), which integrates a multitemporal approach to improve water masks. Furthermore, StockWater's innovation lies in estimating reservoir Area/Elevation/Volume relationships from a DEM alone, even one acquired after the reservoir's construction. Recent prototypes have been qualified on 29 reservoirs in France, ranging from 20 to 1600 hectares, yielding uncertainties below 15% on volume rates.
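The core of an Area/Elevation/Volume relationship is a hypsometric integration of the DEM. A minimal sketch under simplifying assumptions (a synthetic DEM of the empty basin; no correction for DEMs acquired after impoundment, which the actual StockWater method addresses):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical DEM of an (empty) reservoir basin, in metres; 10 m cells.
dem = rng.uniform(100.0, 140.0, (800, 800))
cell_area = 10.0 * 10.0  # m^2

# Hypsometric Area/Elevation/Volume curves: for each trial water level h,
# the area is the number of submerged cells times the cell area, and the
# volume integrates the water depth over those cells.
levels = np.arange(dem.min(), dem.max(), 0.5)
areas = np.array([(dem < h).sum() * cell_area for h in levels])
volumes = np.array([np.clip(h - dem, 0.0, None).sum() * cell_area
                    for h in levels])

# Invert the curve: a satellite-observed water area maps to a level
# and a stored volume.
observed_area = 2.5e6  # m^2, e.g. from a Surfwater-like water mask
level = np.interp(observed_area, areas, levels)
volume = np.interp(observed_area, areas, volumes)
```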
New versions are being evaluated over Spain, France and India with different in-situ datasets, and the system will be extended to Burkina Faso, Laos and Tunisia with 250 reservoirs in subsequent stages. It will also readily allow volume estimations from elevation measurements (Jason and Sentinel-3 altimeters with limited coverage, or SWOT globally).
The StockWater project, led by CNES and developed with CS Group and SERTIT, holds a partnership initiative with the CESBIO, GET and LISAH laboratories and their local partners in Tunisia, Laos, Burkina Faso and India. StockWater is open to new countries willing to participate.
How can we feed 10 billion people in 2050? Water is a critical input for agriculture and instrumental for achieving food security. But besides increasing food production, the agricultural sector also needs to use water resources more efficiently to respond to climate change and competition from other water use sectors. The challenge is thus not only to increase food production (in kilograms per hectare) but also to improve water productivity (in kilograms per cubic metre), the production per unit of water consumed. For this reason the Dutch Development Cooperation (DGIS) identified enhancing water productivity in the agricultural sector by 25% as a key policy priority.
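In symbols, water productivity relates yield to the volume of water consumed; the worked numbers below are illustrative, not taken from the programme:

```latex
% Water productivity (WP): yield per unit of water consumed.
\[
  \mathrm{WP} \;=\; \frac{Y}{V_{ET}}
  \qquad \left[\frac{\mathrm{kg}}{\mathrm{m}^3}\right],
\]
% where Y is the crop yield (kg) and V_ET the volume of water consumed
% through evapotranspiration (m^3). Illustrative numbers: a 6 t/ha crop
% consuming 500 mm of water (5000 m^3/ha) gives WP = 6000/5000 =
% 1.2 kg/m^3; the targeted 25% enhancement corresponds to 1.5 kg/m^3,
% i.e. the same yield from 4000 m^3/ha.
```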
In this context, and as part of its 'Remote Sensing for Water Productivity' programme, FAO developed and implemented the WaPOR portal with open-access satellite-based data on water productivity for Africa and the Near East. The FAO WaPOR database is updated every 10 days and supports national governments and other organisations in monitoring and reporting on (agricultural) water productivity and in identifying and mitigating water productivity gaps, which contributes to a sustainable increase of agricultural production.
Irrigation is key to achieving higher food production levels: only 20 percent of agricultural land is irrigated, but irrigated agriculture contributes 50 percent of the total food produced. At the same time, irrigation is responsible for approximately 70% of all water abstractions. New methods and technologies to monitor irrigation performance and improve water productivity are required to sustainably manage irrigation water and decide where to take action, and will also contribute to Sustainable Development Goal 6.4 on improved water use efficiency.
The FAO WaPOR database is a cost-effective tool to support irrigation management and improvement. IHE Delft developed a diagnostic framework that applies WaPOR data to assess irrigation performance indicators (Chukalla et al., 2021; Safi et al., 2021). The irrigation performance framework helps to detect water productivity variations in irrigated agriculture and to set targets and actions for improvement. It is a standard procedure that can be used by practitioners to translate open-access remote sensing data into actionable information.
Under the WaterPIP project this framework has been incorporated into an open-access tool that allows users to assess irrigation performance for any area covered by the WaPOR database. The tool is written in Python and will be available in a public Github repository by the end of 2021. The repository hosts tools to extract, interpret, analyse and visualize open-access geodata to improve water productivity.
The irrigation performance indicators incorporated build further on the work of IHE Delft and help to understand how agricultural systems are performing and their potential for improvement. The following irrigation performance indicators are currently included in the Github repository (a minimal computational sketch follows the list):
• Equity: The degree to which deliveries are considered fair by all.
• Adequacy: The ability of a system to reach targeted deliveries, in terms of quantity (discharge and/or volume), of the service provided to users.
• Reliability: The degree to which water delivery conforms to the prior expectations of users.
• Efficiency: System’s ability to minimize water losses due to oversupply.
• Productivity: Measures of the efficiency of production.
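As a sketch of how two of these indicators might be computed from WaPOR-like rasters (one common formulation, not necessarily the code in the WaterPIP repository; all input arrays are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical seasonal actual and potential evapotranspiration rasters
# for an irrigation scheme (mm/season), e.g. extracted from WaPOR.
eta = rng.uniform(300, 900, (400, 400))   # actual ET
etp = rng.uniform(700, 1000, (400, 400))  # potential/reference ET

# One common formulation: adequacy as the mean relative
# evapotranspiration, equity as the spatial coefficient of variation
# of seasonal actual ET across the scheme.
adequacy = float(np.mean(eta / etp))
equity_cv = float(np.std(eta) / np.mean(eta))

print(f"adequacy (mean ETa/ETp): {adequacy:.2f}")
print(f"equity (CV of ETa): {equity_cv:.2%}")
```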
The open-access Github repository is the foundation for customized apps to be co-developed with local service providers in early 2022. These customized apps will address specific user needs. One potential customized app would address irrigation scheme managers who need regular updates on irrigation performance to ensure all farmers have access to sufficient water throughout the growing season. Another would support governments and donors in identifying which areas are in need of irrigation improvement or rehabilitation and what the most critical problems to address would be.
The tool is currently applied for irrigation performance analytics for irrigation schemes in Sudan and Mali. Based on the area of interest provided, the tool automatically generates performance indicators on irrigation uniformity, equity, adequacy, and land and water productivity. The results set a target for improvement and quantify the scope of improvement in terms of an increase in yield and a simultaneous decrease in water consumption.
The increasing availability of open-access satellite data and derived data products and services makes satellite-based information accessible to a wider community. The Sentinel missions provide timely, continuous and independent satellite data on the land and how it is used. The FAO WaPOR database provides satellite-based products that are very suitable for irrigation performance assessment. The availability of open-access and automated tools for data interpretation such as the demonstrated irrigation performance indicator tool will further increase the uptake of satellite-based information outside the geospatial community.
References
Chukalla, A. D., Mul, M. L., van der Zaag, P., van Halsema, G., Mubaya, E., Muchanga, E., den Besten, N., and Karimi, P.: A Framework for Irrigation Performance Assessment Using WaPOR data: The case of a Sugarcane Estate in Mozambique, Hydrol. Earth Syst. Sci. Discuss. [preprint], https://doi.org/10.5194/hess-2021-409, in review, 2021.
Safi, A.R., Karimi, P., Mul, M., Chukalla, A.D., de Fraiture, C., 2021. Translating open-source remote sensing data to crop water productivity improvement actions. Agric. Water Manag. (submitted)
Consistent estimation of actual evapotranspiration (ET) from field to continental scales is critical for reliable monitoring and reporting of Sustainable Development Goal (SDG) indicator 6.4.1 (Change in Water Use Efficiency). With this in mind, FAO runs an online portal called WaPOR, which provides access to ET estimates at different spatial scales derived from observations of Landsat, PROBA-V (replaced by Sentinel-2 since 2020) and the Terra and Aqua satellites. The goal of the ET4FAO project (https://et4fao.dhigroup.com/) was to demonstrate the use of Copernicus data, especially combined observations from the Sentinel-2 (S2) and Sentinel-3 (S3) satellites, for consistent estimation of ET from 20 m to 300 m spatial resolution. It showed that Copernicus data, together with advanced data fusion methods and physical models, greatly improve the accuracy of ET retrieval at scales at which Terra and Aqua (MODIS sensor) data were previously used.
The ET4FAO project also exposed an inherent limitation of the S2-S3 data fusion approach, which is used to derive high-resolution land surface temperature (LST) by enhancing the spatial resolution of S3 LST observations (acquired at around 1 km resolution) using S2 optical observations (acquired at 20 m spatial resolution). This limitation is the inability to reproduce very low LST values in heavily irrigated fields, leading to underestimation of ET from those fields. The reason is the difficulty in predicting values which lie far beyond the LST range of the original S3 Sea and Land Surface Thermal Radiometer (SLSTR) LST. The most extreme LST values will not be present in the SLSTR data, since LST from different landscape features aggregates within the 1 km pixels. The underestimation of ET in heavily irrigated fields could represent a critical issue when it comes to SDG indicator 6.4.1 reporting, since irrigated agriculture is a focus area of this indicator.
In this follow-up study we aim to improve the ability of the S2-S3 data fusion approach to reproduce the low LST values present in irrigated fields by incorporating Landsat LST into the data fusion methodology. Based on comparisons performed in the ET4FAO project, Landsat LST with a spatial resolution of around 100 m is sufficient to capture those low temperatures. It is however important to ensure that the inclusion of Landsat LST does not compromise the capability of the data fusion approach to produce high-resolution LST on all dates on which cloud-free S3 observations were acquired. In addition, energy must be conserved when re-aggregating sharpened LST back to S3 scale to ensure physical plausibility and consistency in ET retrieval across spatial scales. Preliminary investigation has shown that this is possible. The developed method should be directly applicable to future observations from S2, S3 and the Copernicus high-priority candidate Land Surface Temperature Monitoring mission.
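The conservation constraint can be expressed as a residual correction applied after sharpening. Below is a deliberately simple, TsHARP-like regression sketch on synthetic arrays (not the actual ET4FAO fusion method), showing how sharpened LST can be forced to re-aggregate exactly to the coarse observation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical scene: 20 m predictors (e.g. S2-derived NDVI) and 1 km
# LST; here one coarse pixel covers a 50 x 50 block of fine pixels.
block = 50
coarse = (20, 20)
fine = (coarse[0] * block, coarse[1] * block)
ndvi = rng.uniform(0.0, 0.9, fine)
lst_coarse = rng.uniform(290.0, 320.0, coarse)

def aggregate(a, b):
    """Mean over b x b blocks (the coarse-pixel footprint)."""
    return a.reshape(a.shape[0] // b, b, a.shape[1] // b, b).mean(axis=(1, 3))

# Regression-based sharpening: fit LST against the predictor at coarse
# scale, then predict at fine scale.
slope, intercept = np.polyfit(aggregate(ndvi, block).ravel(),
                              lst_coarse.ravel(), 1)
lst_sharp = intercept + slope * ndvi

# Conservation step: add back the coarse-scale residual so the sharpened
# field re-aggregates exactly to the original coarse observation.
residual = lst_coarse - aggregate(lst_sharp, block)
lst_sharp += np.kron(residual, np.ones((block, block)))

assert np.allclose(aggregate(lst_sharp, block), lst_coarse)
```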
Additionally, in ET4FAO the validation of Copernicus-based ET focused on sites in a semi-arid Mediterranean climate. However, the FAO WaPOR portal is operational across Africa and the Middle East. Therefore, we extended the validation and evaluation of Copernicus-based ET beyond Lebanon, Tunisia and southern Spain into tropical and Sahelian regions. New sites were added for validation where field measurements are available, and for evaluation where WaPOR Level 3 (30 m) data are being produced. In addition, field measurements of LST from southern Spain from 2018 and 2019 will be used to directly validate the high-resolution LST produced through data fusion.
Agriculture, through growth-related evapotranspiration, is the largest water user on the Globe.
The Sustainable Development Goals demand both yield and agricultural water use efficiency (AWUE in kg fruit per m³ of evapotranspiration) to be maximized through sustainable farming practices. Monitoring systems have to be established, which document hot and cold spots of AWUE as well as the progress towards reaching the SDGs.
Remote sensing, and specifically the Sentinel-1 and -2 sensors, creates independent, homogeneous datasets on global agriculture but cannot by itself determine AWUE and yield. Therefore we established a global high-resolution monitoring system for AWUE and yield, based on coupling the high-resolution Sentinel-derived agricultural data streams with the sophisticated, dynamic agricultural crop growth model PROMET. PROMET is driven globally by regionalized meteorological inputs and produces, for each location on the Globe, a wide variety of crop growth trajectories based on different farm management practices, ranging from extensive, low-fertilizer to intensive, mechanized, high-fertilizer practices including irrigation. In each location, each selected farming practice creates a different course of simulated LAI for each crop that can grow there. Finally, the course of the LAI measured by Sentinel-2 in each location is compared with all simulated courses to determine the likely actual crop type and farming practice at the given location. The complete set of PROMET results for the selected farming practice then allows actual AWUE and yield to be inferred.
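The trajectory-matching step can be pictured as a nearest-neighbour search over the simulated ensemble. A minimal sketch with synthetic arrays (the operational system's distance measure and screening are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical inputs: one Sentinel-2 LAI time series for a pixel, and an
# ensemble of PROMET-like simulated LAI trajectories, one per candidate
# farming practice, all sampled on the same daily grid.
days = 200
observed_lai = rng.uniform(0.0, 6.0, days)
simulated_lai = rng.uniform(0.0, 6.0, (150, days))   # 150 practices
practices = [f"practice_{i}" for i in range(150)]    # hypothetical labels

# Select the practice whose simulated LAI course is closest to the
# observation (root-mean-square error); its modelled AWUE and yield are
# then taken as the estimates for this pixel.
rmse = np.sqrt(np.mean((simulated_lai - observed_lai) ** 2, axis=1))
best = int(np.argmin(rmse))
print(practices[best], f"RMSE = {rmse[best]:.2f}")
```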
A sample of 120 Sentinel-2 tiles was randomly selected, ensuring representativeness for all global agro-ecological zones. For these tiles, the procedure described above was carried out to determine crop-specific courses of LAI from Sentinel-2 time series on the one hand, and to simulate corresponding LAI courses together with AWUE and yield for an ensemble of farming practices on the other.
The results from the tiles were then upscaled to cover the global agriculture. Based on an example tile (33UUS) in Saxony, Germany, the process of acquiring the crop specific LAI time series as well as the calculation of AWUE and yield will be presented. Furthermore, results for globally distributed tiles will be shown.
The Sentinel-2 satellite imagery is processed using proprietary VISTA in-house software. This comprises sophisticated methods for cloud and cloud-shadow detection, atmospheric correction, and derivation of plant physiological parameters with the surface reflectance model SLC (Soil-Leaf-Canopy).
Since the LAI courses to be determined from Sentinel-2 are crop-specific, a crop type classification based on spectral characteristics and leaf area index (LAI) time series is carried out. Only pixels classified with a high degree of confidence as one of the considered crop types are selected for further processing. The classified crop types are the main crops of the studied Sentinel-2 tile. Furthermore, the use of the Copernicus High Resolution Layers (HRL) restricts processing to agriculturally used land.
A Sentinel-2 pixel has to fulfill a set of requirements to be selected. Beyond carrying a relevant crop type, its position should be sufficiently distant from the field boundary to guarantee homogeneity, and a continuous series of cloud-free Sentinel-2 overpasses should be available to create a time series of the LAI. The number of selected pixels is linked to the areal size or, if available, to the crop-specific agricultural area of the districts or counties concerned.
Through a crop-specific lookup-table inversion of the SLC model, precise plant physiological parameters (e.g. LAI) are derived for the selected pixels over the complete time series of satellite imagery. During processing, statistical methods are applied to identify and eliminate outliers. The result consists of harmonized, daily LAI values. The process of LAI derivation is schematically shown in Fig. 1.
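In its simplest form, a lookup-table inversion selects the table entry whose simulated spectrum is closest to the observation. The sketch below uses toy numbers, not actual SLC output:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical lookup table: SLC-like simulated surface reflectance in a
# few bands for a grid of candidate LAI values (toy numbers).
lut_lai = np.linspace(0.0, 7.0, 500)
lut_reflectance = rng.uniform(0.0, 0.5, (500, 10))  # 10 spectral bands

# Invert one observed spectrum: pick the LUT entry with the smallest
# spectral distance and report its LAI.
observed = rng.uniform(0.0, 0.5, 10)
cost = np.sum((lut_reflectance - observed) ** 2, axis=1)
lai_retrieved = lut_lai[int(np.argmin(cost))]
```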
Furthermore, extensive global simulations of AWUE and yield were carried out for < 200 combinations of crops and farming practices using the SuperMUC-NG High Performance Computing (HPC) facility of the Bavarian Academy of Sciences. A comparison of remote-sensing-derived and simulated LAI courses then makes it possible to identify the most probable actual farming practice on the selected field. The results achieved for the Saxony tile (33UUS) for maize in 2017 are shown in Fig. 2, which displays the selected fields and an exemplary comparison of simulated (grey, selected green) and observed (orange) LAI courses for a pixel in the NE of the tile. The comparison of simulated farming practices and Sentinel observations for maize revealed medium to high fertilization rates, an average maize yield across the tile of 9.61 t/ha (2017 statistics: 9.65 t/ha) and a high average AWUE of 2.27 kg/m³.
The global selection of Sentinel-2 tiles representative of regional agriculture allows our approach of determining yields and AWUE to be upscaled to the global level. Fig. 3 shows, as an example for maize and selected Sentinel-2 tiles, the complete simulation results for all farming practices, with yield (t/ha) as a function of water evaporated (m³) by the maize plants. Different combinations of rainfall, fertilizer application and irrigation result in different positions of each simulated scenario in the graph, represented by the black dots. The dots generally follow a saturation curve, which means that simulated maize yields in each tile approach region-specific maxima at the expense of more water used. Since the saturation curves are generally non-linear, AWUE increases with increasing simulated yields. The LAI curves measured by Sentinel-2 correspond to the green dots in the graphs, which likely represent the farming practices realized by the farmers under the local environmental conditions. In the graph for the USA, the green dots represent high yields and relatively low water consumption (large actual AWUE), whereas in Ethiopia the green dots represent low yields and relatively high water consumption (small actual AWUE).
Altogether, the Sentinel-2-derived information on the dynamic LAI development during the growth of maize, together with massive simulations of plant growth in the selected tiles (marked in red in Fig. 2), makes it possible, for the first time, to create a very differentiated picture of the actual AWUE of maize around the Globe. This allows global hot-spots of water waste in agricultural production to be identified, e.g. in Ethiopia but also in Kenya and Zambia, and the existing potential to improve both yield and AWUE to be realized in a sustainable manner by changing farming practices (e.g. introducing fertilizer and/or irrigation). The described procedure uses massive amounts of remote sensing information (approx. 18,000 Sentinel-2 scenes per year for the processed 120 tiles) and can easily be repeated on an annual basis within an operational, Sentinel-2-based AWUE monitoring system. The results will be made available on the Food Security TEP (https://foodsecurity-tep.net/).
The diffuse attenuation coefficient of downward irradiance, Kd(λ), is an important parameter in regulating physical and biogeochemical processes in the upper oceanic layer. Remote sensing of Kd(λ) allows for repetitive temporal and spatial measurements at the global scale. Accurate retrieval of Kd(λ) from ocean color satellite sensors is therefore of interest in constraining many oceanic processes.
Estimates of Kd(λ) from 3 different published algorithms, spanning explicit-empirical (NASA), semi-analytical (Lee et al., 2005), and implicit-empirical (Jamet et al., 2012) approaches, over 5 satellite sensors (MODIS-Aqua, MODIS-Terra, VIIRS-SNPP, VIIRS-JN, and Sentinel-3) were compared to autonomous profiling float (BGC-Argo) measurements of Kd(λ) and Ed(λ, 0-) at the visible wavelengths shared by the floats and the sensors. Photosynthetically Available Radiation (PAR) was also retrieved from both the floats and the sensors, and Kd(PAR) was computed. Advantages of BGC-Argo measurements compared to ship-borne ones include (1) uniform sampling in time throughout the year, (2) large spatial coverage, and (3) absence of platform shading.
Before use, the downwelling irradiances (Ed(λ)) of the ~37,000 retrieved float profiles were quality-controlled (QC) and extrapolated to the surface to calculate Ed(0-). The QC removed any dark signal along the Ed(λ)/PAR profile and identified profiles with clouds, wave-focusing, and spike occurrences based on profile shape (Organelli et al., 2016). Matched-up satellite observations of Remote Sensing Reflectance (Rrs) were used to compute satellite-derived Kd(λ) for comparison with the in-situ float measurements.
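For a homogeneous layer, Kd(λ) follows from the slope of log-transformed irradiance versus depth, and extrapolating the fit to the surface gives Ed(0-). A minimal sketch on a synthetic profile (the actual QC and extrapolation protocol is more involved):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical quality-controlled float profile: depths [m] and downwelling
# irradiance Ed(z) at one wavelength, following Ed(z) = Ed(0-) exp(-Kd z).
z = np.linspace(1.0, 40.0, 60)
ed = 1.4 * np.exp(-0.08 * z) * rng.lognormal(0.0, 0.02, z.size)

# Over a homogeneous layer, ln Ed is linear in depth: the regression slope
# gives -Kd, and the intercept the surface extrapolation Ed(0-).
slope, intercept = np.polyfit(z, np.log(ed), 1)
kd = -slope                  # diffuse attenuation coefficient [m^-1]
ed_0minus = np.exp(intercept)
print(f"Kd = {kd:.3f} m-1, Ed(0-) = {ed_0minus:.2f}")
```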
Over 2,000 matchups between float and satellite sensors retrieved Kd(λ) values ranging from ~0.01 to 0.53 m-1. Our results show that although all 3 algorithms are overall good predictors of Kd(λ), for a given sensor's matchups each algorithm produced statistically different Kd(λ) distributions from the others. Algorithm results diverged the most for low Kd values, i.e., for the clearest waters.
This study shows the value of using BGC-Argo floats for the validation of remote sensing products. The ongoing effort to instrument the fleet with hyperspectral radiometers should be encouraged, as it will provide even better constraints on algorithms associated with the upcoming NASA PACE mission, which will carry a hyperspectral radiometer.
Ocean colour data are defined as an Essential Climate Variable by CEOS (the Committee on Earth Observation Satellites). The information retrieved from ocean colour is primarily the spectrum of marine reflectance, from which other products are derived. In particular, it is possible to derive the chlorophyll concentration, an important variable reflecting the quantity of phytoplankton, which is at the basis of the marine food web and plays a major role in the global carbon cycle.
However, satellite observations require regular calibration and validation, to check for potential sensor drift, local algorithm mismatch, etc. To perform a robust validation of the chlorophyll concentration, reference in-situ measurements should cover a large fraction of the global ocean. Yet the survey performed by the Copernicus Cal/Val Solution (CCVS) H2020 project revealed a gap in available reference measurements, especially in polar regions (Ligi et al., 2021). Other issues identified concern data timeliness (which is critical for current operational missions such as Sentinel-3) and restrictive data policies on some measurements.
Thanks to their progressively increasing network, the BGC-Argo floats could address some of these limitations. Today, 218 BGC-Argo floats are equipped with fluorimeters to document the chlorophyll concentration. For ocean colour validation, these autonomous platforms offer the advantage of delivering data in near real time (after a first step of quality control) and of covering most of the global ocean.
BGC-Argo data are currently used operationally for the validation of ocean colour data, especially chlorophyll concentration, in the frame of the Copernicus Marine Service (OC-TAC – Quality Control and Validation, http://octac.acri.fr/).
The use of BGC-Argo data for the validation of the chlorophyll product from Sentinel-3 appears especially relevant to compensate for the relatively small number of traditional HPLC in-situ measurements in recent years. Thanks to their additional sensors (temperature, salinity, radiometer, etc.), BGC-Argo floats would help explain potential deviations between in-situ and remote sensing measurements (e.g., wind-induced mixing, particle origins). In addition, BGC-Argo data can be used for the validation of other ocean colour products, such as the particulate backscattering (bbp) or the diffuse attenuation coefficient (Kd).
Note that BGC-Argo floats are profiling floats which supply data within the water column from the surface down to 2000 m, while satellites observe the surface of the ocean. It is therefore first required to define a protocol to process BGC-Argo data and make them comparable with remote sensing data. Details of the different inter-comparison procedures and illustrations of the results will be presented.
Argo floats constitute a reliable observation system that supports global and full-depth ocean data sampling programs. The addition of bio-optical sensors facilitates the multi-decadal observation of ocean phenomena with respect to biogeochemistry (BGC-Argo). Satellite observations remain a vital source for checking the scientific quality of Argo data products in delayed mode (e.g. chlorophyll-a, radiometry). By construction, Argo data are now the main global source of in-situ observations for the Copernicus Marine Service.
BGC-Argo has recently been extended to marginal seas and optically complex waters, e.g., the Baltic Sea. Estimates of water quality indicators such as chlorophyll-a (chl-a), colored dissolved organic matter (CDOM), and total suspended matter (TSM) from satellite observations are of main interest for the routine monitoring programme in the area. However, robust estimation of these parameters using the standard ocean color algorithms has always been a demanding task because of the complex optical behavior of the water. To date, the validation of existing approaches (and the development of new approaches) that deal specifically with such optical conditions has not been well investigated, due to the sparsity of in-situ observations as well as the importance of seasonal variability.
Over 1600 BGC-Argo floats have been deployed as of November 2021, providing an on-demand capability for observing regions like the Baltic Sea. However, observations are limited by the spectral capabilities of the sensors. Current field exercises in the Baltic Sea are evaluating the addition of hyperspectral radiometers (type RAMSES) to BGC-Argo, which measure downward irradiance from the ultraviolet (280 nm) to the end of the visible range (720 nm). We exploit this novel BGC-Argo dataset to refine empirical and semi-analytical approaches by regional tuning. The core feature here is to investigate standardized, continuous, and synchronized bio-optical profiles of chl-a, CDOM, and particulate backscattering (which vary across short spatio-temporal scales) using one sensor, in conjunction with hyperspectral radiometric observations.
The plausibility of these approaches will be assessed by their performance in retrieving the final estimated products (CDOM, chl-a), comparing them to those measured by the floats and, additionally, to satellite matchups. We envisage presenting an example that exploits timely and consistent information (coupling in-situ observations from an advanced autonomous platform with satellite information) to devise local approaches that constrain the spatial and seasonal patterns of bio-optical properties, in order to observe optically complex regions.
Phytoplankton modulate the planetary cycling of major elements and compounds, channel solar energy into the marine ecosystem and help keep the Earth's climate stable. Understanding how our planet is changing requires monitoring essential climate variables, like phytoplankton, at synoptic scales. Satellite remote-sensing of ocean colour is a useful tool for this, but only monitors the surface layer. Ocean robotic platforms lack the synoptic coverage of satellites, but can monitor the subsurface. Combining satellite and ocean robotic monitoring offers huge potential for understanding and predicting changes in phytoplankton biomass. Historically, empirical functions have been used to describe the vertical structure of chlorophyll-a (a measure of phytoplankton biomass) and to extrapolate the surface measurements from satellite to depth, for applications like quantifying primary production. These include Gaussian functions, sigmoid functions, and statistical methods. Additionally, empirical approaches have been proposed to derive the vertical structure of phytoplankton size classes and taxonomic groups, but few methods have considered this in the context of what communities of phytoplankton a satellite can and cannot see, and in the context of vertical changes in epipelagic biogeography. Here, we describe an approach to partition a vertical profile of chlorophyll-a concentration into contributions from two communities of phytoplankton: one (1) that resides principally in the turbulent mixed-layer of the upper ocean and is observable through satellite visible radiometry; the other (2) living below the mixed-layer, in a stable stratified environment, hidden from the eyes of the satellite. The approach is tuned to a time-series of profiles from a Biogeochemical-Argo float in the northern Red Sea, and extended to reproduce profiles of particle backscattering, by deriving the chlorophyll-specific backscattering coefficients of the two communities and a background coefficient, assumed to be dominated by non-algal particles in the region. Analysis of the float data reveals contrasting phenology of the two communities, with community 1 blooming in winter and 2 in summer, community 1 inversely correlated with epipelagic stratification and 2 positively correlated. We observe a dynamic and variable chlorophyll-specific backscattering coefficient for community 1 (stable for community 2), positively correlated with light in the mixed-layer, suggesting seasonal changes in photoacclimation and/or taxonomic composition within community 1. The approach has potential for monitoring vertical changes in epipelagic biogeography and for combining satellite and ocean robotic data to yield a three-dimensional view of phytoplankton distribution.
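As an illustration of this kind of two-community decomposition, here is a minimal sketch on a synthetic profile (the authors' actual parameterization, tuned to the Red Sea float time series, may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical chlorophyll-a profile (mg m-3) vs depth (m), mimicking a
# surface mixed-layer population plus a deep maximum. Not the float data.
z = np.linspace(0.0, 150.0, 100)
true = (0.4 / (1 + np.exp((z - 40.0) / 5.0))
        + 0.6 * np.exp(-((z - 80.0) / 15.0) ** 2))
chl = true + np.random.default_rng(8).normal(0.0, 0.02, z.size)

def two_communities(z, b1, zm, s, b2, zdcm, w):
    """Community 1: surface value b1 tapering below depth zm (sigmoid).
    Community 2: Gaussian of amplitude b2 centred at zdcm with width w."""
    c1 = b1 / (1.0 + np.exp((z - zm) / s))
    c2 = b2 * np.exp(-((z - zdcm) / w) ** 2)
    return c1 + c2

# Initial guesses from the profile itself.
p0 = [chl[0], 30.0, 5.0, chl.max(), z[np.argmax(chl)], 20.0]
popt, _ = curve_fit(two_communities, z, chl, p0=p0, maxfev=10000)
b1, zm, s, b2, zdcm, w = popt
# c1 is the satellite-visible community; c2 is hidden below the mixed layer.
```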
Mesoscale vortices, or eddies, are ubiquitous energetic features whose potential to alter the biogeochemical regimes of the oceans arises, among other mechanisms, from their capacity to blend large-scale gradients (eddy stirring), to isolate and transport water masses over large distances (eddy trapping) and to locally shoal or deepen isopycnals (eddy pumping) [1]. While many studies have been dedicated to highlighting and deciphering these mesoscale biogeochemical mechanisms in the open ocean, the difficulties affecting altimetry products in the nearshore area constitute a strong barrier to the observation-based characterization of biogeochemical eddy dynamics near the shelf-slope.
Because of their transitional nature, capturing observational snapshots of eddies with a satisfactory degree of horizontal and vertical coverage is challenging. To overcome this difficulty, the method of composite analysis consists of gathering a large number of near-eddy data instances (from observation or model results) and exploring the variability of their local anomalies according to their relative position to eddies. The method thus aims at characterizing average eddy-induced perturbations and provided the basis for many of the recent advances in eddy biogeochemical studies [2].
The BGC-Argo program obviously provides a powerful asset for eddy composite studies, which derives from 1) the large availability of data provided under common technical protocols, 2) the richness of characterized biogeochemical variables, and 3) the continuity of data acquisition, which facilitates the characterization of local anomalies.
The first necessary step to any eddy composite analysis lies in the identification and mapping of mesoscale eddies from remote sensing altimetry data products, in order to provide the required information to express in-situ data in eddy-relative coordinates. Typically, this constitutes a critical step in the near-shore domain, where altimetric products are challenged and may finally limit the outcome of downstream composite analysis attempts.
Here, we evaluate different altimetry data sets derived for the Black Sea (2011-2019) and compare their adequacy to characterize eddy-induced subsurface oxygen and salinity signatures by applying a common composite analysis framework exploiting in-situ data acquired by BGC-Argo profilers.
The identification of eddy locations, contours, and properties was obtained by applying the same py-eddy-tracker procedure [3] to three altimetric sets that differ in terms of along-track preprocessing, optimal interpolation procedure (gridding), and spatial resolution. To complement the comparison, the same procedure was applied to equivalent model products issued from the CMEMS BS-MFC framework [4]. Oxygen and salinity subsurface anomalies were obtained from BGC-Argo profiles by applying a temporal high-pass filter to the original time series, and were relocated in eddy-centric coordinates specifically for each altimetric product.
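The relocation step amounts to expressing each profile position in coordinates normalized by the eddy scale. A minimal sketch with hypothetical positions (py-eddy-tracker provides the centres and contours; the exact relocation convention used in the study may differ):

```python
import numpy as np

# Hypothetical positions: an eddy centre and effective radius from the
# altimetry-based detection, plus a BGC-Argo profile location (degrees).
eddy_lon, eddy_lat, eddy_radius_km = 31.0, 43.0, 40.0
prof_lon, prof_lat = 31.3, 43.2

# Local equirectangular approximation to express the profile position in
# eddy-centric coordinates, normalized by the eddy radius.
km_per_deg = 111.32
dx = (prof_lon - eddy_lon) * km_per_deg * np.cos(np.radians(eddy_lat))
dy = (prof_lat - eddy_lat) * km_per_deg
r_norm = np.hypot(dx, dy) / eddy_radius_km   # < 1 means inside the eddy core
theta = np.arctan2(dy, dx)                   # azimuth relative to the centre
```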
The most recent altimetric data set, prepared with a coastal concern in the frame of the ESA EO4SIBS project, provides statistics of eddy properties that, in comparison with earlier products, are closer to those obtained from model simulations, in particular for coastal anticyclones.
More importantly, the eddy subsurface signatures reconstructed from BGC-Argo are more consistent when the EO4SIBS dataset is used to relocate the profiles into the eddy-centric framework, in terms of the spatial structure and statistical significance of the obtained subsurface mean anomaly.
We propose that the estimated error on the reconstructed mean anomaly may serve as an argument to qualify the accuracy of gridded altimetry products and that Argo and BGC-Argo data provide a strong asset in that regard.
Besides, the method allowed us to reveal intense subsurface oxygen anomalies associated with the Black Sea near-shore anticyclones, whose structure supports the hypothesis that the contribution of mesoscale circulation to the Black Sea oxygen cycles extends beyond oxygen transport processes and involves net catalytic effects on biogeochemical processes.
[1] D. J. McGillicuddy, (2016) Mechanisms of Physical-Biological-Biogeochemical interaction at the oceanic mesoscale, Ann. Rev. Mar. Sci., 8, 125–159.
[2] P. Gaube, D. J. McGillicuddy, Jr, D. B. Chelton, M. J. Behrenfeld, P. G. Strutton, (2014) Regional variations in the influence of mesoscale eddies on near-surface chlorophyll, J. Geophys. Res. C: Oceans, 119, 8195–8220.
[3] E. Mason, A. Pascual, J. C. McWilliams, (2014) A new sea surface Height–Based code for oceanic mesoscale eddy tracking, J. Atmos. Ocean. Technol., 31, 1181–1188.
[4] Ciliberti, S. A., et al. (2021) Monitoring and Forecasting the Ocean State and Biogeochemical Processes in the Black Sea: Recent Developments in the Copernicus Marine Service. Journal of Marine Science and Engineering, 9(10), 1146.
We present the results of joint satellite and BGC-Argo assimilation in an operational modelling application of the Mediterranean Sea biogeochemistry. Taking advantage of the frequent, large-scale satellite observations related to the microbial biology of the upper ocean, the assimilation of satellite ocean-colour observations in marine biogeochemical modelling has been successfully applied in recent years at both global and regional scales, with chlorophyll being the most commonly assimilated variable. On the other hand, a novel operational framework of biogeochemical observations has recently been introduced by BGC-Argo floats, which provide valuable insights into key vertical biogeochemical processes in the ocean. In the present work, we updated an existing 3D variational assimilation scheme to assimilate both satellite and BGC-Argo observations, and the assimilation of different combinations of satellite chlorophyll data and BGC-Argo nitrate and chlorophyll data was tested in the framework of operational modelling of the Mediterranean Sea biogeochemistry. The simulations were validated with respect to available independent non-assimilated and assimilated (before the assimilation) observations, showing that the assimilation of both satellite and float observations outperformed the assimilation of the two platforms considered individually. Moreover, the joint multi-platform assimilation was demonstrated to have impacts on the vertical structure of nutrients and phytoplankton, e.g., on the depth of the deep chlorophyll maximum and of the nutricline, with the impacts of the assimilation directly linked to the sampling frequency and size of the BGC-Argo network. Thus, considering the prospect of the BGC-Argo network matching the consolidated importance and relevance of satellite observation assimilation, multi-platform assimilation can improve model representation of both large-scale and mesoscale features and be beneficial for robust reconstruction in global and regional reanalyses. At the Mediterranean Sea scale, the outcomes of the model simulation assimilating both satellite and BGC-Argo data provided a consistent three-dimensional picture of the basin-wide differences in vertical features associated with summer stratified conditions. Indeed, the assimilated model results described relatively high variability between the western and eastern Mediterranean, with thinner, shallower but more intense deep chlorophyll maxima associated with steeper and narrower nutriclines in the western Mediterranean.
TRISHNA: AN INDO-FRENCH SPACE MISSION TO STUDY THE THERMOGRAPHY OF THE EARTH AT FINE SPATIO-TEMPORAL RESOLUTION
J.-L. Roujean (1), B. Bhattacharya (2), P. Gamet (1), M.R. Pandya (2), G. Boulet (1), A. Olioso (3),
S.K. Singh(2), M. V. Shukla(2), M. Mishra(2), S. Babu(16), P. V. Raju(15), C.S. Murthy(15), X. Briottet (4),
A. Rodler (5), E. Autret (6), I. Dadou (7), D. Adlakha(2), M. Sarkar(2), G. Picard (8), A. Kouraev (7), C. Ferrari (9), M. Irvine (10), E. Delogu (11), T. Vidal (12), O. Hagolle (1), P. Maisongrande (11), M. Sekhar(14), K. Mallick(13)
(1) CESBIO, Toulouse, France
(2) SAC, ISRO, Ahmedabad, India
(3) INRAE, Avignon, France
(4) ONERA, Toulouse, France
(5) CEREMA, Nantes, France
(6) LOPS, Plouzané, France
(7) LEGOS, Toulouse, France
(8) IGE, Grenoble, France
(9) IPGP, Paris, France
(10) INRAE, Bordeaux, France
(11) CNES, Toulouse, France
(12) ACRI, Toulouse, France
(13) LIST, Luxembourg
(14) IISC, Bengaluru
(15) NRSC, ISRO, Hyderabad
(16) SPL, ISRO, Trivandrum
ABSTRACT
TRISHNA (Thermal infraRed Imaging Satellite for High-Resolution Natural resource Assessment) is a cross-purpose, high spatial and temporal resolution thermal infrared Earth Observation (EO) mission that will provide observations in the domains of terrestrial and marine ecosystems, inland waters and coasts, urban areas, the cryosphere, the solid Earth and the atmosphere. It is an innovative Indo-French polar-orbiting mission that will overcome the limitations of TIR-optical observations from the Landsat series and ASTER. The high-quality optical-thermal imagery will be used to provide precise surface temperature, emissivity and albedo of vegetation, manmade structures, snow, ice and sea. Atmospheric fields such as cloud mask and type, aerosol load and water vapour content will be well described. TRISHNA products will improve our knowledge of radiative and heat transfer to quantify evapotranspiration, freshwater discharge, snow-melt runoff, the biogeochemical cycle and the urban heat island.
Energy transfer and exchanges of water and carbon fluxes in the soil–vegetation–atmosphere system need to be well described to enhance the role of environmental biophysics. Climate indicators include surface temperature, ocean heat, glaciers and Arctic and Antarctic sea ice extent. Land surface temperature (LST) and land surface emissivity (LSE) are Essential Climate Variables (ECV) (Global Climate Observing System/GCOS). LST is defined as the radiative skin temperature and is useful in agriculture (plant growth, water stress, crop yield, precision farming, early warning, freezing scenarios), hydrology (water cycle, catchment water, etc.) and meteorology. LSE differentiates the surface attributes (vegetation, soil, snow, water, rock, manmade material) composing the landscape.
Water use in agriculture represents 70% of global resources, making sustainable irrigation a key issue. Automatic detection and mapping of irrigated farmland is vital for many services in charge of water management. In that respect, the TIR signal brings, in addition to the visible and near infrared, key information on irrigated areas, which display the lowest LST values at the peak of growth. Global change imposes the implementation of more efficient irrigation practices at the scale of the agricultural plot for better control. The decrease of soil moisture after water supply can be evaluated from the surface moisture estimated by radar, but TIR observations remain better suited to monitor vegetation water stress and irrigation at the agricultural plot and to match the specific needs of each crop. With a pixel size of 57 m, a revisit of 3 days at noon and night, 6 VNIR bands (blue, green, red, NIR, water vapour, cirrus) and 4 TIR bands in the 8-12 µm range, TRISHNA will bring new insights. High-resolution thermography is of broad interest for manifold domains such as coastal areas, inland water, urban areas, the cryosphere, the solid Earth and the atmosphere. With a launch in 2025, TRISHNA will pioneer the fine and routine collection of TIR scenes of the entire Earth owing to its instrumental design, which proposes acquisitions based on an across-track scanner for 4 TIR channels (8.6, 9.0, 10.6 and 11.6 µm) and on a push-broom concept for 7 VNIR channels (485, 555, 670, 860, 910, 1380 and 1610 nm).
The TRISHNA mission will address early warnings of water scarcity and fire risk, including optimization of agricultural irrigation, rainfed crops, water consumption and its partitioning, plus food security. LST is a surrogate for soil water and near-ground air temperature, and indirectly for productivity. TRISHNA will help to monitor any reduction or increase in evaporation (E) and transpiration (T). Evapotranspiration (ET) is an ECV that reflects whether plant water needs are satisfied. Its accuracy and timeliness are central, as ET governs soil moisture at the surface and in the root zone through water extraction by plants, with large consequences for infiltration, runoff and the whole catchment water budget. The detection of water stress, deduced from a time series, is useful to manage irrigation or to warn of a potential lack of water threatening ecosystems.
A main objective is to describe mixing processes and water quality in coastal and estuarine areas from high-resolution SST (Sea Surface Temperature); also to enhance productivity and biodiversity assessment on the coast and for rivers and lakes, including warning and monitoring of water-borne diseases, to estimate energy fluxes in alluvial plains and aquifers, and to describe hazards (river floods, storm surges, inundated vegetation) linked to sea level rise.
Another main objective is to predict and quantify at short term the effects of the Urban Heat Island (UHI) on population health (Figure 5). A spectral unmixing method was developed to provide sub-pixel abundances and temperatures from radiance images. The main goal is to improve LST retrieval thanks to the enhanced footprint, to account for cavity anisotropy and environment effects, and to relate LST to air temperature. Urban areas are formed by complex 3D structures of mixed materials (cement, steel, bituminous roads, stones, bricks, glass, wood, grass, etc.) and LST can vary locally by a few K. Topics are the modelling of urban climate, hydrology, building stress, storm water flow and, generally, UHI.
A key scientific question for the cryosphere concerns the combination of thermal and optical high-resolution data to improve the prediction of the energy budget and the melting of snow- and ice-covered areas. In this regard, LST will be of added value, and how it may capture small-scale variability in mountainous areas is a relevant issue. Other issues concern the mapping of debris-covered glaciers, lake ice formation and development, and lake water dynamics.
Snow is a good thermal insulator regulating soil temperature and sea-ice growth. Where the soil is permanently frozen (permafrost), the presence of snow is critical for the preservation of the carbon storage in the soil. Besides, the evolution of the snowpack is a driver of the hydrological cycle in many watersheds.
Thermal anomalies may serve to anticipate volcanic eruptions, and TIR measurements may help better characterize volcanic ash clouds, geothermal resources and coal fires. Moreover, topography and roughness affect the surface energy balance of the solid Earth, whose main properties are grain size, porosity, water content and composition. Soil temperature is a primary tracer of energy and water exchanges between the surface and the ground.
Downwelling shortwave and longwave radiation at noontime will be quantified under all-sky conditions through retrieval of aerosol optical depth, columnar precipitable water, cloud mask, cloud type/phase and albedo using the seven optical and four TIR bands. In addition, surface air temperature and a dust aerosol index will be derived from TIR data. These will be used to understand aerosol-cloud interaction, improve NWP model skill, and support agro-meteorological applications and air quality monitoring. Assimilation experiments will be carried out, primarily with different land surface and atmospheric products as well as radiance assimilation, to evaluate the skill of NWP model weather forecasts at various space-time scales.
CalVal is an important component of the TRISHNA programme. It consists of developing strategies to support the validation of TRISHNA Level 1 and 2 products, namely LST, LSE and ET. Efforts concern the data preprocessing needed to remove or minimize turbulent and directional effects that jeopardize the 1 K specification on LST. Micrometeorological stations in different climates are equipped with TIR OPTRIS cameras, which are also flown on UAVs. Metrics and statistics are proposed as criteria for inter-comparison.
Cloud mask, atmospheric and anisotropy corrections (relief, directionality) are under development. Methods for retrieving LST and LSE use consolidated approaches such as TES (Temperature-Emissivity Separation). Such key variables, along with the ET product, will include an accuracy assessment and a quality flag. ATBDs are in preparation, notably the method of LST normalization. TRISHNA products will have global users such as FAO, GEOGLAM and Global Water Watch, meeting several Sustainable Development Goals (SDGs) as outlined by the United Nations. Moreover, TRISHNA activities serve the preparation of future fine-resolution TIR missions such as LSTM and SBG.
Evapotranspiration (ET) is a fundamental element of the hydrological cycle which plays a major role in both the surface water and energy budgets. At local scale, ET can be estimated from detailed ground observations, for example using flux towers, but these measurements are only representative of a very limited number of homogeneous areas. When regional information is required, e.g. for monitoring groundwater resources, ET can be mapped using thermal infrared and spectral reflectance data. Various ET models have been developed, but they often provide estimates spanning a wide range, suggesting high uncertainties in ET estimates.
We have developed the EVASPA (EVApotranspiration monitoring from SPAce) framework for estimating ET together with an estimation of its uncertainty. EVASPA is based on a multi-model, multi-data ensemble system that provides maps of ET, the global uncertainty and the contribution of each factor (models, input data) to that uncertainty. EVASPA includes different procedures (or models) for estimating ET based on evaporative-fraction formulations of the surface energy balance equations and/or on aerodynamic equations. The system requires various data as inputs, such as surface temperature, solar radiation, air temperature, wind speed, surface albedo, leaf area index (LAI), etc. EVASPA considers several sources of data for each model input, and the variability of the input data is used to estimate input uncertainties. Overall, ET estimates and uncertainties are obtained by averaging and analyzing the variability of the different calculations of ET based on the different models and the different inputs.
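Conceptually, the ensemble spread can be summarized and attributed to its factors with a few array operations. A minimal sketch over a hypothetical (model × input) ensemble for one pixel, not the EVASPA code itself:

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical ensemble: daily ET estimates (mm/day) for one pixel,
# indexed by model formulation and by input-data source.
n_models, n_inputs = 6, 5
et = rng.uniform(2.0, 6.0, (n_models, n_inputs))

# Ensemble estimate and global uncertainty.
et_mean = et.mean()
et_std = et.std()

# First-order attribution of the ensemble spread: variance of the group
# means over one factor, with the other factor averaged out.
var_models = et.mean(axis=1).var()   # spread attributable to model choice
var_inputs = et.mean(axis=0).var()   # spread attributable to input data
print(f"ET = {et_mean:.2f} +/- {et_std:.2f} mm/day")
print(f"model share: {var_models / et.var():.1%}, "
      f"input share: {var_inputs / et.var():.1%}")
```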
In this study, EVASPA was applied to airborne data acquired over the Grosseto area in Italy in the frame of the ESA SurfSense experiment (high spatio-temporal Resolution Land Surface Temperature Experiment) in support of the LSTM mission project (Land Surface Temperature Monitoring). Surface temperature data were estimated using the Temperature Emissivity Separation (TES) method and considering different sources of atmospheric profiles of temperature and vapor. The TES method was applied on LSTM-like data which were constructed from the multispectral measurements performed with the TASI instruments. Meteorological data were obtained from different re-analysis products and ground measurements. Albedo and LAI were derived from reflectance measurements with the HyPlant instrument using different algorithms.
This analysis showed that the uncertainties on ET estimations can be very large, up to 4 mm d-1 for the aerodynamic models and 3 mm d-1 for the evaporative-fraction models. However, better screening of the input data and of model formulation validity made it possible to decrease the uncertainties to 3 mm d-1 (aerodynamic models) and 1.5 mm d-1 (evaporative-fraction models). The main uncertainty sources were related to solar radiation estimates, ground heat flux estimates and model formulations, in particular the calculation of the evaporative fraction and the parameterization of the roughness length for heat transfer (aerodynamic models). In the case of the aerodynamic models, wind speed and air temperature also had a significant impact, which explains the higher uncertainties for these models. In the frame of the development of future thermal infrared missions such as LSTM or TRISHNA, it is important to note that the sensitivity of ET estimates to the uncertainty in the derivation of surface temperature from the thermal infrared data was lower than the impact of the other sources of uncertainty.
Multispectral thermal infrared data (TIR: 8-12 micron) are widely used to produce a variety of critical long-term science data records such as Land Surface Temperature and Emissivity (LST&E), and Evapotranspiration (ET). The ECOSTRESS TIR mission launched in mid-2018, and upcoming TIR missions including LSTM, TRISHNA, and the NASA Surface Biology and Geology (SBG) in 2025-2028 will bring a golden age of high spatial resolution (< 100 m), multispectral TIR data, with a potential for a twice-daily global revisit. NASA’s SBG will include both a TIR and VSWIR instrument and is a core component of NASA's new Earth System Observatory (ESO) to improve our understanding of vegetation processes, aquatic ecosystems, urban heat islands and public health, snow/ice, and volcanic activity. In this study we explore the use of ECOSTRESS TIR data in urban heat science and applications. Rapid 21st century urbanization combined with anthropogenic climate warming are significantly increasing heat-related health threats in cities worldwide, and partnerships between city policymakers and scientists are becoming more important as the need to provide data-driven recommendations for sustainability and mitigation efforts becomes critical. We will highlight the use of ECOSTRESS TIR data in monitoring the variability of intra-urban heat islands during extreme heat events over the diurnal cycle, for pinpointing hotspot locations in cities to optimize urban heat mitigation interventions such as cool roofs, cool pavements, cooling centers, and urban greening; and to better understand the thermal properties of urban man-made materials relative to the urban biosphere.
Landsat-9, launched on September 27, 2021, and Landsat-8, launched on February 11, 2013, both carry on-board versions of the Thermal Infrared Sensor (TIRS). The TIRS instruments are very close copies of each other: two spectral bands, pushbroom sensors with three Sensor Chip Assemblies (SCAs) that cover the 15-degree field-of-view. Each spacecraft has a 16-day revisit time, and the two are placed in orbits eight days offset from each other. Modifications were made to Landsat-9 TIRS-2 to upgrade it to a Class-B mission, meaning it has additional redundancies. Also, baffling was added to the Landsat-9 TIRS-2 telescope to mitigate the stray light issue that has plagued Landsat-8 TIRS.
The radiometric performance of the TIRS instruments is monitored using the on-board variable temperature blackbody and views of deep space. Maneuvers to look at and around the moon have provided an assessment of the stray light. The absolute calibration is monitored by vicarious calibration teams at NASA/Jet Propulsion Lab and the Rochester Institute of Technology.
The responsivity of the Landsat-8 TIRS instrument has been degrading since November 2020. An apparent contaminant slowly building up on the focal plane is changing the detectors' responsivity non-uniformly across the focal plane. The responsivity has dropped by ~2.5% in Band 10 and ~5% in Band 11, though through updates to the calibration parameters the image products should remain calibrated to within 0.5%.
Landsat-9 completed a three-month commissioning phase in January 2022. The radiometric performance and initial absolute calibration were assessed. The instrument stability is monitored with the blackbody data over multiple time frames. The noise performance of the instrument is monitored using blackbody data at multiple set-point temperatures.
This paper will cover the recent radiometric performance assessments for both the Landsat-8 TIRS and Landsat-9 TIRS-2 instruments.
Viewing and illumination geometry are known to have significant impact on the remotely sensed retrieval of land surface temperature (LST). Differences appear greatest for areas with mixed components contributing to the pixel-integrated signal, as well as to the amount of shadowing.
Radiative transfer models have been used to assess and, in some cases, adjust for these directional effects on remotely sensed LST, typically with the aim of delivering direction-independent equivalent values. However, the use of such models in many cases remains under-evaluated against in-situ data, due in part to the difficulty of retrieving data for the different components in a scene at a variety of viewing and illumination geometries over a time period in which the real surface temperature and sun-sensor geometries are invariant. With LST now classified as an Essential Climate Variable, it is imperative that further work is done to ensure these directional effects are well understood and, where possible, accurately accounted for, particularly when considering any future satellite mission design (e.g. LSTM, SBG, TRISHNA).
To address this issue, a joint ESA-NASA funded airborne campaign (SwathSense) was conducted in summer 2021 – focused on collecting a unique multi-geometry set of airborne and in-situ data over agricultural and urban sites in the UK and Spain.
In the UK, NASA-JPL’s long-wave infrared (LWIR) state-of-the-art Hyperspectral Thermal Emission Spectrometer (HyTES) was flown on a UK research aircraft alongside Specim's FENIX 1K visible and shortwave infrared (VIS-SWIR) hyperspectral imager. In Spain, flights were conducted over agricultural regions with the LWIR hyperspectral TASI imager and VIS-SWIR CASI imager in collaboration with the LIAISE (Land surface Interactions with the Atmosphere over the Iberian Semi-arid Environment) campaign. In-situ field measurements from over fifty sensors – including unmanned aerial vehicles (UAVs) and a multi-angle goniometer equipped with a wide-angle field-of-view thermal camera – were collected to enable assessment of the remote sensing LST retrievals and to understand the extent of any real LST change over the period of the airborne data collections.
We provide an overview of the SwathSense campaign, review current results from the HyTES, TASI and in-situ sensor analyses, and present preliminary findings from the campaign together with the implications these may have for future satellite missions.
Satellite measurements of thermal infrared emission (TIR) provide key diagnostics into the water use of crops and natural ecosystems. Plant growth goes hand-in-hand with the loss of water through the stomata as the leaves take in carbon dioxide. When this water evaporates the required energy is drawn from the leaf surface resulting in a cooling effect and latent heat flux into the surrounding air. Together with evaporation of surface soil water this forms the main terrestrial source of latent heat flux into the atmosphere.
Various algorithms have been developed to exploit the measured thermal impact on land surface temperature and provide diagnostic estimates of land evaporation (E). Besides TIR they typically require basic visible and near-infrared reflectance to factor in the vegetation coverage and partition E into its source components. At the continental scale these products provide modelers with constraints on the water and energy cycle in an area of high uncertainty in Earth system models: the interface between surface and atmospheric models. Provided the spatial resolution is sufficiently high (100 m or better), the resulting products can provide managers with early information on crop condition and overall ecosystem health, or a means to systematically monitor crop water use over large domains. It is especially this last application of TIR observations that is the most demanding in terms of both spatial and temporal resolution.
Fortunately, the availability of high-resolution thermal imagers to capture cropland functional responses is slated to improve dramatically in the coming years thanks to the continued Landsat and Sentinel programs, as well as new missions like NASA's Surface Biology and Geology (SBG) and ESA's Copernicus Land Surface Temperature Monitoring (LSTM). With this prospect in mind, the science community will benefit from integrating these TIR observations into harmonized land surface temperature products. This requires fundamental understanding of the characteristics of each instrument and its on-orbit performance.
This presentation will focus on research to support the adoption of thermal infrared imagery for improved monitoring of water use and drought resilience, and its integration within hydrological model frameworks. We will present an analysis of the spatial resolving power of current thermal imagers and the implications for temperature and evaporation retrieval. Bridges were found to provide sufficient thermal contrast with the water surface to quantify the line-spread function of thermal imaging systems. The full-width-at-half-maximum of a Gaussian beam model fitted to this transect quantifies the on-orbit spatial resolution of different imagers. This method is used to characterize the spatial resolution of the thermal bands of Landsat (7, 8, and 9) and the ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS). The analysis also investigates spatial sample size growth versus scan angle. The goal of this research is to facilitate an improved fusion of current and future satellite observations into harmonized products with superior temporal and spatial characteristics.
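To make the bridge-transect idea concrete, the following minimal sketch fits a Gaussian line-spread model to a synthetic, purely illustrative across-bridge temperature transect and reports the full-width-at-half-maximum; it is our reading of the method described above, not the authors' code.

    # Sketch: estimate on-orbit spatial resolution from a thermal transect
    # across a bridge. The transect values below are synthetic placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amplitude, center, sigma, offset):
        # Gaussian model of the bridge's line-spread response
        return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2) + offset

    # Hypothetical across-bridge transect: distance (m), brightness temperature (K)
    distance = np.linspace(-300, 300, 41)
    temperature = 285.0 + 12.0 * np.exp(-0.5 * (distance / 45.0) ** 2)

    params, _ = curve_fit(gaussian, distance, temperature, p0=[10.0, 0.0, 50.0, 285.0])
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(params[2])  # FWHM of fitted beam
    print(f"Estimated on-orbit spatial resolution (FWHM): {fwhm:.0f} m")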
This presentation will set the scene for the GEOGLAM session by providing a high-level overview of GEOGLAM community progress and discuss where we are heading in the next decade.
The first GEOGLAM Crop Monitor was launched almost ten years ago in response to the G20 policy mandate to support global commodity market transparency. The mandate has since expanded to include food insecure regions of the world, with the launch of the Crop Monitor for Early Warning in 2016. More recently, work has been extended to the co-development of national crop monitoring systems in less developed nations. Now, as GEOGLAM looks to the future, it has implemented community activities focused on capacity co-development, defining a set of Essential Agricultural Variables (EAVs), and addressing gaps in open access to in situ data. Together our collective action is addressing policy challenges through the provision of improved EO-based information for decision makers at multiple levels, understanding that better information results in better decisions.
As we reflect, the first decade of progress has positioned GEOGLAM to make an even greater contribution to some of the major policy challenges of our time. These include the Sustainable Development Goals, climate mitigation, climate adaptation, and disaster risk reduction. Consequently, this is an appropriate time to celebrate and take stock of our progress, while focusing on our response to these emerging challenges.
The G20 GEOGLAM Crop Monitor has been a source of open, timely, and science-driven information on global crop conditions in support of market transparency and early warning of production shortfalls since 2013. Starting with the Crop Monitor for the Agricultural Market Information System (AMIS), which monitors crop conditions across the major producing and exporting countries of the world, the Crop Monitor initiative expanded to cover food-insecure regions of the world in 2016 with the launch of the Crop Monitor for Early Warning. More recently, the Crop Monitor has been scaled to support regional and national monitoring systems, co-developed in partnership with the national ministries and regional agencies that now house these programs. The last ten years have seen significant advances in the field of agricultural monitoring and the transition of research into operational systems directed at decision support. Looking ahead, we face new threats to global agricultural production and food security: the increasing frequency and severity of weather extremes and changes in weather patterns due to climate change, poised to exert the greatest impact on regions already vulnerable to food insecurity; increasing conflict and insecurity; and supply chain disruptions. These challenges reinforce the need for strong action from the research community to continue to increase the adoption of operational technologies, supporting the capacity of governments and the humanitarian community to work proactively to mitigate impacts. While much progress has been made, there are still information gaps that the GEOGLAM Crop Monitor community is prioritizing as we look ahead to the next decade, including improved cropland and crop type maps, ground truth validation, agricultural impact assessments, long-lead weather forecasting and extended outlooks, in-season yield forecasting, and continued engagement at the national level. Across these research topics, there is an increased emphasis on connecting information from the field to global scales to strengthen decision support and increase the reliability, accuracy and timeliness of agricultural assessments.
GEOGLAM's Earth Observation Data Coordination Team's efforts have resulted in considerable advances in the acquisition, accessibility, and ease of use of satellite data for agriculture monitoring. As policy frameworks have multiplied, and our G20 mandate has expanded, so too have the observation and product requirements to meet the demand for information. Critical to our mandate is the ability to measure the state and change of agriculture, as well as to forecast what is yet to come, in the face of climate change, food insecurity, and land degradation. Following the model of the Essential Climate Variables (ECVs), GEOGLAM has developed a set of Essential Agricultural Variables (EAVs) that are driven by user and policy needs for agriculture information. There are 14 “Top Priority User-Facing Variables” (TPUFVs) intended for non-remote sensing audiences (policy and decision support), 21 “Supporting Variables” that underpin the analysis to generate the TPUFVs, and 6 “ECVs for Agriculture” that are similar to the Supporting Variables but build synergy with the existing ECVs. Outputs of the GEOGLAM EAV development process include a) specifications (definitions, application relevance, spatial unit of product output (e.g. field or administrative unit), frequency of update, and error assessment requirements), b) a process for identifying/labeling satellite-derived products as compliant with GEOGLAM EAV specifications, c) updated Earth observation requirements, and d) an analysis of gaps and barriers to EAV product generation. Gaps and barriers include data acquisition, data access, methodological development, in situ data for cal/val, training/capacity, and computational power.
This talk will briefly outline the specifications and focus mainly on progress to date while illustrating linkages with existing GEOGLAM-contributing projects, including Sen2Agri, Sen4CAP, Sen4Stats, NASA Harvest GLAM, ESA World Cereals, and more. It will also outline linkages to other GEOGLAM efforts, including Capacity Development, Computation/IT, and In Situ Data Coordination, and provide the vision for the next 10 years of GEOGLAM’s EO data coordination work.
CropWatch Cloud is a multi-scale crop monitoring platform. It provides cloud-based services, including modules, areas of interest, indices and user interface customization for users and stakeholders, by encapsulating and publishing the CropWatch indicators, methods and models as web application programming interfaces (APIs) that can be composed to execute user-specific analyses. Analyses are automatically parallelized across many CPUs in the Alibaba Cloud, drastically decreasing the time necessary to complete the computations. CropWatch Cloud allows users to set up their own projects and systems by invoking the available APIs, covering both algorithms and indicators, to carry out crop monitoring for any region of interest using a web browser or a local integrated development environment.
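The composable-API pattern described above can be illustrated with a short, entirely hypothetical sketch; the endpoint URL, indicator names, parameters and response structure below are placeholders, not the real CropWatch interfaces (consult the CropWatch Cloud documentation for those).

    # Hypothetical sketch of composing web APIs for a user-defined analysis.
    # The URL, indicator names and parameters are illustrative placeholders.
    import requests

    BASE = "https://example.org/cropwatch/api"  # placeholder, not the real endpoint

    def fetch_indicator(indicator, region, start, end):
        # Request an indicator time series for a region of interest
        resp = requests.get(
            f"{BASE}/indicators/{indicator}",
            params={"region": region, "start": start, "end": end},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()

    # Compose two indicator calls into a simple user-specific analysis
    vci = fetch_indicator("vegetation-condition-index", "province-01", "2021-01", "2021-12")
    rain = fetch_indicator("rainfall-anomaly", "province-01", "2021-01", "2021-12")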
CropWatch has successfully carried out capacity building on crop monitoring through module customization in Mozambique, Mongolia and Southeast Asian countries. All indicators in CropWatch were customized, calibrated and made available down to district level in Mozambique, and the Mozambique National Agro-Meteorological Bulletin has incorporated monitoring results from CropWatch Cloud at province and district levels since June 2018. Cambodia and Viet Nam also use CropWatch for their own regional and national crop assessments, and CropWatch automatically feeds information to Thailand's Agri-MAP. The drought monitoring functions of CropWatch were customized as a ten-indicator DroughtWatch system in 2014 for Mongolia, which regularly disseminates drought monitoring bulletins to government bodies. DroughtWatch has also been localized and used in Cambodia, Myanmar and Sri Lanka. Recently, capacity building actions have been carried out for 14 countries across Africa and Asia.
Supported by the data and information provided by the CropWatch APIs, CropWatch Cloud also offers a website where users all over the world can run their own analyses, which greatly enhances the transparency of crop monitoring information. CropWatch also provides timely updates on crop information and fills information gaps in the developing world to support climate adaptation and policy making globally. CropWatch Cloud is an integrated platform designed to empower not only remote sensing scientists but also much wider user communities that lack the technical capacity to conduct crop monitoring, especially in developing countries.
The Asia-RiCE initiative (http://www.asia-rice.org) has been organized to enhance rice production estimates through the use of EO, and seeks to ensure that Asian rice crops are appropriately represented within GEOGLAM. Asia-RiCE is composed of national teams that are actively contributing to the Crop Monitor for AMIS and developing technical demonstrations of rice crop monitoring using both Synthetic Aperture Radar (SAR) data (Radarsat-2 from 2013; Sentinel-1 and ALOS-2 from 2015; TerraSAR-X, Cosmo-SkyMed, RISAT, and others) and optical imagery (such as from MODIS, SPOT-5, Landsat, and Sentinel-2). These demonstrations covered 100x100km Technical Demonstration Sites (TDS) in phase 1 (2013-2015) and wall-to-wall monitoring (2016-2018) in Indonesia, Vietnam and Cambodia, producing satellite-based cultivated area and growing stage maps. The Asia-RiCE teams are also developing satellite-based agro-met information for rice crop outlooks, crop calendars and damage assessment in cooperation with the ASEAN Food Security Information System (AFSIS) for selected countries (currently Indonesia, Thailand, Vietnam and Japan; http://www.afsisnc.org/blog), using JAXA's Satellite-based MonItoring Network system (JASMIN, http://suzaku.eorc.jaxa.jp/cgi-bin/gcomw/jasm/jasm_top.cgi) with the University of Tokyo as a contribution to the FAO AMIS outlook. This paper describes the current status of the Asia-RiCE team activities.
We will present Digital Earth Africa’s effort to produce a cropland extent map for the African continent and other Earth observation based tools and products to address food insecurity across Africa.
Digital Earth Africa (DE Africa) operates a digital infrastructure in Africa that aims to provide free and open access to Earth observation (EO) data and services to all, and build capacity across Africa to use EO based insights to address sustainable development challenges. The range of data and services provided by DE Africa, including analysis-ready surface reflectance, surface temperature, and synthetic aperture radar time series will support many types of crop monitoring applications. Our free analysis environment allows any user to access data through an Open Data Cube API and prototype applications before scaling up to regional or continental services.
The 10m resolution cropland extent map is DE Africa’s first continental service for agriculture and is expected to serve as a basis for higher level crop monitoring and management products. We have co-developed this service with our partners across the continent, including initial consultation and validation supported by GEOGLAM. Members of the regional African geospatial institutions (RCMRD, OSS, Afrigist, AGRHYMET, and NADMO) were instrumental in defining the specifications of the product, in developing and implementing a continental scale reference data collection strategy, and assisting with iterative model building.
We have classified cropland using an annual Sentinel-2 time series and a Random Forest machine learning model. The product comes packaged with three layers: a pixel-based classification, a pixel-based cropland probability layer, and an object-based, segmentation-filtered classification. All components of the service (models, reference data, code, and results) are open source and freely available online through Digital Earth Africa's mapping and analysis platforms.
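As a minimal sketch of the pixel-based step described above (using scikit-learn as a stand-in; the feature set and training data here are illustrative assumptions, not DE Africa's actual open-source implementation):

    # Sketch: Random Forest crop/non-crop classification from per-pixel
    # annual Sentinel-2 statistics. All data below are random placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_pixels, n_features = 1000, 12   # e.g. band/NDVI annual statistics
    X = rng.random((n_pixels, n_features))
    y = rng.integers(0, 2, n_pixels)  # 1 = crop, 0 = non-crop labels

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X, y)

    # The cropland probability layer corresponds to the ensemble vote fraction;
    # thresholding it yields the pixel-based classification layer.
    crop_probability = model.predict_proba(X)[:, 1]
    crop_mask = (crop_probability >= 0.5).astype(np.uint8)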
Driven by the needs of the African community, DE Africa strives to provide better tools and services to support agriculture and food production in Africa. By early 2022, we expect to provide an operational monthly NDVI Anomaly service based on the Landsat and Sentinel-2 archives. We will start to develop methods and tools for crop type classification and will present our early results. We will also present our efforts on the collection and sharing of in situ and reference data to support application development.
Description:
How can Earth observations contribute to our understanding of tipping elements in the climate system and help with early warning of change?
Spanning research into tipping elements across the ocean, cryosphere and land domains, this session will include discussion of priorities for improving our understanding of the risk they pose and opportunities offered by remote sensing.
Panel:
Jan Verbesselt (Wageningen University)
Sebastian Bathiany (TU Munich)
Tim Lenton (University of Exeter)
Didier Swingedouw (University of Bordeaux)
Annett Bartsch (B.GEOS)
Chair:
Wendy Broadgate (Future Earth)
Company-Project:
ESA
Description:
• The Atmosphere Virtual Lab adopts the concept of Exploitation Platforms and cloud-based services. There is a strong focus on making sure that users can work with the vast amounts of satellite and ground-based data without having to download all data locally. Providing analysis environments inside cloud-based environments close to the data is an essential part of the Atmosphere Virtual Lab.
• In these sessions the Atmosphere Virtual Lab functionalities will be demonstrated. Further, use cases covering a wide selection of atmospheric science scenarios will show the data processing and visualisation capabilities of the Atmosphere Virtual Lab and highlight how it allows users to explore datasets in an interactive manner.
Company-Project:
Cropix - SARMAP
Description:
Sentinel-1 satellites are independent of cloud cover and daylight and measure regularly under constant conditions with constant geometry and energy.
The changes we measure reflect plant development.
For the operational use of products derived from satellite data in the agricultural sector, it is essential that data is continuously available. In addition to the satellite data, trained employees, software and hardware are also required for each specific application.
Ultimately the whole system depends on the availability and quality of updated information from satellite data.
We have developed indices to transform the backscatter of Sentinel-1 into a biomass index and a moisture index comparable to the NDVI and NDWI (an illustrative sketch follows at the end of this description).
Thanks to the low measurement noise, the data are particularly suitable for time series studies and change detection.
The data can be processed automatically and integrated into a monitoring system, making them directly accessible to the different stakeholders.
In the field of precision farming and crop insurance, there are interesting application possibilities to support decision making, statistical evaluations or detection of changes or anomalies.
It is time to come up with practical, ground-level solutions that are easy to understand, scalable across regions and seasons, and continuously available.
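The operational Cropix/SARMAP index definitions are not published here; the sketch below shows one plausible normalized-difference construction from Sentinel-1 VV/VH backscatter, analogous in form to the NDVI, purely as an illustration.

    # Illustrative sketch only: a normalized-difference index from Sentinel-1
    # VV and VH backscatter (linear power). Not the operational definition.
    import numpy as np

    def radar_vegetation_index(vv, vh):
        # Normalized difference of cross- and co-polarized backscatter
        vv = np.asarray(vv, dtype=float)
        vh = np.asarray(vh, dtype=float)
        return (vh - vv) / (vh + vv)

    # Example: linear backscatter values for three pixels
    vv = np.array([0.060, 0.045, 0.080])
    vh = np.array([0.012, 0.015, 0.010])
    print(radar_vegetation_index(vv, vh))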
Company-Project:
EUMETSAT/ECMWF/ESA
Description:
• The training will present the state-of-the-art in atmospheric monitoring and modelling. It aims to provide an end-to-end overview of observations, remote sensing, modelling, data assimilation and applications; and to enhance the capacity to access and analyse data. The training course also aims to foster collaboration amongst participants.
• The training is centred on Jupyter Notebook presentations and also addresses the potential to improve access to data and enable applications. The demo material will be accessible live to participants and will be freely available.
Description:
Beyond the traditional mission data access (i.e. data download), the panel would discuss the alternative possibilities envisaged by ESA for the BIOMASS and FLEX missions through the concept of the Mission Algorithm and Analysis Platform (MAAP), including:
• enabling users to discover, visualize and analyse mission data (without downloading data);
• providing a variety of data in the same coordinate reference frame to enable mission data validation;
• allowing users to modify and improve the mission data processing algorithms, including reprocessing and validation capabilities (Product Algorithm Laboratory);
• addressing intellectual property issues related to collaborative algorithm development and the sharing of data and algorithms.
In particular, the integration of contributions from the scientific community to the improvement of the core mission algorithms will be discussed.
Panelists:
• Thuy Le Toan - Centre d’Etudes Spatiales de la Biosphere - France
• Shaun Quegan - University of Sheffield - United Kingdom
• Jose Moreno - University of Valencia - Spain
• Prof. Dr. Uwe Rascher - Forschungszentrum Jülich - Germany
Description:
• The need for risk-informed adaptation action is urgent. More frequent and intense extreme weather events are affecting particularly vulnerable and remote regions in the world. Human, economic but also natural losses related to climate change are rapidly increasing. In response, initiatives to address climate-induced risks and damages are on the rise. Technical, financial, and political solutions look promising. However, current efforts tend to address adaptation as a finite goal, with little done to consider its iterative nature or long-term sustainability.
What do we need from earth observation technologies to advance adaptation? How can we make state-of-the-art tools available for those most in need to inform immediate action in adaptation and its effectiveness? This session will bring together representatives of different stakeholder groups to discuss and understand the challenges of making adaptation more sustainable and inclusive, and the role of space technologies to reach this goal. The session will be divided into three thematic blocks, addressing the following key questions:
1. How do satellite imagery and remote sensing inform iterative and sustainable adaptation and risk management?
2. What tools and initiatives are available to manage risks and what can be further done for decision-makers in the most vulnerable regions to access state-of-the-art technologies?
3. How can earth observation technologies help to localize climate action and public finance for the most vulnerable?
Format: The Agora sessions at the Living Planet Symposium are designed to allow presentations and discussions linking different fields of research and practice. The sessions will host moderated panels with 4 experts and a duration of 60 minutes, including Q&A.
The United States National Academies of Sciences, Engineering, and Medicine 2017 Earth Decadal Survey recommended a new NASA “Designated” program element to address a set of five high-value Targeted Observables during the next decadal period. In response to that recommendation, and based on guidance from NASA’s Earth Science Division, a team has been formed to perform an architecture study associated with the Surface Deformation and Change (SDC) Targeted Observable. Surface deformation measurements are critical for studies related to earthquakes, volcanoes, landslides, and changes in groundwater levels and corresponding subsidence or uplift, as well as for measuring ice sheet and glacial stability and their contributions to sea level rise, permafrost thaw, and surface change. The Decadal Survey report recognizes this criticality for its Earth Surface and Interior objectives, and also for a number of Hydrology and Climatology objectives, identifying twenty-three surface deformation-related objectives throughout the report.
While the Decadal Survey's Surface Deformation and Change Targeted Observable is focused on surface geodesy (i.e. change in position of the surface), NASA has directed that the scope of the study include some architectures that intrinsically support research- and applications-grade measurements of observables such as soil moisture, vegetation structure, disturbance, agricultural activity, wetlands processes, coastal processes, ocean processes, and sea ice hazards (e.g. icebergs and polar sea-lane variability). Based on this, the SDC architecture study will explore architectures optimized for phase-based geodetic performance as well as architectures that also support amplitude-based, radiometrically accurate imagery. A science and applications traceability matrix (SATM) has been developed for this expanded set of geophysical observables and is online for public comment.
SDC’s observational requirements for many of these objectives cover a number of performance parameters such as spatial resolution, deformation precision and repeat interval. In reaching its final recommendation on a cost-effective strategy for the SDC observable, the Decadal Survey Committee presumed that the measurement implementation will involve Synthetic Aperture Radar (SAR) and Interferometric SAR (InSAR) technologies. The Decadal Survey references the NASA-ISRO SAR (NISAR) Mission design performance of 12-day repeat interferometry, and calls for shorter repeat cycle (sub-weekly to daily), potentially at the expense of spatial resolution if necessary to stay within the recommended development cost.
The SDC Study has three main objectives: 1) Identify and characterize a diverse set of observing architectures, including innovative observing systems that can disrupt the norm for interferometric SAR observations; 2) Assess the ability of each of the architectures to meet SDC objectives, including cost effectiveness; 3) Perform sufficient in-depth design study of one selected architecture to enable initiation of a Phase A concept study. To accomplish these objectives, the study team is engaging US national expertise in Earth Science research, applications, technology, mission formulation and implementation. The team comprises NASA centers with relevant expertise and is engaging the international community, government, academia, and industry.
The SDC architecture study will examine the research and applications benefits of the data sets derived from these existing and planned systems, which may be complicated by different data access modalities for various satellites, ranging from free and open data to commercial but restricted data sets. In this study, we are working with other agencies (space agencies and data sponsors) and commercial providers to understand and quantitatively assess the ways in which this variety of data can be applied to scientific research and other applications. The study team has developed a simulation tool to quantitatively assess the performance of the existing and planned SAR constellations, which together will be considered as an observing system. This will allow research and applications community members to gain a more quantitative understanding of the critical gaps in observations from the government Programs of Record and the commercial sector that NASA must fill to meet the SDC science and application objectives.
The results generated from the simulation tool and the SATM parameters form the inputs to the value framework (VF). The VF assesses the benefits, costs, and risks of each architecture for the science and applications communities, as captured in the Decadal Survey. The needs of the applications community are documented in a study report focusing on the entire value chain of non-research, Earth observation data users. The study found that the applications community will also benefit from an interferometric SAR with ~10m resolution, global coverage, and multi-polarization data with a weekly sampling plan collected over a decadal time frame. Some community members also expressed interest in multi-band observations. A community assessment report is being developed expanding this initial study.
In summary, the SDC Study commenced in October 2018 and is planned to run for five years. In the initial two years, community needs were collected and parsed into the SATM, technology readiness and partnership opportunities were assessed, and about forty architectures were identified. These architectures are now being evaluated against the needs, and a down-selection will be conducted this year using the value framework. The remaining time will be spent on more detailed studies of the down-selected architectures, with a final down-selection and report at the end, in preparation for mission implementation. In this paper, we will describe the status of the study and how potential partners can become involved.
The National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA) are both developing future L-band SAR missions to address key science questions and application needs relevant to solid Earth, ecosystems, cryosphere, and hydrology. The NASA-ISRO SAR (NISAR) mission is a dual-frequency L-/S-band SAR satellite scheduled for launch in late 2023 and recently identified as a pathfinder for the NASA Earth System Observatory. NISAR will acquire global dual-polarimetric L-band data with 20 MHz range bandwidth every 12 days (6 days combining ascending and descending passes), delivering unprecedentedly dense time-series at L-band and new Geocoded Single Look Complex products. The Radar Observation System for Europe at L-band (ROSE-L) is one of ESA's High-Priority Candidate Missions, scheduled for launch after 2028 with the goal of augmenting the Copernicus constellation to address important information gaps and enhance existing Copernicus services and related applications. In the current design, ROSE-L is a two-spacecraft system that will operate in the Sentinel-1 orbit and be phased to achieve a repeat interval of 6 days.
Both NISAR and ROSE-L are designed to make repeated observations from a narrow orbital tube in order to generate time-series with nominal zero interferometric baselines. While this design choice has several benefits, it cannot address some of the measurements recommended by the 2017-2027 Decadal Survey for Earth Science and Applications. Two of these measurements are (1) 3D surface deformation vector and (2) vegetation vertical structure, for which long along-track and cross-track baselines, respectively, are required. NASA has been conducting dedicated studies to develop science and application traceability matrices (SATMs) as well as identify technology gaps and candidate architectures for Surface Deformation and Change (SDC) and Surface Topography and Vegetation (STV) measurements. The question is whether diverse science needs with potentially competing system requirements can be met more easily by coordinating international efforts for future SAR mission development.
This paper discusses concepts involving satellites flying in formation with missions such as NISAR and ROSE-L in order to augment their observation capabilities through denser coverage and multi-squint or multi-baseline measurements. Additional satellites spaced in time in the same orbital plane as NISAR or ROSE-L can improve the temporal sampling density of each. Receive-only co-fliers are attractive thanks to their simplified hardware architecture and to the ability to coherently combine their images without relying on tight cooperation with the mothership SAR satellite. The talk addresses challenges and opportunities of proposed free- and co-flier concepts for NISAR and ROSE-L by leveraging ESA's previous SAOCOM-CS studies and NASA's current SDC and DARTS IIP efforts in the context of on-going international collaborations between NASA and ESA.
Session: B6.01 National EO satellite missions
Building upon RADARSAT-1 heritage, RADARSAT-2 was launched in 2007 with added beam modes and polarizations that helped develop new operational applications and increased SAR data consumption within the Government of Canada by a factor of five (5). Launched in June 2019, the RADARSAT Constellation Mission (RCM) aims to ensure continuity of operational SAR imagery for RADARSAT-2 users, as well as drawing on the constellation approach to enable new applications. Now with more than one year of operations behind it, RCM is becoming the Canadian Government's premier mission for providing all-weather, day-and-night data in support of Canadian sovereignty and security, environmental monitoring, natural resources management and other government priorities such as Northern development. As a three-satellite constellation, it can cover most of Canada and its surrounding waters on a daily basis. Compared to previous RADARSAT missions, coverage increases significantly in Canada's North, for example providing coverage of the Northwest Passage three to four times daily. With the increased frequency of revisit, emerging applications such as measurement of land deformation and operational disaster management can be further exploited.
The RCM is designed to respond to core needs, which at the highest level can be summarized as:
• Daily coverage of Canada's territorial and adjacent waters for maritime surveillance, including ship detection and monitoring of ice, marine wind, and oil pollution; and,
• Monitoring of all of Canada for disaster mitigation on a regular basis (monthly to twice-weekly) to assess risks and damage-prone areas; and,
• Regular coverage of Canada's land mass and inland waters, up to several times weekly in critical periods, for resource and ecosystem monitoring.
Introduction
In order to ensure Italy's leading role in the remote sensing sector, the Italian Space Agency intends to promote, in the coming years, several technology developments in both active and passive spaceborne sensors (through the acquisition of capabilities in new frequency bands as well as the miniaturization of traditional bands) and the development of radar programs such as COSMO-SkyMed Second Generation (CSG), GEOSAR (in collaboration with the Russian Federation's space agency), PLATiNO-1 and P-Band.
To this aim, several Italian industries will sustain and develop technologies associated with PLATiNO, a mini multi-purpose standard platform whose first mission will be deployed in 2022 and which will be capable of embarking a whole range of payloads covering a wide set of programmatic sectors (such as those relating to telecommunications, Earth observation and exploration, to name a few) in the scientific or applicative fields. Similarly, several other initiatives will start in the course of this year, including the development of missions and technologies for nano-satellites of up to 25 kg.
COSMO-SkyMed Second Generation
The CSG program, started in 2009 in cooperation with the Ministry of Defense, guarantees Italy a dual national infrastructure for "all-weather, 24/7" satellite observation of the Earth. The deployment of the Second Generation of COSMO-SkyMed represents a technological leap in terms of performance and operational life of the system, and consequently gives Italy a leadership role at world level in the Earth Observation sector with SAR technologies. The first CSG satellite was launched in December 2019, while the launch of the second is scheduled for January 2022. In the next few years the constellation will be completed with the development of the third and fourth satellites.
GEOSAR
A joint Italian-Russian technical-scientific feasibility study for a geosynchronous SAR mission is currently underway. The GEOSAR system is planned to be based on the use of the geosynchronous orbit for SAR applications. This highly innovative concept makes it possible to obtain a new capability complementary to the assets currently deployed in LEO, guaranteeing continuous availability of data over selected areas and enabling particularly promising applications in the fields of monitoring and emergency management, agriculture, natural resources and meteorology.
PLATiNO-1
PLATiNO-1 will be the first mission using the multi-purpose, high-performance PLATiNO minisatellite platform. For this first mission, the Agency will develop a compact radar for both bistatic and monostatic operation, with sub-meter resolution, in order to fill the growing market segment of low-cost compact SAR instruments for future constellations. The payload capitalizes on what has been developed in Italy to date in the field of X-band SAR technology.
With a development phase started in 2017, the project is now completing phase C and its launch is planned by the end of 2022.
Duration : 30 Minutes
Company-Project:
E-GEOS S.p.A - CLEOS
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
CLEOS Developer Portal offers a user-friendly interface and a set of APIs that allow CLEOS users with development knowledge to build their own services and applications by exploiting CLEOS capabilities and data.
CLEOS Developer Portal provides a hybrid cloud platform for:
• Access to multi-source and cross-platform EO satellite data and information
• The development of micro-services for data processing, exploiting dynamic coding interfaces and collaboration tools for the reuse of building blocks (LEGO logic).
• Training, benchmarking and lifecycle management of Artificial Intelligence models and training datasets.
• The deployment and operational management of data processing pipelines through DevSecOps processes
CLEOS Developer Portal Classroom Training will provide a hands-on session to create a first processing pipeline in CLEOS. During the Classroom Training, attendees will learn more about the technology behind the CLEOS Developer Portal and will be inspired by its potential to address heterogeneous use cases requiring both batch and stream processing capabilities.
Description:
Meet the Fire Burnt Area scientist and interact with ESA's animated globe
Description:
ESA's Climate from Space web app enables users to visualise the evolution of our Earth System using timeseries produced by the European Space Agency's Climate Change Initiative. Access the tool at https://cfs.climate.esa.int
Description:
As observations become more sophisticated and models more complex, so too do the links between them. Rapidly expanding processing and data storage requirements bring new challenges for model evaluation and validation, and the need for tools, toolboxes and cross community collaboration.
This session addresses challenges and opportunities at the interface of the EO and modelling communities. ESA is taking a lead role in supporting this interface at an international level, reflecting its Member States’ leadership in climate monitoring and modelling. It hosts the coordinating office for the WCRP Coupled Model Intercomparison Project (CMIP) and is the CEOS-SIT chair.
Speakers:
•Eleanor O’Rourke (WCRP CMIP International Project Office)
Panel:
•Jörg Schulz (EUMETSAT)
•Irene Lake (Director, International Project Office for CORDEX)
•Susann Tegtmeier (University of Saskatchewan, ESMO Interim SSG Co-Chair)
•Carlo Buontempo (ECMWF)
•Axel Lauer (DLR)
•Simon Pinnock (ESA Climate Office)
Description:
The aim of this demo is to present to the audience the features and functionalities of the SNAP software for processing Sentinel-3 SLSTR data, with a focus on LST.
Company-Project:
EODC/VITO/WUR/EURAC - openEO platform
Description:
Satellite data often need preprocessing workflows to generate usable analysis-ready data (ARD), and the associated workflows are mostly very compute-intensive. openEO Platform simplifies these workflows for the user by running the processing on federated compute infrastructures. Sentinel-1 GRD and Sentinel-2 Level-1 data both require a specific processing environment such as SNAP or FORCE to create ARD.
This demo is dedicated to showcasing the connection of the client to openEO Platform and the subsequent ARD processing workflow for Sentinel-1 and Sentinel-2 data. Moreover, basic processing functionalities such as reloading results, band math and displaying ARD will be shown.
Results may be calculated and displayed through multiple clients available in Python, R and JavaScript. The demo includes the use of Python / JupyterLab as well as interaction through the online WebEditor.
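As a flavour of the client workflow, a minimal Python sketch is shown below; the back-end URL, collection and band identifiers follow openEO Platform conventions at the time of writing but should be checked against the platform catalogue.

    # Minimal openEO Python client sketch: connect, load a collection,
    # compute NDVI and download the result. Identifiers may vary per back-end.
    import openeo

    connection = openeo.connect("https://openeo.cloud")
    connection.authenticate_oidc()  # interactive OpenID Connect login

    cube = connection.load_collection(
        "SENTINEL2_L2A",
        spatial_extent={"west": 11.0, "south": 46.0, "east": 11.2, "north": 46.2},
        temporal_extent=["2021-06-01", "2021-06-30"],
        bands=["B04", "B08"],
    )
    ndvi = cube.ndvi(nir="B08", red="B04")
    ndvi.download("ndvi.nc")  # executes the process graph on the platform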
Description:
Human-induced climate change is causing dangerous and widespread disruption in nature, affecting the lives of billions of people around the world, according to the latest state of the climate report by the Intergovernmental Panel on Climate Change (IPCC) earlier this year.
While we can still work on mitigation strategies, let's hear from three exceptional humans about how they adapted to extreme weather, social and work environments, and discuss how, and whether, humans will have to do the same in the near future.
Speakers:
-Luca Parmitano, ESA astronaut
-Alessandro Di Bella, CryoSat Mission Geophysicist
-Omar di Felice, Extreme ultracyclist
Five ESA Earth Explorer (EE) research missions have been prepared, developed, launched and successfully operated over the last decade, each providing new observations and scientific insights to the science community which are revolutionising our understanding of the Earth system. Meanwhile, four new Earth Explorers, EarthCARE (EE6), Biomass (EE7), FLEX (EE8) and FORUM (EE9), are undergoing development, and a further five mission concepts (the EE10 candidate Harmony and four new EE11 candidates) are presently undergoing competitive preparatory phase study activities.
Today ESA’s Future Earth Observation (FutureEO) Programme supports science-driven Research missions motivated by the EO EUROPE 2040 Earth Observation Strategy and the EO Living Planet Challenges, with the primary objective to deliver cutting-edge Earth Explorer missions, ground-breaking Earth observation capabilities, and excellence in Earth science. The categories of user-driven Research missions are: Earth Explorers, Missions of Opportunity and Scout Missions.
With a renewed strategic commitment to provide regular opportunities for new research mission proposals, in May 2020 ESA released a new Call for proposals for Earth Explorer 11 mission ideas. Upon completion of the scientific, technical and programmatic evaluation of all proposals coming from Europe's satellite remote sensing community, in June 2021 the four Earth Explorer 11 mission candidates CAIRT, Nitrosat, SEASTAR, and WIVERN were selected by ESA Member States to undergo competitive preparatory study activities. Each of these new science-driven Earth Explorer missions employs pioneering new observing techniques made possible by novel technology, enabling European scientists to make new Earth system discoveries and to fulfil the scientific ambitions and needs of the European Earth observation user community through the next decade.
The presentation will give an insight into the background and the journey of the four Earth Explorer mission proposals from submission to the present day, and will chart the next steps the candidates will take en route to the selection of the Earth Explorer 11 mission for implementation in 2025. This talk sets the scene for the four mission presentations before placing the spotlight on each of the candidates: CAIRT, Nitrosat, SEASTAR, and WIVERN.
The nitrogen cycle has been heavily perturbed by ever-growing agriculture, industry, transport and domestic production. It is believed that we have now reached a point where the nitrogen biochemical flow has exceeded its planetary boundary for a safe operating zone. This goes together with a cascade of impacts on human health and ecosystems. To better understand and address these impacts, there is a critical need to quantify the global nitrogen cycle and monitor its perturbations on all scales, down to the urban or agricultural source. The Nitrosat concept, which was preselected recently in the framework of ESA's Earth Explorer 11 call and is entering Phase 0 activities, has as its overarching objective to simultaneously identify the emission contributions of NH3 and NO2 from farming activities, industrial complexes, transport, fires and urban areas. The specific Nitrosat science goals are to:
• quantify the emissions of NH3 and NO2 on landscape scales, to expose individual sources and characterize the temporal patterns of their emissions;
• quantify the relative contribution of agriculture, in its diversity of sectors and practices, to the total emissions of reactive nitrogen;
• quantify the contribution of reactive nitrogen to air pollution and its impact on human health;
• constrain the atmospheric dispersion and surface deposition of reactive nitrogen and its impacts on ecosystems and climate, and contribute to monitoring policy progress to reduce nitrogen deposition in Natura 2000 areas in Europe;
• reduce uncertainties in the contribution of reactive nitrogen to climate forcing, atmospheric chemistry and interactions between biogeochemical cycles.
To achieve these objectives, Nitrosat would consist of an infrared Imaging Fourier Transform Spectrometer and a Visible Imaging Pushbroom Spectrometer. These imaging spectrometers would measure NH3 and NO2 (respectively) at 500 m, which is the spatial scale required to differentiate, identify and quantify the main point and area sources in a single satellite overpass. Source regions would be probed from once a week to once a month to reveal the seasonal patterns. Combined with air quality models, assimilation and inverse modelling, these measurements would allow assessing the processes that are relevant for the human disruption of the nitrogen cycle and their resulting effects, in much more detail than what will be achieved with the satellite missions planned for the next decade. In this way, Nitrosat would enable informed evaluations of future policies on nitrogen emission control. This presentation will detail the mission concept and provide first results from the Phase 0 scientific studies and from supporting aircraft campaigns.
To improve our knowledge of the coupling of atmospheric circulation, composition and regional climate change, and to provide the urgently needed observations of the on-going changes and processes involved, we have proposed the Changing-Atmosphere Infra-Red Tomography Explorer (CAIRT), selected for Phase 0 as one of the four candidates for Earth Explorer 11. There is growing evidence that the global atmosphere is changing throughout its entire depth, from the surface to the fringes of space, due to anthropogenic emissions of greenhouse gases, pollutants and aerosol precursors, and the recovery from ozone-depleting substances. Changes in atmospheric composition are closely coupled with changes in circulation and together affect surface climate, weather and air quality. CAIRT will be the first limb-sounder with imaging Fourier-transform infrared technology in space. By observing the atmosphere simultaneously from the troposphere to the lower thermosphere (about 5 to 115 km altitude), CAIRT will provide global observations of ozone, temperature, water vapour, as well as key halogen and nitrogen compounds. Observing nitrogen oxides from the stratosphere up to the lower thermosphere will help to better constrain the coupling with the upper atmosphere, solar variability and space weather. Observation of long-lived tracers (such as N2O, CH4, SF6, CF4) will provide critical information on transport, mixing and circulation changes. CAIRT will deliver an essentially complete budget of stratospheric sulfur (by observations of OCS, SO2, and H2SO4 aerosols), as well as observations of ammonia and ammonium nitrate aerosols. Biomass burning and other pollution plumes, and their impact on ozone chemistry in the UTLS region, will be detected from observations of HCN, CO and a further wealth of volatile organic compounds. The potential to measure water vapour isotopologues will help to constrain water vapour and cloud processes and interactions at the Earth's surface. The high-resolution measurements of temperature will provide the momentum flux, phase speed and direction of atmospheric gravity waves. CAIRT thus will provide comprehensive information on the driving of the large-scale circulation by different types of waves. Tomographic retrievals will provide temperature and trace gas profiles at a much higher horizontal resolution and coverage than achieved from space so far. Flying in loose formation with the Second Generation Meteorological Operational Satellite (MetOp-SG) will enable combined retrievals with observations by the New Generation Infrared Atmospheric Sounding Interferometer (IASI-NG) and Sentinel-5, resulting in consistent atmospheric profile information from the surface up to the lower thermosphere. Our presentation will give an overview of the proposed CAIRT mission, the science to be addressed and first results from the ongoing Phase 0 science study and campaign activities.
The proposed EE11 mission WIVERN (WInd VElocity Radar Nephoscope) would fly a conically scanning 94GHz Doppler radar in a 500 km orbit with a swath width of 800km, providing global in-cloud line-of-sight winds from the Doppler shift of the radar returns, together with estimates of cloud properties and precipitation from the reflectivity, both with a vertical resolution of about 650m. The successful Aeolus mission has demonstrated that assimilating Doppler line-of-sight lidar winds from the clear sky and from cloud tops leads to a significant reduction in forecast error, so we envisage that assimilating the in-cloud winds from WIVERN would lead to further improvement in the forecasts. The WIVERN radar footprint on the ground is about 1km2 in size; if the antenna scans with a period of 5 seconds, during which time the satellite moves 35km along track, the radar footprint traces out a cycloid on the surface of the Earth and visits every 30 by 30km box at latitudes below 80deg on average once a day. These winds would be representative of the large-scale flow and be suitable for data assimilation. Changes in line-of-sight winds on the km scale should provide a statistical measure of the vertical convective motions, useful for validating the representation of convection in NWP and climate models. The global profiles of reflectivity should provide information on ice water content and precipitation rates.
Doppler observations from a moving satellite are challenging. The Doppler shift is usually detected by the “pulse-pair” technique, that is to say, the phase change in the radar return between two successive transmitted pulses. A 94GHz frequency is necessary to have a 1km-sized footprint on the ground. The wavelength is only 3.2mm, so a phase change of +/-180deg will be detected if the particle moves 800um in the time between the two pulses; to detect a maximum unambiguous line-of-sight velocity of 40m/s, the pulse separation must therefore be just 20us, or only 3km in range. To avoid the problem of identifying the returns from two pulses with 3km separation, we adopt the “polarisation diversity pulse pair” (PDPP) approach, whereby one pulse is H polarised and the other V polarised. The pulses are effectively “labelled” H and V and, provided there is no cross-talk between the H and V pulses on transmission, reflection and reception, the high line-of-sight winds encountered in the atmosphere can be reliably measured. For the highest wind speeds there will be a single fold, but experience with ground-based radars shows that there are reliable techniques to unfold a single fold. The PDPP technique with 94GHz radars has been implemented on an aircraft and on the ground. Extensive observations with these two systems confirm the accuracy of the technique. This performance has also been confirmed with a comprehensive end-to-end WIVERN simulator that has been developed.
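The pulse-pair numbers quoted above can be checked with a few lines of arithmetic (a sketch, reproducing only figures stated in the text):

    # Worked check of the pulse-pair arithmetic above. Between two pulses,
    # the Doppler phase change is dphi = 4*pi*v*T / wavelength; setting
    # dphi = pi gives the maximum unambiguous line-of-sight velocity.
    wavelength = 3.2e-3       # m (94GHz)
    pulse_separation = 20e-6  # s

    v_max = wavelength / (4.0 * pulse_separation)   # 40 m/s
    displacement_at_fold = wavelength / 4.0         # 800 um of radial motion

    print(f"Max unambiguous velocity: {v_max:.0f} m/s")
    print(f"Radial motion at +/-180deg phase change: {displacement_at_fold * 1e6:.0f} um")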
Using the well-established Doppler theory and its validation against wind observations with the new PDPP technique, we can predict that, for WIVERN, the precision of the wind should be better than 2 m/s for reflectivities of about -20dBZ and the 20km horizontal integration needed to acquire winds representative of the large-scale flow. Analysis of the global climatology of reflectivity derived from CloudSat indicates that WIVERN should acquire over one million line-of-sight winds per day.
Small-scale (below 10km) dynamical features at the surface of the ocean are ubiquitous and play an important role in vertical exchanges of heat, gases and freshwater between the surface layer and the ocean interior, horizontal transport and dispersion pathways and interactions between the atmosphere, ocean, land and cryosphere. By modulating the transport of nutrients, they also impact significantly on marine biogeochemistry, the growth of phytoplankton and the marine food chain. The characteristic swirls, filaments and eddies are seen frequently in high resolution images of sea surface temperature and ocean colour. However, direct measurements of dynamics at such short scales (known as ocean submesoscales) are very scarce, and no existing satellite can measure these scales with the resolution and precision that would be needed for their full characterisation.
Synoptic 2D observations of dynamics at these scales are needed to improve knowledge of the relevant ocean surface processes and benefit ocean and climate modelling. Numerical models predict that ocean dynamics change dramatically around 1-10km scales, with enhanced air-sea coupling through wind/wave/current interactions and enhanced upper ocean mixing and vertical transport. Observations of submesoscale dynamics would improve the representation of these phenomena in models and improve forecasts and projections at multiple spatial and temporal scales, including basin-wide and climate scales. The need for new high-resolution observations is particularly critical for total surface current vectors and wind vectors in coastal and shelf seas and marginal ice zones (MIZs). New satellite data are urgently needed to validate and improve numerical models of ocean and atmospheric circulation, ocean waves and sea ice that currently ignore or misrepresent these small-scale phenomena.
SEASTAR is an Earth Explorer 11 candidate mission that addresses the scientific need for observing small-scale ocean dynamics by proposing to measure, for the first time, 2-D fields of total surface current vectors and wind vectors at 1 km resolution over a wide swath with high accuracy. A key objective of SEASTAR is to characterise, for the first time, the magnitude, spatial characteristics, regional extent and temporal variability of these small-scale dynamics on daily, seasonal to multi-annual time scales, over all coastal seas, shelf seas and MIZs. As such, SEASTAR should make it possible to investigate the relations between small-scale dynamics, air-sea interactions, vertical processes and marine productivity, using synergy with high-resolution satellite data from optical, thermal and microwave sensors. Most importantly, new observations from SEASTAR would enable the validation of high-resolution and coupled models and support the development of new parameterisations to improve operational forecasts and reduce uncertainties in climate projections.
SEASTAR is currently being investigated under EE11 Phase 0 science and industrial activities. SEASTAR is an along-track SAR interferometer that observes the motion of the ocean surface by exploiting the Doppler shift between two SAR images of the same scene taken within a short time lag. Its three-beam configuration includes two pairs of interferometric antennas looking 45° forward and 45° backward of broadside, and a dual-polarisation broadside beam that provides a third look in azimuth.
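For illustration, the basic along-track interferometry (ATI) relation that underpins this measurement maps interferometric phase to line-of-sight surface velocity. A minimal sketch, assuming the standard two-way phase convention; the wavelength and time-lag values are purely illustrative, not SEASTAR design parameters:

```python
import numpy as np

wavelength = 0.055   # radar wavelength, m; illustrative, not the SEASTAR value
time_lag = 3.5e-3    # time lag between the two SAR images, s; illustrative

def los_velocity(interferometric_phase_rad):
    """Line-of-sight surface velocity implied by the ATI phase."""
    return wavelength * interferometric_phase_rad / (4 * np.pi * time_lag)

print(f"{los_velocity(np.pi / 8):.2f} m/s for a pi/8 interferometric phase")
```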
The presentation will outline the key elements of the mission and the latest status of the mission concept evolution, with the technical solutions and trade-offs that are being considered. The talk will also report on ESA-funded science activities to consolidate user requirements and the current assessment of the Scientific Readiness Level of the mission. Other planned activities, including flight campaigns with the OSCAR airborne demonstrator and numerical end-to-end simulator developments, are expected to provide a comprehensive assessment of the suitability and benefits of the mission in preparation for the selection by ACEO and ESA in late 2023 of two of the four EE11 candidates to enter Phase A.
The Carbon Strategy Report of the Committee on Earth Observation Satellites (CEOS, 2014) identified a number of pools and fluxes of carbon in the ocean that are amenable to remote sensing. In ESA’s Biological Pump and Carbon Exchange Processes (BICEP) project, we have been investigating satellite methods to map marine primary production, phytoplankton carbon, particulate organic carbon, particulate inorganic matter and dissolved organic carbon. Time series of each of these products at 9 km, monthly resolution are being generated. The main inputs to the calculations are the ocean-colour fields generated by the Ocean Colour Climate Change Initiative (OC-CCI). These are supplemented by fields of photosynthetically available radiation at the surface of the ocean, sea-surface temperature, and sea-surface salinity (from CCI). For most of the products, the time series extends from 1998 to 2020, unless limited by availability of input data.
The primary production computations (Kulk et al. 2020, 2021) rely on an extensive in situ database of photosynthesis-irradiance parameters. The same parameter set is used, along with a photo-acclimation model, to compute phytoplankton carbon, ensuring that the allocation of resource (light) between production of carbon and chlorophyll is treated in an internally consistent manner. Various algorithms available for the calculation of particulate organic carbon have been compared, and one of the better-performing algorithms was selected for generation of the time series. The algorithm for mapping dissolved organic matter is a novel one that makes use of machine-learning tools.
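As an illustration of the kind of photosynthesis-irradiance formulation such computations build on, here is a minimal sketch of a standard saturating P-I curve; the parameter values are illustrative placeholders, not BICEP values:

```python
import numpy as np

def pi_curve(E, pbm=5.0, alpha=0.08):
    """Chlorophyll-normalised production at irradiance E.
    pbm:   assimilation number, mg C (mg chl)^-1 h^-1 (placeholder)
    alpha: initial slope of the P-I curve (placeholder)"""
    return pbm * (1.0 - np.exp(-alpha * E / pbm))

E = np.linspace(0.0, 500.0, 6)     # irradiance levels (illustrative units)
chl = 0.3                          # chlorophyll concentration, mg m^-3
production = chl * pi_curve(E)     # volumetric production, mg C m^-3 h^-1
```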
In situ databases have been created for validation and comparison of products, and for generation of uncertainty estimates. Algorithms for the estimation of biological export production have also been implemented. The next major activity in the project is user consultation, to which end an online workshop on “Ocean Carbon from Space” is being organised on 14-18 February 2022, with international collaboration and participation.
References:
CEOS (2014) CEOS Strategy for Carbon Observations from Space. The Committee on Earth Observation Satellites (CEOS) Response to the Group on Earth Observations (GEO) Carbon Strategy. Issue date: September 30 2014. Printed in Japan by JAXA and I&A Corporation
Kulk G, Platt T, Dingle J, Jackson T, Jönsson B, Bouman HA, Babin M, Doblin M, Estrada M, Figueiras FG, Furuya K, González N, Gudfinnsson HG, Gudmundsson K, Huang B, Isada T, Kovac Z, Lutz VA, Marañón E, Raman M, Richardson K, Rozema PD, Van de Poll WH, Segura V, Tilstone GH, Uitz J, van Dongen-Vogels V, Yoshikawa T, Sathyendranath S (2020). Primary production, an index of climate change in the ocean: Satellite-based estimates over two decades. Remote Sensing 12:826; doi:10.3390/rs12050826
Kulk G, Platt T, Dingle J, Jackson T, Jönsson B, Bouman HA, Babin M, Doblin M, Estrada M, Figueiras FG, Furuya K, González N, Gudfinnsson HG, Gudmundsson K, Huang B, Isada T, Kovac Z, Lutz VA, Marañón E, Raman M, Richardson K, Rozema PD, Van de Poll WH, Segura V, Tilstone GH, Uitz J, van Dongen-Vogels V, Yoshikawa T, Sathyendranath S (2021). Correction: Kulk et al. Primary Production, an Index of Climate Change in the Ocean: Satellite-Based Estimates over Two Decades. Remote Sensing 13:3462; doi:10.3390/rs13173462
While satellite remote sensing has revolutionized our understanding of global marine systems by providing synoptic and repeated global observations of phytoplankton stocks and rates of primary production, our present ability to quantify the export and fate of ocean net primary production (NPP) from satellite observations, or to predict future fates using Earth system models, is limited. To address this shortcoming, NASA, in partnership with the National Science Foundation, funded the Export Processes in the Ocean from RemoTe Sensing (EXPORTS) field campaign. By linking the state of the surface ecosystem, which is quantifiable through satellite remote sensing, with the fates of ocean NPP, EXPORTS has the objective of developing a predictive understanding of the export and fate of global ocean primary production and its implications for the Earth’s carbon cycle in present and future climates.
EXPORTS successfully completed two comprehensive field campaigns targeting two ecological end members: the North Pacific Ocean (2018), which represented a low-energy, biogeochemically homogeneous ecosystem, and the North Atlantic Ocean (2021), which represented a high-energy system with high ecosystem complexity. Both field campaigns involved global- and ocean-class research vessels and numerous autonomous platforms operating in a tightly coordinated fashion over a period of several months. Sampling of ecological and biogeochemical stocks, rates and fluxes was coupled with analyses of field and remote sensing data in near real time where possible, to allow for adaptive sampling targeting critical export processes and unique opportunities.
This presentation will include early results from EXPORTS as it pursues its goals of characterizing oceanic carbon cycling processes and developing a predictive understanding of the fates of global NPP and their roles in the carbon cycle.
The flux of carbon dioxide (CO2) between the ocean and atmosphere is a crucial component of the carbon cycle and global climate. Satellite observations offer great potential for understanding and estimating air/sea CO2 fluxes over large spatial scales. The air/sea CO2 flux is often estimated from the concentration difference between air and sea (ΔC, strongly influenced by sea surface temperature and the cool skin effect), and the gas transfer velocity (K, typically parameterized with wind speed). Direct measurements of the in situ air/sea flux of CO2 by the eddy covariance technique can be coupled with measurements of ΔC to: i) estimate K and improve its parameterization with satellite-retrieved data products; and ii) provide an independent validation of satellite estimates of the air/sea CO2 flux.
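To make the bulk-flux form concrete, here is a minimal sketch of F = K·ΔC with the gas transfer velocity parameterized by wind speed; the quadratic Wanninkhof (2014) form is used only as an example, and the solubility and Schmidt-number values are placeholders:

```python
def gas_transfer_velocity(u10_m_s, schmidt=660.0):
    """Gas transfer velocity in cm/h from 10 m wind speed (example
    quadratic parameterization; coefficient 0.251 after Wanninkhof 2014)."""
    k660 = 0.251 * u10_m_s**2
    return k660 * (schmidt / 660.0) ** -0.5

def co2_flux(u10_m_s, dfco2_uatm, solubility_mol_l_atm=0.03):
    """Air/sea CO2 flux in mol m^-2 s^-1; positive = outgassing.
    The solubility value is a rough placeholder for ~20 degC seawater."""
    k = gas_transfer_velocity(u10_m_s) / 100.0 / 3600.0     # cm/h -> m/s
    dc = solubility_mol_l_atm * 1000.0 * dfco2_uatm * 1e-6  # mol m^-3
    return k * dc

print(co2_flux(u10_m_s=8.0, dfco2_uatm=-40.0))   # a modest CO2 sink
```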
Funding from the European Space Agency and the UK Natural Environment Research Council has enabled collection of an unprecedented dataset of CO2 flux observations from UK research ships. Measurements have been made in the Southern Ocean, the Arctic Ocean and along extensive North-South transects in the Atlantic Ocean. A comprehensive assessment of the uncertainties in the eddy covariance flux observations has demonstrated that the technique is accurate in the mean and has identified that precise flux estimates (signal:noise ratio >3) can be obtained when data are averaged for between 1 and 3 hours. The optimal averaging time is a function of the ΔC, with shorter averaging times needed when the ΔC is larger.
The database of CO2 fluxes and air-sea concentration differences has been used to investigate the CO2-specific gas transfer velocity in a range of environmental conditions. The CO2 fluxes and gas transfer velocities have helped to illuminate the potential controls on air/sea gas transfer, including the possible role of surfactants in affecting gas transfer across the air-sea interface in the Southern Ocean. Data from the Atlantic Ocean suggest that radar scattering by waves may offer an improvement over wind-speed-based parameterizations of gas transfer. The eddy covariance flux data have also been used to identify the potential for bias in flux estimates due to near-surface stratification in the Arctic, which is not captured when seawater concentrations are measured ~6 m below the sea surface. Near-surface stratification is also potentially important in tropical environments, which often experience strong solar insolation and light winds.
Finally, the potential for direct eddy covariance observations to be used to validate satellite-based air/sea CO2 flux estimates will be discussed, along with plans to develop future air/sea CO2 flux systems and Fiducial Reference Materials (FRM).
The North Brazil Current (NBC) flows northward across the Equator, passes the mouth of the Amazon River, and sheds large oceanic eddies, called North Brazil Current rings, from its retroflection near 8°N. In this work, we take advantage of an unprecedented set of in-situ observations in combination with satellite-based measurements to investigate the processes that drive the variability of the air-sea CO2 fluxes in the western tropical Atlantic Ocean. The in-situ data originate from three research ships operating in winter 2020 during the EUREC4A-OA/ATOMIC campaign and from the Tara sailing vessel operating in summer 2021 during the Microbiome campaign. Through multivariable regression, we determine predictors of the fugacity of CO2 (fCO2) from salinity, temperature and chlorophyll-a. Applying the predictors to satellite-based maps of salinity, chlorophyll-a and temperature, we create high-resolution fCO2 maps that clearly highlight the contrasting properties of the region and seasons.
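As a schematic of the regression-and-mapping step, a hedged sketch follows; the predictor choice (salinity, temperature, log chlorophyll-a) mirrors the text, but the arrays, the log transform and the use of scikit-learn are illustrative assumptions, not the study's actual code:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Placeholder arrays standing in for the ship-based campaign data
ship_sss, ship_sst, ship_chl, ship_fco2 = np.random.rand(4, 200)

X = np.column_stack([ship_sss, ship_sst, np.log10(ship_chl + 1e-6)])
model = LinearRegression().fit(X, ship_fco2)

# Apply the fitted predictors to (flattened) satellite fields
sat_sss, sat_sst, sat_chl = np.random.rand(3, 50_000)
X_sat = np.column_stack([sat_sss, sat_sst, np.log10(sat_chl + 1e-6)])
fco2_map = model.predict(X_sat)   # reshape to the satellite grid as needed
```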
In February 2020, the area is a CO2 sink (-1.7 TgC month-1), previously underestimated by a factor of 10. The NBC rings transport saline, high-fCO2 water indicative of their equatorial origins and are a small source of CO2 at the regional scale. Their main impact on the variability of biogeochemical parameters is through the filaments they entrain into the open ocean. During the winter campaign, a nutrient-rich freshwater plume from the Amazon River was entrained from the shelf up to 12°N and caused a phytoplankton bloom leading to a significant carbon drawdown (~20% of the total sink). On the other hand, saltier filaments of shelf water rich in detrital material act as strong local sources of CO2. The spatial distribution of fCO2 is therefore strongly influenced by ocean dynamics south of 12°N. The less variable North Atlantic subtropical waters extend from Barbados northward; they represent ~60% of the total sink owing to their lower temperature associated with winter cooling and strong winds.
In August-September 2021, the Amazon River plume influences most of the north-western tropical Atlantic Ocean. The dynamics of the plume are complex, driven by the NBC retroflection and ring formation. In response, a large variability of surface salinity and of the air-sea CO2 flux is observed. In the core of the plume, a phytoplankton bloom drives an important CO2 sink, while old remnants of the plume located to the northwest present a dampened signal, influenced by both surface salinity and temperature.
Coloured dissolved organic matter (CDOM) in marine environments affects primary production through its absorption of photosynthetically active radiation. In coastal seas, CDOM originates predominantly from terrestrial sources and causes spatially and temporally varying patterns of light absorption, which should be considered in marine biogeochemical models. We propose a model approach in which Earth Observation (EO) products are used to define boundary conditions of CDOM concentrations in an ecosystem model of the Baltic Sea: CDOM concentrations in riverine water derived from EO products serve as forcing for the ecosystem model. For this purpose, we introduced an explicit CDOM state variable into the ecosystem model.
Deriving a good quantitative estimate of CDOM absorption from satellite measurements under high-CDOM conditions, as in the Baltic Sea, is a challenging task. The water is almost completely absorbing in the visible short-wavelength bands and, overall, the signal level is very low. Atmospheric correction requires a good model of the water-leaving reflectance as a lower boundary condition. We therefore carefully characterised the water using historical and our own in-situ measurements and derived a new bio-optical model for Baltic Sea water. This was used in an innovative approach combining a Polymer-like inversion of the atmospheric signal with a C2RCC forward model for the water reflectance. The approach was applied to Sentinel-3 and Sentinel-2 data and validated with in-situ reflectance and water-constituent concentration measurements.
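For reference, CDOM absorption in bio-optical models of this kind is commonly described by an exponential spectral slope; a minimal sketch with illustrative parameter values (not the values of the new Baltic model):

```python
import numpy as np

def a_cdom(wavelength_nm, a_ref=1.0, ref_nm=443.0, slope=0.018):
    """CDOM absorption (m^-1): a(wl) = a(wl_ref) * exp(-S * (wl - wl_ref)).
    a_ref and the spectral slope S (nm^-1) are placeholders."""
    return a_ref * np.exp(-slope * (wavelength_nm - ref_nm))

print(a_cdom(np.array([412.0, 443.0, 560.0])))   # absorption at three bands
```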
We show that the light absorption by CDOM in the ecosystem model can be improved considerably in comparison with approaches where CDOM is estimated from salinity. The model performance increases especially with respect to spatial CDOM patterns, owing to the consideration of individual river properties. The introduction of high-quality CDOM data with sufficiently high spatial resolution, provided by the new generation of ESA satellite sensor systems (Sentinel-2 MSI and Sentinel-3 OLCI), has proven to improve the results substantially. Such data are essential, especially where local differences in riverine CDOM concentrations exist.
This work was carried out under the ESA EO Science for Society study “BALTIC+ Sea-Land biogeochemical linkages (SeaLaBio)”.
Oceanic subtropical gyres play a critical role in the global carbon budget due to their immense size, even though they are characterized by oligotrophic waters. In recent decades, the North Atlantic Subtropical Gyre (NASTG) has experienced the fastest expansion of oligotrophic waters worldwide in response to ocean warming. Here, we study the trophic regime changes in the NASTG using 21 years (1998-2018) of satellite chlorophyll-a (CHL) data, complemented with other variables such as sea surface temperature (SST), the optical backscattering coefficient, Secchi disk depth and mixed layer depth (MLD). To this aim, we describe the spatial and temporal variability of the least productive waters, coupled with an inter-annual variability analysis of key environmental variables. In the last 21 years, the ultra-oligotrophic waters (defined as CHL ≤ 0.04 mg m-3, differing from previous literature limits of 0.07 mg m-3 or 0.1 mg m-3) have expanded in space and increased in time, with an area growth rate of around 96.34% and an increase in the average number of months per year of 53.76% with respect to the beginning of the time series. This expansion and prevalence is found to be concurrent with a continuous increase of SST of more than 0.5°C in the area thus detected, but also associated with a deepening of the MLD. These observations point to driving factors that are more complex than the generally hypothesized local increase in stratification and reduction of vertical nutrient flux. Future work along this path may include: (i) combining satellite data with robotic autonomous platforms (e.g. Biogeochemical-Argo floats) to better understand how ocean warming impacts the trophic regime and the vertical distribution of phytoplankton (e.g. biomass, physiology), using both environmental (e.g. temperature, salinity) and physical variables (e.g. horizontal advection); and (ii) studying the inter-annual variability of net primary production and carbon fluxes to quantify the impact of the observed desertification on ocean biogeochemistry (i.e., the biological carbon pump).
Current state-of-the-art estimates of emissions from vegetation fires are mainly derived from burned area datasets produced from medium-resolution satellite sensors, and such burned area datasets under-detect small fires. Hence, fire carbon emissions could be much larger than previously estimated. This is critical because medium-resolution burned area datasets are frequently used to evaluate, improve and calibrate global fire-enabled vegetation models. Calibrating global vegetation-fire models with biased estimates of burned area might hence result in an underestimation of the role of fires in global vegetation dynamics and the carbon cycle. Furthermore, knowledge about fire carbon emissions mostly relies on satellite observations of burned area that are combined with simulations from ecosystem models of fuel loads (biomass) and combustion. Alternative approaches to estimating fire emissions make use of observations of fire radiative power or fire radiative energy as a proxy for fire emissions. However, both approaches make little use of information about fire type or aspects of fire behaviour related to smouldering and flaming combustion.
These limitations in the current data basis for fire emissions demonstrate the need to explore the information from the Sentinels. The Sentinel-5p TROPOspheric Monitoring Instrument provides several observations related to fire emissions, such as the absorbing aerosol index, aerosol layer height, nitrogen dioxide (NO2), carbon monoxide (CO) and formaldehyde. The Sentinel-3 Sea and Land Surface Temperature Radiometer allows the mapping of active fires and fire radiative power. The Sentinel-3 Ocean and Land Colour Instrument allows the mapping of fire-induced land cover changes (e.g. burned area, fire severity) at medium resolution and the retrieval of pre- and post-fire vegetation properties such as leaf area index or fractional vegetation cover. The Sentinel-2 Multispectral Instrument allows the mapping of fire-induced land cover changes at a higher spatial resolution (10-20 m). The Sentinel-1 C-band Synthetic Aperture Radar allows the estimation of surface soil moisture, which can serve as a proxy for the moisture content of surface fuels. Based on the complementary information from the Sentinels, we are currently developing a series of products to better characterise fuel conditions, fire behaviour and fire emissions at high spatial resolution and for individual fires.
Vegetation fuel loads and combustion completeness are estimated using a novel fuel data integration framework, in which we combine surface reflectance and vegetation information from Sentinel-3, Sentinel-2 and Proba-V; land cover and above-ground biomass from the ESA Climate Change Initiative and from the Copernicus Land Service; vegetation optical depth; and soil moisture from Sentinel-1 and Metop/ASCAT. The approach uses an empirical allometry model to estimate the fuel loads of different biomass compartments of trees and herbaceous vegetation, using total above-ground biomass and leaf area index as input. Additionally, surface fuels are estimated by combining land cover, leaf area index and above-ground biomass with databases of ground observations. This provides the fuel-load term for bottom-up estimates of fire emissions, as sketched below.
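For orientation, bottom-up fire emissions of this kind follow the classical Seiler-and-Crutzen bookkeeping: emitted mass = burned area × fuel load × combustion completeness × emission factor. A minimal sketch with placeholder numbers (not Sense4Fire values):

```python
def fire_emissions_kg(burned_area_m2, fuel_load_kg_m2,
                      combustion_completeness, emission_factor_g_kg):
    """Emitted mass (kg) of one trace species for one fire."""
    dry_matter_burned_kg = (burned_area_m2 * fuel_load_kg_m2
                            * combustion_completeness)
    return dry_matter_burned_kg * emission_factor_g_kg / 1000.0

# e.g. a 1 km^2 fire, 1.2 kg/m^2 fuel, 40% combusted, CO emission factor 100 g/kg
print(fire_emissions_kg(1e6, 1.2, 0.4, 100.0), "kg of CO")
```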
Fire behaviour and burned area are quantified using a novel mapping of individual fires based on thermal anomalies and the diurnal fire cycle from Sentinel-3, and on burned area estimates from Sentinel-2. Additionally, the morning (10 am) and evening (10 pm) overpasses from Sentinel-3/SLSTR are combined with mid-afternoon (1:30 pm) and night-time (1:30 am) overpasses from VIIRS to track individual wildfires as they evolve. The resulting maps track fire behaviour, type and size, and will enable direct estimates of fire emissions based on estimates of fuel loads and combustion completeness from a combination of modelling and top-down constraints.
Fire effects on atmospheric composition are quantified by contrasting observations of trace gases (CO, NO2, formaldehyde) and aerosols from Sentinel-5p with model results from the Copernicus Atmosphere Monitoring Service (CAMS) modelling system. A model for atmospheric composition is necessary to be able to evaluate estimated emissions against satellite observations of atmospheric composition, i.e. to provide top-down constraints on fire emission estimates. We compare aerosol plume altitude against retrievals of aerosol layer height, to constrain plume dynamics, as well as observed and modelled CO and NO2 to constrain their emissions. The analysis helps to quantify the magnitude and uncertainties from top-down fire emission estimates, and will lead to improvements in the parametrisation of the model fire plume dynamics.
The integration of these developments based on Sentinel-1, -2, -3 and -5p will enable us to estimate fire emissions at high spatial resolution and in the long-term to provide estimates of emissions from individual fires. This information will be used in the future to constrain global fire models and hence to advance the understanding of the role of fires in the global carbon cycle.
We acknowledge the European Space Agency for funding the Sense4Fire (sense4fire.eu) project.
We report on the development and application of the new DALEC & BETHY (D&B) model within ESA's Land surface Carbon Constellation (https://lcc.inversion-lab.com) study. This new community model is designed to simulate a range of satellite and in-situ observations through dedicated observation operators, which allow the simulation (and thus the assimilation) of solar-induced fluorescence, fraction of absorbed photosynthetically active radiation, vegetation optical depth from active and passive sensors, and surface-layer soil moisture. To our knowledge, the D&B assimilation framework will be the first to combine such a large and diverse array of observational constraints while moving beyond site scale to regional applications. D&B builds on the strengths of each component model: it combines the dynamic simulation of carbon pools and canopy phenology of DALEC with the dynamic simulation of water pools and the canopy model of photosynthesis and energy balance of BETHY. The model uses an hourly time step, except for the water balance, which is (currently) simulated at a daily time step. We present an evaluation of the model performance against a range of in-situ observations at two well-instrumented sites at which field campaigns are being carried out: (1) Sodankylä, Finland, located in a boreal evergreen needleleaf forest biome, and (2) Majadas de Tietar, Spain, located in a temperate savanna biome. The model performance will also be assessed against a range of satellite observations for approximately 500 km x 500 km regions around each site. The model is embedded in a variational assimilation system that adjusts a combination of initial pool sizes and process parameters to match the observational data streams; for this purpose the D&B assimilation system is provided with efficient tangent-linear and adjoint code. We will show initial data assimilation experiments at site scale.
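To indicate the shape of the minimisation problem, here is a schematic of a variational cost function of the kind such a system evaluates (a background term plus one term per assimilated data stream); the operators and error covariances are placeholders, and the real system computes the gradient with the tangent-linear and adjoint code mentioned above:

```python
import numpy as np

def cost(x, x_prior, B_inv, obs_streams):
    """Variational cost J(x). obs_streams is a list of (H, y, R_inv) tuples,
    where H maps the control vector (process parameters, initial pool sizes)
    to simulated observations (SIF, FAPAR, VOD, soil moisture, ...)."""
    dx = x - x_prior
    j = 0.5 * dx @ B_inv @ dx                 # background (prior) term
    for H, y, R_inv in obs_streams:
        r = H(x) - y                          # residual for this data stream
        j += 0.5 * r @ R_inv @ r
    return j
```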
At the canopy level, gross primary productivity (GPP) represents the major global carbon uptake from the atmosphere. GPP can be derived at site level by partitioning direct observations of net ecosystem exchange made by eddy covariance. For a global picture, these sparse measurements can be upscaled with machine-learning techniques to give globally dense estimates. However, these estimates hinge on having the right predictors available. While vegetation indices such as NDVI or EVI are valuable predictors, providing an indication of overall vegetation greenness, solar-induced fluorescence (SIF) has been shown to relate better to photosynthetic activity. However, time series of SIF are still very short, their spatial resolution is generally much coarser than desired, and the signal-to-noise ratio is lower than that of reflectance-based indices. With the Copernicus programme and the fleet of Sentinel satellites, there is an opportunity to improve the latter two. Together, Sentinel-2 (MSI), Sentinel-3 (both OLCI and SLSTR) and Sentinel-5P (TROPOMI) measure a variety of indices at complementary spatial, temporal and spectral resolutions. While TROPOMI estimates SIF, which is directly linked to photosynthesis, with daily quasi-global coverage, the Sentinel-2 and Sentinel-3 instruments have much higher spatial resolution but coarser temporal resolution, and provide information about heat stress, greenness, chlorophyll concentration and landscape heterogeneity. The objective of this work is to capitalize synergistically on these complementary characteristics to yield enhanced GPP estimates at 1 km spatial resolution. Our derived high-resolution SIF is benchmarked against flux-tower GPP across selected areas with diverse ecosystems within the Sen4GPP project. We aggregate Sentinel-2 data and downscale TROPOSIF to a common 1 km grid matching Sentinel-3 SLSTR; from the aggregated Sentinel-2 data we derive a heterogeneity index. For downscaling, we explore a variety of methods ranging from statistical (including machine learning) to more explainable approaches based on process understanding. Distributing the coarse-resolution SIF signal depends on investigating and understanding the dependencies and sensitivities of SIF to environmental and remote-sensing indices (generally from Sentinel-2 and Sentinel-3). This allows us to present a spatially resolved GPP estimate at 1 km scale across Europe, based on the synergistic use of the Sentinel (2, 3, 5P) satellite measurements.
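One plausible form of the statistical downscaling route mentioned above is to learn the coarse-scale relation between SIF and the higher-resolution predictors and then apply it on the 1 km grid; the sketch below uses a random forest purely as an example, with placeholder arrays instead of the real gridded data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Placeholders: predictors (e.g. greenness, LST, chlorophyll, heterogeneity)
# and TROPOMI SIF aggregated to the coarse grid
coarse_predictors = np.random.rand(5_000, 4)
coarse_sif = np.random.rand(5_000)

model = RandomForestRegressor(n_estimators=100).fit(coarse_predictors, coarse_sif)

# The same predictors sampled on the 1 km target grid (flattened placeholder)
fine_predictors = np.random.rand(200_000, 4)
sif_1km = model.predict(fine_predictors)
```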
Primary production is the main driving factor of the terrestrial carbon cycle. Knowing how much atmospheric carbon dioxide is converted into biomass is critical information for scientists and policy makers. In this context, Earth Observation (EO) is an essential technology for quantifying Gross and Net Primary Production (GPP and NPP). While EO-based primary production quantification has a multi-decadal history, the scientific and technical foundations for EO-based GPP and NPP have progressed strongly in recent years. Enhanced inputs are available, pre-processing techniques have improved, a new generation of state-of-the-art GPP models is emerging, in-situ data are becoming operationally available, and cloud processing platforms are capable of dealing with large datasets.
Available operational GPP/NPP products currently focus on medium-resolution grids (e.g. Copernicus DMP at 300 m and MODIS GPP/NPP at 250 m). There is, however, a clear demand for high-resolution GPP/NPP at larger scales, for example to support LULUCF (Land Use, Land-Use Change and Forestry) reporting. With the launch of the High Resolution Vegetation Phenology and Productivity (HR-VPP) service, Sentinel-2 fAPAR data (a critical input for Light Use Efficiency (LUE) GPP models) are now operationally available over the European continent. This opens up the possibility of exploring and prototyping high-resolution GPP/NPP products at larger scales. Within this context, the operational Copernicus DMP model has been applied to HR-VPP data at site level, and test products have been generated on a tile basis. The accuracy of the products has been assessed using the in-situ data available through the ICOS network. This work yielded a number of insights into high-resolution GPP/NPP modelling, including the requirements for enhanced pre-processing techniques, ancillary data, model structure and in-situ data. The lessons learned from this work will be presented and complemented with technical and scientific conclusions from the TerrA-P project, in which Sentinel-3 data were exploited for GPP/NPP modelling at European to global scales.
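For context, a light-use-efficiency GPP model of the kind driven by fAPAR has the generic form GPP = ε_max · f(T) · f(VPD) · fAPAR · PAR; a minimal sketch with illustrative values (not the Copernicus DMP parameterization):

```python
def gpp_lue(par_mj_m2_d, fapar, t_scalar, vpd_scalar, eps_max=2.5):
    """GPP (g C m^-2 d^-1) from a generic LUE formulation.
    eps_max (g C per MJ of absorbed PAR) and the stress scalars
    f(T), f(VPD) in [0, 1] are placeholders."""
    return eps_max * t_scalar * vpd_scalar * fapar * par_mj_m2_d

print(gpp_lue(par_mj_m2_d=8.0, fapar=0.7, t_scalar=0.9, vpd_scalar=0.8))
```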
The integration of global land surface remote sensing and in-situ measured ecosystem carbon fluxes through machine learning approaches offers a unique data-driven perspective to diagnose the carbon cycle. Different Earth Observation (EO) data sets contain specific information on structural and/or physiological vegetation conditions, or on the status of the land surface, e.g. in terms of moisture conditions. Every single EO product alone addresses only individual aspects of the complex system and can be confounded by other factors. The synergistic combination of complementary EO products therefore offers the greatest promise for improvements in our data-driven modelling capacities of land surface productivity. We use the new implementation of the statistical flux upscaling framework Fluxcom (Tramontana et al. 2016, Jung et al. 2020) and tailored cross-validation experiments to analyse the individual and synergistic contributions of different EO data sets to site-level prediction accuracy for terrestrial carbon fluxes. Each of the EO predictor variables receives dedicated and careful preprocessing in terms of quality checks and gap-filling. Meteorological observations from the sites can be included as additional predictor variables in the experiments. Next to their overall importance for prediction skill, we are interested in understanding the impacts of the EO data sets on different scales of carbon flux variability (e.g. diurnal, seasonal, seasonal anomalies, inter-annual, and between sites) and to what extent differences in acquisition properties play a role in the model estimates.
First results for the example of MODIS LST_cci indicate that it contributes strongly to the prediction accuracy of gross primary productivity (GPP) on all time scales, and particularly for inter-annual variations. The contribution of MODIS LST is even slightly larger than that of site-level air temperature, with the notable exception of GPP anomalies, for which prediction accuracy is low without meteorological information. In times of dry anomalies, the model profits strongly from LST as a surrogate for moisture availability. Conversely, in models without meteorological information, LST mostly acts as a proxy for light availability and improves GPP accuracy for wet anomalies as well. Regarding the impact of the acquisition properties of MODIS, we find that the variability in viewing geometry and overpass time does not affect predicted site-level GPP. However, thermal measurements such as those from MODIS can only inform us about land surface conditions in the absence of clouds, which generates a bias towards clear-sky conditions. Failing to account for this bias in the availability of MODIS LST results in 50% higher predicted GPP values and an increase in relative prediction error to more than 100% for overcast days.
We will also present results on the individual and synergistic added value of other EO data sets, such as land surface temperature from geostationary satellites (the LST_cci product from the SEVIRI instruments), SIF from the GOME-2 and TROPOMI instruments, and SMOS VOD and soil moisture, for data-driven estimates of carbon fluxes, overall and under water stress, which are the focus of ongoing work in the ESA Living Planet project ‘Vad3e mecum’ (‘Vegetation and drought: towards improved data-driven estimates of ecosystem carbon fluxes under moisture stress’).
The lessons learned from the site-level cross-validation experiments will guide the production of more accurate gridded estimates of gross and net carbon fluxes for Europe and the globe within Vad3e mecum. Those are of great relevance to increase our process understanding of terrestrial productivity and will contribute to improved characterisation of biogenic fluxes, e.g. in atmospheric inversions.
Terrestrial ecosystems have absorbed more than one-third of cumulative anthropogenic emissions during the past decades, mitigating global warming (Friedlingstein et al., 2020). Earlier studies suggest that an increase in photosynthesis in response to elevated atmospheric CO2 concentration (eCO2) has the potential to increase the strength of the terrestrial carbon sink (that is, CO2 fertilization), providing a negative feedback on the growth of atmospheric CO2 concentration (Schimel et al., 2015; Sitch et al., 2015). Such CO2 fertilization effects have been reported at forest inventory plots and in short-term CO2 spring experiments (Pretzsch et al., 2014; Hubau et al., 2020; Terrer et al., 2021). However, it remains unclear whether and to what extent rising CO2 concentration influences vegetation carbon stocks and their long-term changes at the global scale. Here, we used two newly developed satellite-based above-ground biomass (AGB) products, i.e., BIOMASCAT (https://eo4society.esa.int/projects/biomascat/) and Xu et al. (2021), together with gross primary productivity (GPP) measurements from 89 long-term FLUXNET sites, and isolated the CO2 fertilization effects on AGB and GPP from the effects of concurrent anthropogenic climate change, land-use change and nitrogen deposition over the period 2000-2019 using multiple linear regression. The observation-based independent AGB sensitivity and site-scale GPP sensitivity to eCO2 were then used to constrain the modelled global AGB sensitivity to eCO2 from 13 dynamic global vegetation models (DGVMs) using an emergent-constraints approach. Our constrained estimates from the two satellite AGB products and the FLUXNET GPP measurements show convergent results: the magnitude of the global CO2 effect on AGB is 5.5-6.6% on average (ranging from 1.7 to 9.5%). This value suggests a substantial CO2 fertilization effect on global AGB changes, but one that is around 20% lower than the modelled ensemble means of 7.7% from DGVMs and 8.1% from Earth System Model (ESM) ensembles. A direct implication is that CO2 fertilization alone could reduce atmospheric carbon by 1.9-2.26 PgC yr-1 per hundred ppm of eCO2 (ranging from 0.6 to 3.2 PgC yr-1 [100 ppm]-1), ignoring climate change effects and vegetation adaptation. Although it relies on a simple statistical approach, our analysis provides a robust estimate of the carbon dioxide removal by terrestrial vegetation that can partially offset increased carbon emissions from fossil fuels. These results emphasize the role of legacy Earth Observations in constraining global carbon cycle diagnostics, contributing to understanding and predicting potential climate mitigation from vegetation biophysical feedbacks.
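The emergent-constraint step can be summarised schematically: regress the models' global AGB sensitivity on an observable sensitivity that each model also simulates, then read the constrained value off the regression at the observed sensitivity. A sketch with placeholder arrays standing in for the 13 DGVMs (not the study's data):

```python
import numpy as np

model_obs_sens = np.random.rand(13)      # observable sensitivity in each model
model_global_sens = np.random.rand(13)   # global AGB sensitivity in each model

# Linear fit across the model ensemble (the "emergent relationship")
slope, intercept = np.polyfit(model_obs_sens, model_global_sens, 1)

obs_sens = 0.5                           # observation-based estimate (placeholder)
constrained_global_sens = slope * obs_sens + intercept
```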
The primary driver of climate change is the increase of CO2 in the atmosphere. The importance of biomass in climate and the carbon cycle arises because around 50% of biomass consists of carbon. Destruction of biomass through deforestation and forest degradation therefore leads to carbon emissions to the atmosphere, while uptake of CO2 by growing forests removes CO2 from the atmosphere; biomass thus plays a key role in both climate warming and its mitigation. This is why it is recognized as an Essential Climate Variable in the Global Climate Observing System. The most recent estimate of the global carbon budget (Friedlingstein et al., 2020) indicates that the average Land Use Change (LUC) CO2 flux to the atmosphere for 2010-2019 was 5.7 ± 2.6 GtCO2 y-1 (this is a net value including both loss of forests and forest regrowth, and should be compared with fossil fuel emissions of 34.4 ± 21.7 GtCO2 y-1), while the terrestrial CO2 sink (enhanced generation of biomass due to CO2 fertilisation and climate warming) for the same period is 12.5 ± 2.2 GtCO2 y-1. Note that (a) the latter value is model-based; (b) there is no clear demarcation between the forest growth terms contained in the LUC flux and the terrestrial sink; and (c) there are large uncertainties (and very large relative uncertainties) in the land terms. Accurate measurements of biomass and its changes are therefore fundamental to quantifying the carbon cycle and reducing these uncertainties; this is the primary objective of the BIOMASS mission. Current estimates of emissions due to deforestation rely on separate estimates of forest change (activity data) and biomass, with emissions typically calculated as the product deforested area × biomass × emission factor, where the emission factor describes the fraction of biomass carbon that is converted to CO2 emissions. The biomass term will usually be some average value based on available ground data, which may not properly represent the biomass of the cleared area. Instead, BIOMASS will be able to measure both the deforestation activity itself and the biomass where it occurred, thus removing potential biases in the emission estimates. Less clear is the extent to which BIOMASS will be able to measure emissions due to forest degradation (which depends on the sensitivity of the signal to biomass) and forest growth. Although the rate of increase of scattering from the forest canopy as biomass increases is expected to be greatest when forests are young, the high penetration at the P-band wavelength means that the signal is then likely to be strongly affected by soil scattering. Separating out the biomass signal may therefore be difficult, but the availability of full polarimetry on all measurements may help with this.
As well as direct estimates of biomass change, biomass as a static variable gives information on carbon dynamics when combined with productivity information. For a forest system in equilibrium, the residence time of a carbon atom in the system is given by B/NPP, where B is the mean biomass and NPP is the net primary production, i.e., the rate of production of biomass. Uncertainty in residence time dominates the uncertainty in terrestrial vegetation responses to future climate and atmospheric CO2 (Friend et al., 2014). Since NPP can be estimated from optical remote sensing measurements and models, estimates of biomass give access to residence time (though this requires consideration of both above- and below-ground biomass pools). This relation has been exploited in several studies, using both models and observations, but a key issue in observationally deduced residence times is the well-known saturation of current sensors at higher levels of biomass, which translates directly into errors in residence time. The BIOMASS mission is specifically designed to minimize such saturation. More generally, the use of biomass information and time series of biomass in model data assimilation schemes has been shown to provide information on a range of key parameters controlling vegetation systems (Yang et al., 2021; Smallman et al. 2017).
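The residence-time relation itself is a one-liner; a minimal illustration with placeholder numbers:

```python
# For a forest in equilibrium: residence time tau = B / NPP
biomass_tc_ha = 180.0    # total (above- plus below-ground) carbon stock, t C/ha
npp_tc_ha_yr = 6.0       # net primary production, t C/ha/yr
tau_years = biomass_tc_ha / npp_tc_ha_yr   # = 30 yr
```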
More generally, BIOMASS will provide a key element of a global system to measure forest structure and biomass. Other important missions in this overall capability are GEDI and NISAR, which are dedicated to forest observations (though NISAR also has other science goals), together with Sentinel-1, ICESat-2 and the JAXA series of L-band SAR missions. These missions are intended not just to meet the needs of the global climate and carbon cycle modelling community, but also those of nations reporting to the UNFCCC for the Global Stocktake under the Paris Agreement. Considerable extra value will be added to these missions by the collaborative NASA/ESA Multi-Mission Algorithm and Analysis Platform, which allows free access to, and joint analysis of, the data from all three missions. This will not only make it possible to produce optimized estimates of biomass by combining the strengths of the three missions, but will also greatly improve the usability of the data by providing embedded processing capabilities that remove the need for countries to have very powerful in-house computing facilities.
References
Friedlingstein P, O’Sullivan M, Jones MW, et al. (2020). Global Carbon Budget 2020, Earth Syst. Sci. Data, 12, 3269–3340, 2020 https://doi.org/10.5194/essd-12-3269-2020
Friend AD, Lucht W, Rademacher TT, et al. (2014). Carbon residence time dominates uncertainty in terrestrial vegetation responses to future climate and atmospheric CO2, PNAS, doi:10.1073/pnas.1222477110
Smallman TL, Exbrayat J-F, Mencuccini M, et al. (2017). Assimilation of repeated woody biomass observations constrains decadal ecosystem carbon cycle uncertainty in aggrading forests, J. Geophys. Res.: Biogeosciences, doi:10.1002/2016JG003520
Yang H, Ciais P, Wang Y, et al. (2021). Variations of carbon allocation and turnover time across tropical forests, Global Ecology and Biogeography, doi:10.1111/geb.13302
The Biomass space segment for the ESA Biomass mission is under development by an industrial consortium led by Airbus Defence and Space Ltd. The project is in its final development phase: it passed its Critical Design Review in summer 2021, which confirmed the adequacy of the system design and released the satellite for the assembly and verification phase.
The ground segment is based on extensive heritage from previous ESA Earth Explorer missions and its development is progressing nominally. A Vega rocket has been procured to launch the satellite; the launch is currently planned for the end of 2023.
The presentation will provide an overview of the elements of the Biomass system described above and will give an up-to-date status of its development. The scientific aspects, including Level 2 data processing, are dealt with elsewhere in this session.
Scheduled for launch in 2023, ESA’s seventh Earth Explorer Mission, BIOMASS, will carry the first P-band synthetic aperture radar (SAR) to be flown in space, to gather fully polarimetric acquisitions over forested areas worldwide in interferometric and tomographic modes. The system has been designed to produce consistent global maps of the Earth’s forests during a nominal five-year lifetime.
The primary objective of BIOMASS is to determine the worldwide distribution of forest above-ground biomass (AGB) and its change with time [1],[2],[3]. To fulfil this objective, BIOMASS will carry a fully polarimetric P-band SAR operating at a center frequency of 435 MHz with a bandwidth of 6 MHz. This is the lowest possible frequency for a satellite SAR that fulfills ITU regulations and avoids the deleterious ionospheric disturbances encountered at lower frequencies. The main reasons for choosing the lowest possible operating frequency are: 1) to increase penetration in all forest biomes; 2) to enhance the interaction with the larger woody vegetation elements, which improves the sensitivity to AGB; and 3) to increase temporal coherence and enable repeat-pass interferometry and tomography.
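The quoted figures fix the basic radar geometry; a quick illustrative check of the implied wavelength and slant-range resolution:

```python
C = 3.0e8                           # speed of light, m/s
wavelength = C / 435e6              # ~0.69 m at the 435 MHz center frequency
slant_range_res = C / (2 * 6e6)     # ~25 m for the 6 MHz bandwidth
print(f"{wavelength:.2f} m wavelength, "
      f"{slant_range_res:.0f} m slant-range resolution")
```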
Interferometry and tomography are considered essential for the mission since they add a vertical dimension to the SAR measurements and enable 3D forest measurements. Most importantly, they also enable suppressing the radar backscattering originating from the ground level, which is known to reduce sensitivity and cause errors in AGB estimates [4], [5].
The estimation techniques must also consider the temporal sampling pattern. The BIOMASS orbit has two main phases: a tomographic phase, which follows directly after the commissioning phase, and an interferometric phase covering the remainder of the five-year mission. In both phases, data will be collected in a near-repeat orbit with a three-day cycle to maximize temporal coherence. Seven near-repeats make up a tomographic stack, which takes 18 days to complete, whereas three near-repeat orbits create an interferometric stack in 6 days. Coverage is then built up successively, with successive tomographic or interferometric stacks covering adjacent areas. Complete coverage is obtained in approximately 14-16 months in the tomographic phase and 7-9 months in the interferometric phase. Significant environmental changes will therefore occur while coverage is built up, and the estimation techniques must be designed to handle them.
This paper presents methods and algorithms developed to estimate biophysical parameters from BIOMASS measurements and their implementation in the BIOMASS Level 2 (L2) prototype processor. The L2 processor will generate global maps of forest above-ground biomass (AGB), forest height (FH) and forest disturbance (FD). Accurate generation of these products requires the L2 processor to be closely inter-linked with the BIOMASS interferometric processor [6], in order to produce phase-calibrated interferometric stacks, retrieve sub-canopy terrain topography, and generate a 3D representation of forest structure by means of SAR tomography.
AGB estimation results will be shown using BIOMASS-like acquisitions derived from campaign data acquired over six tropical forests in South America and Equatorial Africa. The algorithm is capable of achieving a relative RMSD of 20% with respect to in situ data using only two “good” calibration points where reference AGB is available, although retrieval accuracy is observed to depend significantly on the quality of the available calibration points. For this reason, the recommendation is made that the global AGB estimation scheme for BIOMASS rely on calibration and validation with AGB estimates from in situ inventories, which are assumed to be less prone to systematic errors. The AGB estimation performance also depends on the AGB range and degrades when ground topography is significant. Good performance is achieved when the AGB interval is large (> 400 t/ha) and the average is in the interval 200-250 t/ha [5].
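As a hedged illustration of what a two-point calibrated retrieval can look like (a strong simplification of the actual L2 scheme), one may fit a power law between reference AGB and ground-cancelled backscatter and invert it over the image; all numbers below are placeholders:

```python
import numpy as np

cal_agb = np.array([150.0, 350.0])       # reference AGB at calibration points, t/ha
cal_sigma0_db = np.array([-9.5, -7.0])   # ground-cancelled backscatter, dB

# Fit sigma0_dB = a + b * log10(AGB) through the two calibration points
b, a = np.polyfit(np.log10(cal_agb), cal_sigma0_db, 1)

def invert_agb(sigma0_db):
    """Invert the fitted power law to map backscatter (dB) to AGB (t/ha)."""
    return 10.0 ** ((sigma0_db - a) / b)

print(invert_agb(np.array([-9.0, -8.0])))
```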
[1] ESA. “BIOMASS—Report for Mission Selection—An Earth Explorer to Observe Forest Biomass”; SP-1324/1; European Space Agency: Noordwijk, The Netherlands, 2012.
[2] T. Le Toan, S. Quegan, M.W.J. Davidson, H. Balzter, P. Paillou, K. Papathanassiou, S. Plummer, F. Rocca, S. Saatchi, H. Shugart, L. Ulander, “The BIOMASS mission: Mapping global forest biomass to better understand the terrestrial carbon cycle,” Remote Sensing of Environment, vol. 115, pp. 2850-2860, Jun. 2011
[3] Shaun Quegan, Thuy Le Toan, Jerome Chave, Jorgen Dall, Jean-François Exbrayat, Dinh Ho Tong Minh, Mark Lomas, Mauro Mariotti D'Alessandro, Philippe Paillou, Kostas Papathanassiou, Fabio Rocca, Sassan Saatchi, Klaus Scipal, Hank Shugart, T. Luke Smallman, Maciej J. Soja, Stefano Tebaldini, Lars Ulander, Ludovic Villard, Mathew Williams “The European Space Agency BIOMASS mission: Measuring forest above-ground biomass from space” Remote Sensing of Environment, Volume 227, 2019, Pages 44-60, ISSN 0034-4257,
[4] Soja, M., Quegan, S., Mariotti d’Alessandro, M., Banda, F., Scipal, K., Tebaldini, S., Ulander, L.M.H., “Mapping above-ground biomass in tropical forests with ground-cancelled P-band SAR and limited reference data”, Remote Sens. Environ., Volume 253, February 2021, 112153
[5] M. Mariotti d’Alessandro, S. Tebaldini, S. Quegan, M. J. Soja, L. M. H. Ulander and K. Scipal, "Interferometric Ground Cancellation for Above Ground Biomass Estimation," in IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 9, pp. 6410-6419, Sept. 2020
[6] M. Pinheiro et al., “BIOMASS DEM Product Prototype Processor”, EUSAR 2021
Several potential secondary mission objectives arise from the opportunity to explore Earth for the first time with a P-band SAR system. The Biomass Secondary Objectives Assessment Study (Paillou et al., 2011) identified a variety of secondary applications and assessed whether their requirements could be accommodated within the mission specifications. In particular, three objectives are expected to benefit significantly from the long P-band wavelength, while at the same time being feasible and compatible with the Biomass mission design. The presentation will detail each science objective and provide current insights on these applications for Biomass.
1. Mapping subsurface geology
Access to freshwater resources is already a major concern: in Saharan and sub-Saharan Africa, most people do not have access to safe water supplies, and the situation is expected to worsen in the future. Geological maps are crucial for mineral and groundwater exploration, and remote sensing is an important tool in establishing such maps. However, in arid regions such as North Africa, the geology is mostly hidden under a thin layer of dry, sandy sediments. Low-frequency SAR is able to penetrate dry sediments and map the subsurface down to several metres, because of low absorption and limited volume scattering. For example, L-band SAR has proven capable of penetrating a few metres of dry, homogeneous material such as sand (McCauley et al., 1982). If the sand cover is smooth, dry and thin, the subsurface of interest will not be masked, and the measured backscatter will provide an image of the subsurface roughness and slope. This can then be turned into information that is useful for exploration and geophysical prospecting (Paillou, 2017). Aircraft campaigns have illustrated the capacity of P-band SAR to penetrate at least 4 m of dry sediment (Paillou et al., 2011). The enhanced penetration of a P-band SAR, which is less sensitive to the covering sediments, will be important in groundwater exploration and will also offer a unique opportunity to reveal the hidden and still unknown past hydrological history of deserts.
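A common rule of thumb for this behaviour is the low-loss penetration-depth approximation δ ≈ λ·√ε′ / (2π·ε″); the sketch below uses rough, literature-style permittivity values for dry sand, purely for illustration:

```python
import numpy as np

def penetration_depth_m(wavelength_m, eps_real, eps_imag):
    """Low-loss approximation of radar penetration depth in a dry medium."""
    return wavelength_m * np.sqrt(eps_real) / (2.0 * np.pi * eps_imag)

# Illustrative dry-sand permittivity: eps' ~ 3, eps'' ~ 0.02
print(f"P-band (~0.69 m): {penetration_depth_m(0.69, 3.0, 0.02):.1f} m")
print(f"L-band (~0.24 m): {penetration_depth_m(0.24, 3.0, 0.02):.1f} m")
```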
2. Ice sheet applications
Large changes of the Greenland and Antarctic ice sheets have been observed over recent decades, and SAR data have shown a significant acceleration of glacier velocities both in Greenland and in Antarctica. One way of estimating the mass balance of ice sheets is by mapping the ice velocity at a flux gate with known ice thickness. Accurate ice velocity maps are also needed when modelling the response of ice sheets to climate change. In Greenland, ice sheet velocity maps are generated on an operational basis (Solgaard et al. 2021), and the velocity fields of the Antarctic ice sheets have also been mapped (Rignot et al. 2011). The measurement accuracy, however, is 1 m/yr to 17 m/yr with the currently available data, while the histogram for the whole of Antarctica peaks at 5 m/yr (Rignot et al. 2011). To achieve a high velocity sensitivity, SAR data must be acquired with a long temporal baseline, and the correlation time increases with decreasing frequency, as seen when comparing L- and C-band results (Rignot and Mouginot, 2012). P-band offers an even longer correlation time, as deep penetration makes the radar signal interact with stable subsurface scatterers in the dry-snow zone. A temporal baseline defined by BIOMASS’ 8-month global mapping cycle is not unrealistic (Dall et al. 2013). Ice applications depend on sufficient compensation for ionospheric scintillations, which are particularly severe at high latitudes. Without any compensation, the ionosphere is the primary error contributor at L-band, and at P-band the ionosphere may be prohibitive, because the impact of ionospheric scintillations increases with decreasing frequency.
3. Terrain Topography under Dense Vegetation
Digital Terrain Models (DTMs) represent the elevation of the ground in the absence of vegetation, buildings and so on. These ‘bare-earth’ models are crucial in a range of applications, including ecology, forest management, water resource management, mineral exploitation, national security and scientific research. However, currently available large-scale products are more accurately described as Digital Elevation Models (DEMs), because in forested areas they differ significantly from a true DTM. At P-band, vegetation causes less attenuation, so Biomass can fill this major gap in our knowledge of global topography. In addition, the scattering centre of the tree-ground double-bounce signal occurs at ground level and can be isolated using polarimetry.
Over its lifetime, Biomass will produce a DTM of the terrain under dense vegetation, thus removing the biases present in DEMs based on shorter wavelengths, such as the Copernicus DEM. Biomass will also be able to exploit this new DTM for the slope corrections associated with its primary objectives, allowing initial products generated with current DEMs to be reprocessed and the biomass products thereby refined.
References:
McCauley J. F., G. G. Schaber, C. S. Breed, M. J. Grolier, C. V. Haynes, B. Issawi, C. Elachi, R. Blom, “Subsurface valleys and geoarchaeology of the eastern Sahara revealed by Shuttle Radar,” Science, vol. 218, pp. 1004-1020, 1982.
Paillou Ph., J. Dall, P. Dubois-Fernandez, I. Hajnsek, R. Lucas, K. Scipal, BIOMASS Secondary Objectives Assessment Study, ESA ITT AO 1-6543/10/NL/CT, 210 p., 2011.
Paillou Ph., O. Ruault du Plessis, C. Coulombeix, P. Dubois-Fernandez, S. Bacha, N. Sayah, A. Ezzine, “The TUNISAR experiment: Flying an airborne P-Band SAR over southern Tunisia to map subsurface geology and soil salinity,” PIERS 2011, Marrakesh, Morocco, March 2011.
Paillou Ph., “Mapping palaeohydrography in deserts: Contribution from space-borne imaging radar,” Water, vol. 9, no. 194, doi:10.3390/w9030194, 2017.
A. Solgaard, A. Kusk, J.P. Merryman Boncori, J. Dall, K.D. Mankoff, A.P. Ahlstrøm, S.B. Andersen, M. Citterio, N.B. Karlsson, K.K. Kjeldsen, N.J. Korsgaard, S.H. Larsen, R.S. Fausto, “Greenland ice velocity maps from the PROMICE project”, Earth System Science Data, Vol. 13, No. 7, pp. 3491-3512, November 2021.
E. Rignot, J. Mouginot, B. Scheuchl, “Ice Flow of the Antarctic Ice Sheet”, Science, Vol. 333, No. 9, pp. 1427-1430, September 2011.
E. Rignot, J. Mouginot, “Ice flow in Greenland for the International Polar Year 2008-2009”, Geophysical Research Letters, Vol. 39, L11501, pp. 1-7, June 2012.
J. Dall, U. Nielsen, A. Kusk, R.S.W. van de Wal, “Ice flow mapping with P-band SAR”, Proceedings of the IEEE 2013 International Geoscience and Remote Sensing Symposium, 4 p., Melbourne, July 2013.
BIOMASS is ESA's seventh Earth Explorer Mission. It features, for the first time ever, a spaceborne quad-polarimetric SAR at P-band. BIOMASS is a polar-orbiting satellite aiming primarily at deriving forest biophysical variables essential to the understanding of the carbon cycle.
The mission comprises a space segment (the satellite) and a ground segment responsible for the planning, commanding, acquisition, processing, calibration and archiving of the BIOMASS data. The BIOMASS Payload Data Ground Segment (PDGS) implements the full processing chain from Level-0 to the forest products that will be used by scientists.
In order to further support the science behind the upcoming BIOMASS satellite mission, the BIOMASS ground segment will be complemented by a cloud-computing platform called the Multi-Mission Algorithm and Analysis Platform (MAAP), currently under development. The MAAP is jointly developed with NASA so as to strengthen scientific cooperation. It will provide high-performance computing capabilities and algorithmic resources close to the BIOMASS data, as well as to data from other satellites, airborne campaigns and in situ measurements.
To best ensure that users are able to collaborate across the platform and to access needed resources, the MAAP requires all data, algorithms, and software to conform to open access and open-source policies. As one such example of best collaborative and open-source practices, the BIOMASS data processing algorithms are developed on MAAP under the umbrella of an open-source scientific software project called BioPAL. In addition to aiding researchers, the MAAP will focus on sharing data, science algorithms and compute resources in order to foster and accelerate scientific research.
Index Terms— Biosphere, Data dissemination, Cloud computing, Open science, Open data, Open Source
1. INTRODUCTION
With the launch of new satellite missions and growing understanding of the complexity of ecological processes, the scientific community is faced with a unique and immediate need for improved data sharing and collaboration. This is especially evident in the Earth sciences and the carbon monitoring community. While the new Earth Observation missions and the corresponding research leading up to launch, which include airborne, field, and calibration/validation data collection and analyses, provide a wealth of data and information relating to global biomass estimation, they also present data storage, processing and sharing challenges. Due to the constraints of existing organizational infrastructures, these large data volumes will place accessibility limits on the scientific community and may ultimately impede scientific progress.
2. THE BIOMASS MISSION
Selected as European Space Agency’s seventh Earth Explorer in May 2013, the BIOMASS mission will provide crucial information about the state of our forests and how they are changing [1]. This mission is being designed to provide, for the first time from space, P-band Synthetic Aperture Radar measurements to determine the amount of biomass and carbon stored in forests [2]. The data will be used to further our knowledge of the role forests play in the carbon cycle.
3. BIOMASS GROUND SEGMENT ARCHITECTURE
The BIOMASS PDGS (Payload Data Ground Segment) is responsible for the planning, commanding, acquisition, processing, dissemination and archiving of the BIOMASS data.
The talk will give an overview of the PDGS architecture and will focus on the data processing aspects and the related technical budget.
In particular, it will present the BIOMASS product family and further describe the processing model from Level-0 to the user Level-3 products.
4. CONCEPT OF MISSION ALGORITHM AND ANALYSIS PLATFORM (MAAP)
In the context of innovative satellite missions and an evolving ground segment, the concept of a Mission Algorithm and Analysis Platform dedicated to the BIOMASS mission is proposed [3]. This Mission Algorithm and Analysis Platform will be a virtual, open and collaborative environment. The goal is to bring together a data centre (Earth Observation and non-Earth Observation data), computing resources and hosted processing, collaborative tools (processing tools, data mining tools, user tools, …), concurrent design and test bench functions, application shops and marketplace functionalities, accounting tools to manage resource utilisation, communication tools (social network) and documentation.
This platform will give the opportunity, for the first time, to build a community of users of this new Earth Observation mission around this innovative concept.
5. BIOPAL: THE OPEN-SOURCE BIOMASS PROCESSOR
To best ensure that users are able to collaborate across the platform and to access needed resources, the MAAP requires all data, algorithms, and software to conform to open access and open-source policies. As an example of best collaborative and open-source practices, the BIOMASS Processing Suite (BPS) will be made openly available within the MAAP. This Processing Suite contains all elements to generate the BIOMASS upper-level data products and is currently in development under the umbrella of the open-source project called BioPAL [4]. BioPAL is developed in a coherent manner, putting a modular architecture and reproducible software design in place. BioPAL aims to factorize the development and testing of common elements across different BIOMASS processors. The architecture of this scientific software makes lower-level bricks and functionalities available through a well-documented Application Programming Interface (API) to foster the reuse and continuous development of processing algorithms from the BIOMASS user community.
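To make the idea of reusable lower-level "bricks" behind a documented API more concrete, the following Python sketch shows how such a modular chain might be composed. It is purely illustrative: the function names, dummy data and the placeholder biomass model are our own assumptions and do not reproduce the actual BioPAL API.

```python
# Illustrative sketch (not the real BioPAL API): a modular processor exposing
# lower-level "bricks" so that different chains (AGB, forest height, ...)
# can reuse them through a common interface.
import numpy as np

def load_stack(path: str) -> np.ndarray:
    """Stub I/O brick: would read a calibrated L1 stack from disk."""
    return np.random.rand(4, 256, 256)  # pols x rows x cols (dummy data)

def multilook(stack: np.ndarray, looks: int = 4) -> np.ndarray:
    """Reusable brick: boxcar multilooking along the last axis."""
    trimmed = stack[..., : stack.shape[-1] // looks * looks]
    return trimmed.reshape(*stack.shape[:-1], -1, looks).mean(axis=-1)

def invert_agb(stack: np.ndarray) -> np.ndarray:
    """Chain-specific brick: placeholder power-law backscatter-to-AGB model."""
    hv = stack[2]                    # cross-pol channel
    return 200.0 * hv ** 0.8         # dummy coefficients, t/ha

agb = invert_agb(multilook(load_stack("stack_dir")))  # hypothetical path
print(agb.shape)
```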
6. OBJECTIVES OF THE MAAP PROJECT
The goal for the MAAP is to establish a collaboration framework between ESA and NASA to share data, science algorithms and compute resources in order to foster and accelerate scientific research conducted by NASA and ESA EO data users.
The objectives of the MAAP for the BIOMASS mission are to:
1) Enable researchers to easily discover, process, visualize and analyze large volumes of data from both agencies;
2) Provide a wide variety of data in the same coordinate reference frame to enable comparison, analysis, data evaluation, and data generation;
3) Provide a version-controlled science algorithm development environment that supports tools, co-located data and processing resources;
4) Address intellectual property and sharing challenges related to collaborative algorithm development and sharing of data and algorithms.
7. REFERENCES
[1] T. Le Toan, S. Quegan, M. Davidson, H. Balzter, P. Paillou, K. Papathanassiou, S. Plummer, F. Rocca, S. Saatchi, H. Shugart and L. Ulander, “The BIOMASS Mission: Mapping global forest biomass to better understand the terrestrial carbon cycle”, Remote Sensing of Environment, Vol. 115, No. 11, pp. 2850-2860, June 2011.
[2] T. Le Toan, A. Beaudoin, et al., “Relating forest biomass to SAR data”, IEEE Transactions on Geoscience and Remote Sensing, Vol. 30, No. 2, pp. 403-411, March 1992.
[3] C. Albinet, A.S. Whitehurst, L.A. Jewell, et al., “A Joint ESA-NASA Multi-mission Algorithm and Analysis Platform (MAAP) for Biomass, NISAR, and GEDI”, Surveys in Geophysics, Vol. 40, pp. 1017-1027, 2019. https://doi.org/10.1007/s10712-019-09541-z
[4] BioPAL project site: http://www.biopal.org
Biomass calibration concept towards mission operations
Philip Willemsen(a), Antonio Leanza(b), Adriano Carbone(c), Ernesto Imbembo(a), Björn Rommen(a), Michael Fehringer(a), Maktar Malik(a), Tristan Simon(a), Klaus Scipal(d)
a ESA-ESTEC, Noordwijk, The Netherlands
b SERCO B.V. for ESA-ESTEC, Noordwijk, The Netherlands
c Rhea System B.V. for ESA-ESTEC, Noordwijk, The Netherlands
d ESA-ESRIN, Frascati, Italy
The Biomass mission is an Earth Explorer Mission in the ESA Earth Observation Programme.
The primary objective of Biomass is to determine the worldwide distribution of forest above-ground biomass in order to reduce the major uncertainties in calculations of carbon stocks and fluxes associated with the terrestrial biosphere.
The Biomass satellite industrial prime contractor is Airbus Defence and Space Ltd. The radar instrument is built by Airbus Defence and Space, Friedrichshafen.
The Biomass satellite carries a fully-polarimetric (HH, VV, HV, VH) P-band SAR which operates in strip-map imaging mode with a carrier frequency of 435 MHz.
The Biomass satellite employs a unique design: the coverage and performance requirements at P-band imply the use of a large aperture antenna. This is realized by an offset-fed reflector antenna system consisting of the instrument feed array and a deployable reflector with a 12 m projected aperture.
The Biomass mission relies on a calibration transponder which is designed and developed specifically for Biomass mission needs. The Biomass Calibration Transponder (BCT) is a fully polarimetric active transponder working in P-band. It is used primarily during the commissioning phase for three applications:
1) the Biomass satellite end-to-end antenna pattern characterisation;
2) radiometric and polarimetric calibration; and
3) performance verification.
The BCT is provided by C-Core, Canada. It will be located at ESA’s New Norcia antenna site in Australia.
The antenna pattern knowledge is a necessary input for the on-ground data processing.
Due to the large size of the satellite in reflector-deployed configuration, it is not possible to characterise the end-to-end antenna pattern on-ground with sufficient accuracy with the available test facilities.
Only in-flight antenna pattern measurements can provide the required information with the sufficient accuracy.
To measure the antenna pattern in flight, the satellite is placed in a dedicated commissioning orbit with a defined orbit drift and a repeat cycle of 3 days. The orbit drift allows an antenna pattern characterisation at different azimuth cuts. In total, two months are dedicated to the antenna pattern characterisation, which allows sufficient transponder overpasses providing doublet pattern measurements at different elevations.
Once the antenna pattern has been measured, the commissioning phase will continue with the radiometric, geometric and polarimetric calibration and performance verification activities.
As for the antenna characterisation, the calibration and verification activities will rely on the BCT. In addition, the potential of natural targets to support the cal/val activities is currently investigated.
The following system key performance requirements are driving the in-flight performance verification activities:
• Channel Imbalance and Cross Talk: for the accurate estimation of the channel imbalance and cross-talk, all four polarimetric scattering responses are required from a location with the same Faraday rotation.
• Radiometric Bias: the largest contribution to the radiometric bias budget is the external calibration bias of the reference target.
• Radiometric stability: the radiometric stability drives the number of observations of reference targets, and the stability of those targets.
• Residual Phase over pulse travel time: the residual phase stability requirement drives the short term phase stability requirements of the reference target.
Airbus Defence and Space GmbH and Hisdesat Servicios Estrategicos S.A. partnered to create the WorldSAR Constellation, offering premium X-band radar imaging with the additional benefits of a true constellation of satellites with identical performance. The Constellation is composed of the PAZ mission from Hisdesat and the TerraSAR mission (comprising the TerraSAR-X and TanDEM-X satellites) from Airbus.
The WorldSAR Constellation, operational since the launch of PAZ in 2018, has been made possible by commercial corporate investments together with public anchor customers and public-private partnerships in Germany and Spain.
As the missions are fully compatible, the Constellation delivers SAR data and InSAR layer stacks with a doubled revisit rate, even across missions, and improved operational reliability.
Since the launch, several new features have been added to the offer to meet the growing demands of the end-user community, including Copernicus, such as new imaging modes for premium imagery performance and improved timeliness. Regarding timeliness, a network of partner antennas has been set up, in collaboration with service providers around the globe, to improve access to the Constellation. This network continues to grow and today enables near-real-time access within Europe and over a range of selected areas around the globe.
As a unique feature, the TanDEM mission has provided the CopDEM used by Copernicus today. In addition, new features are being made available for the CCM to give an impression of a CopDEM evolution: access to a new global 5 m WorldDEM Neo (updated based on new data takes) and Elevation 1 based on Pléiades Neo.
Growth plans for the Constellation future will also be presented.
Planet is a Copernicus Contributing Mission (CCM) with its three optical constellations: SkySat (VHR1), PlanetScope (VHR2) and with the archive of the retired RapidEye constellation (HR1).
The mission of Planet is to image the entire Earth every day, and make global change visible, accessible, and actionable. This mission is realized thanks to the operation of the world’s largest fleet of Earth-imaging satellites, with approximately 200 satellites in operation and over 500 designed and built to date.
Planet was founded in 2010, with an agile approach to designing, building, and operating the satellites, as well as the new ways in which the datasets are offered to the end users.
The CCM on-demand portfolio offers Planet's key products, which complement Copernicus users' needs especially in the Emergency, Security and Land services.
PlanetScope is the largest constellation imaging Earth every day, capturing over 3 million images each day at 3-4 m Ground Sampling Distance (GSD). For the newest generation of PlanetScope satellites, 6 of the 8 spectral bands are the same as those offered by Sentinel-2, which makes these datasets perfectly complementary for use cases where Sentinel-2 spatial and/or temporal resolution is not sufficient alone.
Because PlanetScope imagery has been available almost daily for the entire landmass of the Earth since 2017, it is a perfect dataset to monitor events continuously over longer periods, and a dataset that will always be available before events take place. PlanetScope makes it possible to analyse long-term and rapid changes in all phases of emergency and security events, including:
- For risk reduction, PlanetScope data supports mitigation of disaster risk exposure with recent and accurate modelling, informed preventative measures, and efficient deployment of resources.
- For response coordination, it helps to increase situational awareness.
- In recovery planning, the data enables informed decisions about quick estimation of the extent of damage and overseeing short-term and long-term recovery efforts, including construction, revegetation, and repair of critical infrastructure and systems.
Last but not least, the high frequency of PlanetScope data can be used by Copernicus Land Monitoring Services for detailed land use and land change detection, even on a near daily basis.
Planet's CCM portfolio also offers VHR1 SkySat constellation capabilities. These satellites operate from inclined and sun-synchronous orbits and are able to provide image products with a 50cm pixel size. The constellation is capable of rapid revisit and can capture 6-7 images of a particular location on Earth per day, serving equator crossing times in the morning and in the afternoon. The afternoon passes offer additional acquisition windows to Copernicus users, which is especially beneficial in security and emergency use cases, where observing the evolution of events is of high importance. Currently, Copernicus users can select the date and time window of the acquisition.
Combining both types of Planet data offers the option to implement Tip & Cue systems, leveraging PlanetScope's wide-area monitoring capabilities and complementing them with SkySat data, which provides a more detailed view over selected areas.
As a relatively new and innovative CCM product, Planet also offers to all Copernicus users the Planet Data View Service, which enables streaming and previewing of all available Planet datasets, to discover and visually assess needed Areas of Interest (AOIs) on a global scale.
Planet follows an agile aerospace approach, which means that the PlanetScope and SkySat constellations and datasets are constantly being improved and created to complement the existing offer. It also means that the Copernicus users always have access to the newest data products.
Pelican, a next-generation fleet of satellites for VHR imagery, will begin launching in 2022 and be operational in 2023. When fully operational, the Pelican constellation will replenish and upgrade Planet's existing high-resolution SkySat fleet with better spatial resolution, more frequent image revisit times, and reduced reaction time and latency.
For the expected evolution of the Copernicus Contributing Missions, Planet offers not only new data types (inter alia SkySat Video, SkySat Stereo images, Basemaps, and the Planet Fusion Product) but also new data delivery models, such as data subscriptions to specific AOIs (“Area Under Management”) and data bundles that take advantage of economies of scale in data and of New Space capabilities.
Airbus is facing one of its most important periods in a decade, shaping its future with a constellation of 6 VHR satellites: 2 Pléiades at 50cm and 2 Pléiades Neo at 30cm resolution (and 2 more to come). We are preparing this new era by upgrading all our systems with a substantial human and technological investment. From the tasking of our satellites to the delivery of our products, we are improving all the steps deeply linked to the information system and our digital cloud-based platform OneAtlas.
Following the successful launch of the first two Pléiades Neo satellites in 2021, Europe now has an autonomous and sovereign very high resolution optical capability at 30cm resolution.
Beyond the fact that it is 100% European, what makes Pléiades Neo so unique? It consists of 4 EO satellites providing 30cm optical imagery, entirely funded and operated by Airbus Defence and Space. After more than 30 years of experience in satellite imagery services, it seemed like the logical way forward. In addition, Pléiades Neo is the result of a whole new approach to image quality and satellite capability. It has required rethinking the way we design satellites and exploit their services to answer a growing demand for increasingly large areas, complex requirements, and last-minute adaptations of tasking according to weather conditions and viewing angles, all whilst ensuring best-in-class resolution with impeccable image quality.
Highest precision with massive acquisition
Firstly, Pléiades Neo provides 30cm native resolution, meaning that the image shot by the satellite is the actual image you receive in terms of resolution. The imagery therefore provides an incredible amount of detail that does not appear in lower-resolution imagery: you can clearly see road markings, traces in the sand, cables on construction sites, details of what is being loaded on docks, even gatherings of people, and can distinguish animals and people thanks to their shadows. The geolocation accuracy, which measures the exact position of an object in an image, is 5m CE90.
And if that precision were not enough, in terms of acquisition capacity the constellation is able to acquire up to 2 million square kilometres every single day: two million square kilometres at 30cm resolution, fully dedicated to customers, every day.
Introducing intraday revisit
It is also the first time Airbus provides an intra-day revisit capability within the same constellation. Depending on the incidence angle of the satellite and the latitude of the Area Of Interest (AOI), Pléiades Neo can provide between 2 and 4 revisits per day. In particular, tests conducted over several areas have shown a minimum of 2 and a maximum of 3 revisits per day, providing a total of 64 revisits over 28 days, and that is just with two of the four satellites fully operational today.
Ultimate reactivity tasking and image delivery
Work plans are updated every time a satellite enters into S-band contact, i.e. once per orbit (an orbit lasts about 100 minutes, or 1h40) or 15 times per day per satellite; at constellation level this amounts to a contact roughly every 25 minutes and around 60 plans uploaded every day.
Work plans are also pooled. This means that when an image is to be collected by one satellite, the related acquisition request is removed from the tasking plans of the other satellites.
These multiple and synchronised work plans per day enable easy handling of last-minute tasking requests (which can be placed up to 15 minutes before S-band contact) as well as integration of the latest weather information, for an improved data collection success rate.
In addition, Airbus Defence and Space's network of ground receiving stations, which enables all-orbit contact and thus ensures near-real-time performance worldwide and rapid data access, guarantees the highest standards in terms of reactivity of our service.
Images are downlinked at each orbit, automatically processed and quickly delivered to the customer, allowing faster response when facing emergency situations.
New spectral bands
In terms of spectral bands, Pléiades Neo acquires simultaneously a panchromatic channel and 6 multispectral bands, which are:
- Deep Blue
- Blue
- Green
- Red
- Red-Edge
- Near Infrared
Red-Edge and Deep Blue are two additional bands compared to the predecessor Pléiades, and they unveil complementary information for vegetation and bathymetry applications respectively. In urban environments, Deep Blue provides details of what is reflected in the shadows of skyscrapers, enabling far richer applications, e.g. in work on smart cities and the monitoring of construction sites. Red-Edge can further enhance our understanding of vegetation, bring the processing of biophysical parameters to a completely new level and support the most efficient and respectful use of our precious natural resources on the planet.
Finally, the tasking of a VHR satellite orbiting 600 km above the Earth has never been easier. OneAtlas, our digital platform, allows users to draw their AOI, choose Pléiades Neo as the optical sensor and choose the date of acquisition, all while accessing the whole Airbus imagery archive.
By providing more data, more detailed, more rapidly and in a more accessible way, Pléiades Neo becomes the best support for Copernicus Contributing Missions.
GEOSAT-2, formerly known as DEIMOS-2, provides 75-cm resolution products with best-in-class accuracy and top-quality VHR imagery. The multispectral capability includes 4 channels in the visible and near-infrared spectral range (red, green, blue and NIR) with a radiometric resolution of 10 bits, offering mono, tessellation and stereo-pair imaging modes.
GEOSAT-2 has been providing support to both CORE and ADDITIONAL requests for the Copernicus programme as a Contributing Mission since its launch in 2014, with the aim of providing maximum flexibility for the successful fulfilment of the Copernicus services' objectives. All services proposed by GEOSAT have proved flexible to ESA requirement updates over the years, including the PAT scenario for enhanced reactiveness and data availability in emergency situations.
Being one of the two European VHR contributing missions, GEOSAT-2 has contributed to the latest 3 cloud-free VHR optical coverages of the 39 European States (EEA-39), acquired within predefined windows corresponding to the vegetation season, i.e. VHR_IMAGE_2015, VHR_IMAGE_2018 and VHR_IMAGE_2021. Additionally, several GEOSAT-2 datasets have been provided following the ESA operational requirements for standard and emergency VHR imagery, including the provision of metadata compliant with the INSPIRE directive. Complemented by a 24/7 Customer Service Desk, together with the data privacy, issue and security management associated with data provision, GEOSAT-2 will continue to offer great value to Copernicus services during the forthcoming extension phases of the CSCDA contract.
The extensive portfolio offered by GEOSAT benefits worldwide customers and partners by providing reliable solutions that significantly accelerate decision-making in a great variety of fields, from Land to Marine, with a special focus on Emergency services for quasi-real-time data provision on a worldwide scale. The combination of an enhanced super-resolution product (up to 40cm) with reliable and swift value-added products delivered in less than 30 minutes provides a unique set of services for Copernicus needs.
European Space Imaging (EUSI) has been a Copernicus Contributing Mission Entity since GMES Data Warehouse Phase I in 2011. The company operates its own multi-mission ground station at the German Aerospace Centre (DLR) near Munich (Germany), which enables EUSI to directly task the Maxar WorldView constellation of currently 4 optical very high resolution (VHR) satellite sensors, with a spatial resolution of 0.5 to 0.3m at nadir, as the satellites pass over Europe. The direct tasking capabilities allow for last-minute order entry with short lead times, as well as consideration of the real-time weather situation shortly before the satellite pass to maximise the acquisition yield in terms of cloud-free imagery per satellite revisit. Additionally, EUSI can directly downlink the performed acquisitions for rapid production and dissemination to its customers in as little as 30 minutes.
In partnership with Maxar, EUSI offers eligible Copernicus users access to worldwide tasking of the WorldView constellation and a global VHR satellite imagery archive of more than 125 petabytes, dating back as far as 1999. Both offerings are made on a 24/7 basis through a dedicated emergency ordering desk. Additionally, tasking and archive data requests from Copernicus users are managed at highest priority to ensure fast response and order turnaround times. Thanks to these premium services and space assets, European Space Imaging and Maxar have developed into a major provider of optical VHR remote sensing data with a resolution of 0.5m and better to the Copernicus programme over the past years, notably for the Copernicus core services for Security, Emergency Management and Land Monitoring.
The presentation will provide an overview of European Space Imaging's VHR imaging capabilities as well as an outline of the company's contribution to the Copernicus programme so far, in particular major milestones and achievements with regard to service and Earth observation (EO) data provision. Furthermore, EUSI's ongoing contribution to the evolution of the CCM activity will be addressed. On the one hand, this evolution consists of the introduction of new EO data ordering and delivery interfaces, like a REST API and the adoption of the OGC Sensor Planning Service (SPS) protocol, to further standardize and automate the EO data request management process. On the other hand, with the upcoming Maxar WorldView Legion satellite constellation, new VHR imaging assets will be launched and also made available to the Copernicus programme and its users. The constellation, for which the first launch is currently planned for the first half of 2022, will be composed of 6 identical VHR imaging satellites with a panchromatic and an 8-band multispectral instrument capable of collecting at a spatial resolution of 0.29m at nadir. While 2 of the Legion satellites will be placed into conventional sun-synchronous orbits, the other 4 will fly in mid-inclined orbits, enabling the complete WorldView constellation to revisit the same target in mid-latitude areas up to 15 times per day, from morning to late afternoon. These new intra-day monitoring capabilities, as well as the significantly increased large-area mapping capacity at highest spatial resolution that the 6 Legion satellites will add to the existing WorldView constellation, will be outlined too.
Introduction
RADARSAT-2 has been a Copernicus Contributing Mission since 2011. During this time, RADARSAT-2 has acquired more than 24,000 images in support of Copernicus, mostly in support of Core Services, in particular for maritime and sea ice monitoring. RADARSAT-2 has also supported Copernicus Security Services and Emergency Management Services, including emergency activation in response to floods, storms and wildfires. The quality of RADARSAT-2 products and the reliability and operational responsiveness of the RADARSAT-2 system have been key reasons for the success of RADARSAT-2 as a Copernicus Contributing Mission.
Building upon the substantial heritage of the RADARSAT program, MDA is now developing SARNext, a next-generation commercial SAR mission that will provide continuity for current users of RADARSAT-2, including Copernicus and other European customers, and better address emerging needs of the geointelligence market. SARNext combines best-of-breed features of RADARSAT-2 and the RADARSAT Constellation Mission (RCM) to provide enhanced capabilities to existing and new users. Significant SARNext innovations result in improved access, better revisit, broader swath coverage, lower noise, less data compression, faster data rates, and finer resolution. As of April 2022, SARNext development is approaching the mission-level critical design review.
This paper will provide a high-level description of the current design of the SARNext mission including orbit, imaging geometry, modes, and operations. Some mission features described here may be subject to change.
SARNext Orbit
To better address strong commercial demand for SAR imagery at low to mid latitudes, SARNext will use a medium-inclined (53.5°) orbit rather than the near-polar orbits used by many previous SAR missions. The orbit altitude (~600 km) is similar to that of RCM. A repeat cycle of just under 10 days has been selected to balance full access between ±62.5° latitude (89% global access) with revisit time, incidence angle diversity and change detection latency. This medium-inclined orbit is not sun-synchronous: the local time at nadir will decrease by about 20 minutes per day. The novel orbit inclination and varying local time will change how and when we observe the world with SAR.
The shift to an inclined orbit raises challenges on power and thermal management of the satellite payload and platform. These challenges are being resolved through both satellite design and concept of operations.
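As a rough plausibility check of the quoted local-time drift, the following Python sketch (our own back-of-the-envelope calculation, not taken from the SARNext design documents) evaluates the standard J2 nodal-precession formula for a circular 600 km, 53.5° orbit and converts the result into a local-time drift at the node:

```python
# J2 nodal precession for a circular orbit, plus the Sun's mean motion,
# gives the drift of local time at the ascending node.
import math

MU = 398600.4418      # km^3/s^2, Earth's gravitational parameter
RE = 6378.137         # km, Earth equatorial radius
J2 = 1.08263e-3       # Earth's second zonal harmonic

alt, inc = 600.0, math.radians(53.5)          # assumed orbit parameters
a = RE + alt
n = math.sqrt(MU / a**3)                      # mean motion, rad/s
node_rate = -1.5 * J2 * n * (RE / a)**2 * math.cos(inc)   # rad/s
node_deg_day = math.degrees(node_rate) * 86400            # deg/day

sun_deg_day = 360.0 / 365.2422                # Sun's mean motion, deg/day
drift_min_day = (node_deg_day - sun_deg_day) / 15.0 * 60  # 15 deg = 1 hour
print(f"node precession: {node_deg_day:+.2f} deg/day, "
      f"local-time drift: {drift_min_day:+.1f} min/day")
```

With these values the node precesses by about -4.3° per day; subtracting the Sun's mean motion of ~0.99° per day yields a local-time drift of roughly -21 minutes per day, consistent with the figure quoted above.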
Imaging Geometry and Modes
SARNext will provide left- and right-looking imaging with a 700 km accessible swath. Incidence angles range between 25° and 64°. Due to increased power, aperture and downlink, SARNext modes are designed to provide marked improvements in resolution, swath width and sensitivity as compared to equivalent RADARSAT-2 and RCM modes.
SARNext imaging modes include:
- ScanSAR modes designed specifically for detecting vessels of minimum length 50 m (500 km swath), 25 m (450 km) and 15 m (250 km) in sea state 5
- General purpose ScanSAR modes ranging between 30 m and 140 m resolution and 250 km and 700 km swath, for wide area marine applications such as oil spill detection
- Stripmap modes providing 8 m (120 km – 180 km), 5 m (100 km - 180 km) and 3 m (50 km) resolution
- A Spotlight mode providing 3 m × 1 m resolution (10 km × 7 km) [ground range × azimuth]
The attached Figure lists swath widths of these imaging modes. Single, dual and compact-polarisation will be available with all modes except for high incidence vessel detection modes, which are only available with single polarisation.
SARNext Operations
In keeping with RADARSAT heritage, SARNext will be used extensively for maritime surveillance and other time critical applications, for example, land intelligence and disaster response.
SARNext will support Near Real Time (NRT) applications with high reliability of tasking, acquisition, downlink, processing, image quality and delivery. The system design allows for fast tasking, and simultaneous imaging and downlink with guaranteed priority collections. SARNext tasking will make frequent use of left/right slews to better respond to customer orders. End users can task and receive the imagery they need when they need it.
SARNext will support Fast Tasking under one hour through the Canadian Headquarters System and an extensive network of Global Ground Stations. Downlink will also use this same network as well as dedicated client network stations. SARNext will provide a 15-minute Vessel Detection Service when in contact with any of the Global Ground Stations. A global AIS service will be integrated for NRT Dark Target Detection.
With 20 minutes of imaging capacity per orbit, SARNext is also capable of pre-planned systematic collections for monitoring applications (e.g., forestry, mining, pipelines) including interferometry. SARNext will be maintained in an orbital tube suitable for interferometric exploitation of stripmap and spotlight data. Given the wide coverage of stripmap modes we do not see a need for SARNext to support ScanSAR interferometry. Interferometric observations from the inclined SARNext orbit will be along line of sight vectors that are not available from near polar orbiting SARs. Hence, SARNext will measure novel and complementary surface movement components, potentially supporting full 3-D monitoring of surface movement.
Conclusion
SARNext will provide continuity of C-band SAR data for current RADARSAT-2 users with significant enhancements in terms of frequency and extent of coverage at mid to low latitudes, image quality and responsiveness of operations.
"The Antarctic and Greenland ice sheets are major contributors to today’s sea level budget
and their future evolution constitute the largest source of uncertainty when projecting global
sea level change under future emission and socio-economic pathways. Both ice sheets
interact strongly with neighbouring systems, reacting and impacting with the atmosphere and
ocean, affecting circulation, energy budget or bio-chemical cycles.
Understanding and replicating this complex and integrated system, and projecting its
behaviour under changing conditions, requires a vast quantity of observations, coupled with
powerful algorithms and numerical simulation combining artificial intelligence and physics.
It is in this context that the 4DAntarctica, 4DGreenland, Digital Twin Antarctica and Digital
Twin Greenland projects have been conceived, seeking to advance the current state of
knowledge on the hydrology of both the Greenland and Antarctic Ice Sheet, by capitalising
on the latest advances in Earth Observation data, AI, and numerical simulation. The digital
twin component of the two 4D projects are seeking to demonstrate how these various
elements can be consolidated into a dynamic, digital replica of our planet which accurately
mimics Earth’s behaviour and responds to the challenges set by Europe’s on the Destination
Earth initiative.
In this presentation we will synthesize the work performed as part of these 4 projects and
discuss specific avenues to further develop digital twins of the ice sheets, further exploiting
the synergies between observations, models and artificial intelligence."
"Ice temperature within the ice is a crucial characteristic to understand the Antarctic ice sheet evolution because temperature is coupled to ice flow. Since temperature is only measured at few locations in deep boreholes, we only rely on numerical modelling to assess ice sheet-wide temperature. However, the design of such models leads to a number of challenges. One important difficulty is that the temperature field strongly depends on the geothermal flux which is still poorly known (Burton-Johnson et al, 2020). Another point is that up to now there is no fully suitable model, especially for inverse approaches: i)analytical solutions are only valid in slowly flowing regions; ii)models solving only the heat equation by prescribing geometry and ice flow do not take into account the past changes in ice thickness and ice flow and do not couple ice flow and temperature. Conversely, 3D thermomechanical models that simulate the evolution of the ice sheet take into account all the relevant processes but they are too computationally expensive to be used in inverse approaches. Moreover, they do not provide a perfect fit between observed and simulated geometry (ice thickness, surface elevation) for the present-day ice sheets and this affects the simulated temperature field. In order to speed-up the simulation of present-day ice temperature a numerical emulator based on deep neural network (DNN), of the thermomechanically coupled ice sheet models GRISLI (Quiquet et al. 2018) has been developed. We use GRISLI outputs that come from 4 simulations, each covers 900000 years (8 glacial-interglacial cycles) to get rid of the initial configuration influence. The simulations differ by the geothermal flux map used as boundary condition. Finally, a database is built where each ice column for each simulation is a sample used to train the DNN. For each sample, the input layer (precursor) is a vector of the present-day characteristics: ice thickness, surface temperature, geothermal flux, accumulation rate, surface velocity and surface slope. The predicted output (output layer) is the vertical profile of temperature. In the training, the weights of the network are optimized by comparison with the GRISLI temperature.
The first results are very encouraging, with an RMSE of ~0.6 °C. The computational time of GRISLI-DNN for generating the temperature field of the whole of Antarctica (16,000 columns) is about 20 s.
The first application of the emulator is to combine it with ESA's SMOS satellite observations to infer the 3D temperature field and improve our knowledge of the geothermal flux. Indeed, it has been shown that SMOS data, coupled with glaciological and electromagnetic models, give an indication of temperature in the upper 1000 m of the ice sheet. Studies to apply the same concept at lower microwave frequencies (i.e. 0.4 GHz) and expand the temperature retrieval capabilities are also ongoing. Moreover, the emulator could also be used for the initialization of computationally expensive ice sheet models.
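For readers unfamiliar with such emulators, the following PyTorch sketch illustrates the general construction described above: a small fully connected network mapping the six column predictors to a discretised vertical temperature profile. The layer sizes, number of vertical levels and dummy training data are our own assumptions, not the authors' configuration.

```python
# Minimal sketch of a column-wise DNN emulator: six present-day predictors
# in, a vertical temperature profile out, trained against model output.
import torch
import torch.nn as nn

N_IN, N_LEVELS = 6, 21   # predictors; vertical levels (both illustrative)

emulator = nn.Sequential(
    nn.Linear(N_IN, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_LEVELS),        # temperature at each vertical level
)

# Dummy batch standing in for the GRISLI-derived database:
# [thickness, surface T, geothermal flux, accumulation, velocity, slope]
x = torch.randn(512, N_IN)
y_true = torch.randn(512, N_LEVELS)  # GRISLI temperature profiles (dummy)

opt = torch.optim.Adam(emulator.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(100):                 # a few optimisation steps
    opt.zero_grad()
    loss = loss_fn(emulator(x), y_true)
    loss.backward()
    opt.step()
```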
Beneath the ice sheets of Greenland and Antarctica lies an extensive hydrological system, which includes networks of often highly dynamic and interconnected subglacial lakes. These lakes have the capacity to store, and episodically release, meltwater, thereby modulating the flow of water beneath the ice and, at times, altering the dynamics of the overlying ice sheet itself. Subglacial lakes beneath the Greenland Ice Sheet are of particular interest because, unlike their Antarctic counterparts, they are likely to be more closely connected to the surface hydrological system. Thus, as Earth's climate warms, generating increasing fluxes of surface meltwater, the distribution and dynamics of Greenland's subglacial lakes may be expected to evolve too. Whilst the network of Antarctic subglacial lakes is relatively well studied, very little is known about the existence and dynamics of lakes beneath the Greenland Ice Sheet. Despite theoretical predictions suggesting that more than 1600 lakes may exist, only a tiny number (64) have been identified to date, with fewer than 10 of these having been observed to actively discharge water. This paucity of observations is primarily due to the smaller size of Greenlandic subglacial lakes, which presents an observational challenge for traditional altimetry-based satellite techniques, which offer relatively coarse resolution or spatial sampling. Here we assess the use of new streams of high-resolution satellite data to reveal insight into Greenland's subglacial lakes, and consider the potential for these datasets to inform subglacial hydrological models within a future Digital Twin of Greenland. Specifically, we present the results and lessons learned from a pilot study that analysed 35,000 super high resolution (2 metre) Digital Elevation Models to search for signatures of subglacial lake drainage and filling. Additionally, we consider the value added by complementary data streams, including estimates of surface deformation derived from Synthetic Aperture Radar imagery and altimetry, to consider how a future Greenland Digital Twin may leverage large and diverse datasets in order to inform models and, in turn, deliver new insight into Greenland's elusive subglacial hydrological system.
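The core operation of such a DEM-based search is elevation differencing followed by anomaly screening. The following Python sketch (with synthetic arrays and an assumed threshold, not the study's actual pipeline) illustrates the idea:

```python
# Difference two co-registered DEM tiles and flag localised surface
# lowering/uplift that could indicate subglacial lake drainage or filling.
import numpy as np

def lake_candidates(dem_t0, dem_t1, threshold_m=2.0, min_pixels=50):
    """Return a boolean mask of anomalies with |dh| > threshold."""
    dh = dem_t1 - dem_t0                       # elevation change, metres
    anomaly = np.abs(dh) > threshold_m
    # Require a minimum anomaly size to suppress single-pixel noise
    # (a real pipeline would use connected-component labelling,
    # e.g. scipy.ndimage.label, plus shape criteria).
    return anomaly if anomaly.sum() >= min_pixels else np.zeros_like(anomaly)

dem_a = np.random.rand(500, 500) * 0.3         # dummy 2 m DEM tiles
dem_b = dem_a.copy()
dem_b[200:240, 300:340] -= 4.0                 # synthetic 4 m drawdown
mask = lake_candidates(dem_a, dem_b)
print(mask.sum(), "pixels flagged")
```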
The Antarctic Ice Sheet is a key component of the Earth system, impacting global sea level, ocean circulation and atmospheric processes. Meltwater is generated at the ice sheet base primarily by geothermal heating and friction associated with ice flow, and this feeds a vast network of lakes and rivers, creating a unique hydrological environment. Subglacial lakes play a fundamental role in the Antarctic ice sheet hydrological system because outbursts from ‘active’ lakes can trigger (i) changes in ice speed, (ii) a burst of freshwater input into the ocean which generates buoyant meltwater plumes, and (iii) the evolution of glacial landforms and subglacial habitats. Despite the key role that subglacial hydrology plays in the ice sheet environment, there are limited observations of repeat subglacial lake activity, resulting in poor knowledge of the timing and frequency of these events. Even rarer are examples of interconnected lake activity, where the draining of one lake triggers the filling of another. Observations of this nature help us better characterise these events and the impact they may have on Antarctica's hydrological budget, and will advance our knowledge of the physical mechanisms responsible for triggering this activity. In this study we analyse 9 years of CryoSat-2 radar altimetry data to investigate a newly identified subglacial network in the Amery basin, East Antarctica. CryoSat-2 data were processed in ‘swath mode’, increasing the density of elevation measurements across the study area. The plane-fit method was employed in 500 m by 500 m grid cells to measure surface elevation change at relatively high spatial resolution. We identified a network of 10 active subglacial lakes in the Amery basin. Seven of these lakes, located below Lambert Glacier, show interconnected hydrological behaviour, with filling and drainage events throughout the study period. We observed ice surface height changes of up to 6 metres on multiple lakes, and these observations were validated by independently acquired TanDEM-X DEM differencing. We then use these observations in conjunction with simulations of meltwater production and transport at the base of the ice sheet and across the grounding line to constrain subglacial fluxes and explore their impact on ocean circulation and ice-ocean interaction. This case study is an important decade-long record of hydrological activity beneath the Antarctic Ice Sheet which demonstrates the importance of high-resolution swath mode measurements. In the future, the Lambert lake network will be used to better understand the filling and draining life cycle of subglacial hydrological activity under the Antarctic Ice Sheet.
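The plane-fit method mentioned above can be illustrated with a short least-squares example: within each 500 m × 500 m cell, elevation is modelled as a plane in space plus a linear trend in time, and the temporal coefficient is the elevation rate. The sketch below uses synthetic data and is our own minimal rendering of the approach, not the study's processing code:

```python
# Fit h = h0 + a*x + b*y + c*t to the elevation points inside one grid
# cell; the coefficient c is the surface elevation rate dh/dt.
import numpy as np

def plane_fit_rate(x, y, t, h):
    """Least-squares plane-plus-trend fit; returns dh/dt in m/yr."""
    A = np.column_stack([np.ones_like(x), x, y, t])
    coeffs, *_ = np.linalg.lstsq(A, h, rcond=None)
    return coeffs[3]                      # the temporal term

# Synthetic cell: a tilted surface rising at 0.5 m/yr plus noise
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 500, 2000), rng.uniform(0, 500, 2000)
t = rng.uniform(0, 9, 2000)               # years of CryoSat-2 coverage
h = 1200 + 0.01 * x - 0.02 * y + 0.5 * t + rng.normal(0, 0.3, 2000)
print(f"dh/dt = {plane_fit_rate(x, y, t, h):.2f} m/yr")   # ~0.50
```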
"Around the periphery of the Greenland and Antarctic Ice Sheets, networks of supraglacial lakes and streams form each summer, in response to seasonal surface melting. The nature, extent and dynamics of these surface hydrological systems are important because it affects the transport of freshwater towards the coast, and can impact upon factors such as ice dynamics. With the launch of operational missions carrying optical sensors, such as Sentinel-2, there is the opportunity to monitor this system at weekly periodicity around the entirety of the ice sheet margins each summer. However, conventional mapping approaches require extensive manual post-processing to remove false positives, which makes them infeasible within the context of a Digital Twin.
Here, we consider more automated approaches, which have been investigated within the 4D Greenland and 4D Antarctica studies and are better suited for implementation within a future Digital Twin. Specifically, we evaluate the potential of Machine Learning approaches, including a Random Forest algorithm, which are trained to separate surface water from non-water features in a pixel-based classification. We will assess their performance relative to conventional approaches, including their spatial and temporal transferability, and investigate performance across the margins of both the Greenland and Antarctic Ice Sheets. Finally, we shall consider the computational requirements and lessons learnt during this work, within the context of a future Ice Sheet Digital Twin.
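A minimal example of such a pixel-based Random Forest classification, using scikit-learn with synthetic reflectance samples in place of real Sentinel-2 training data, might look as follows:

```python
# Train a Random Forest on per-pixel band values and classify an image tile.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Features per pixel: e.g. blue, green, red, NIR reflectance (synthetic)
X_water = rng.normal([0.05, 0.06, 0.04, 0.02], 0.01, (500, 4))
X_land = rng.normal([0.08, 0.10, 0.12, 0.30], 0.03, (500, 4))
X = np.vstack([X_water, X_land])
y = np.array([1] * 500 + [0] * 500)        # 1 = water, 0 = non-water

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Classify a (rows, cols, bands) image tile pixel by pixel
tile = rng.normal(0.1, 0.05, (64, 64, 4)).clip(0, 1)
water_mask = clf.predict(tile.reshape(-1, 4)).reshape(64, 64)
print("water fraction:", water_mask.mean())
```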
"The ESA 4DGreenland project has the objective of performing an integrated assessment of Greenland’s hydrology through maximizing the use of Earth observation (EO) data. Not all aspects of Greenland hydrology are observable from EO, and models are still required to close the integrated assessment. Here, we give initial thoughts on how we can build on the vast observational datasets generated within 4DGreenland and progress towards a digital twin of the Greenland ice sheet to bridge the gap between models and EO data.
The diversity of the gathered EO dataset provides an ideal playground for investigating hidden features within the data using AI or providing needed standardized training data for feature modeling. One such example, is the use of auxiliary EO datasets in the conversion of radar altimeter derived volume change of the Greenland ice sheet into mass balance. This study showed the prospects of data fusion to elevate our knowledge of ice sheet behavior and relay less on numerical earth system models. In this presentation we outline the requirements for such a digital twin, and how we foresee that EO data and models can be combined to fulfil those requirements.
"
The European Organization for the Exploitation of Meteorological Satellites (EUMETSAT) provides a prototype Data Cube for Drought and Vegetation Monitoring. This prototype consists of long-term data records on a regular latitude/longitude grid in CF-compliant netCDF, using a data model consistent with the Copernicus Climate Change Service Data Store, and is provided via THREDDS. Tools to manipulate the data in the cube are provided along with it.
The prototype seeks to explore how well EUMETSAT and partners can bring together data from multiple sources and from multiple grids to ease barriers to use of the data for thematic applications.
The cube was created using the EUMETSAT Data Tailor (https://www.eumetsat.int/data-tailor) and includes parameters for drought and vegetation monitoring: various vegetation parameters (NDVI, Fractional Vegetation Cover, Leaf Area Index, Fraction of Absorbed Photosynthetically Active Radiation), global radiation, direct normalized solar radiation, sunshine duration, land surface temperature, reference evapotranspiration, soil wetness index in the root zone, precipitation, and 2-m air temperature. These data come from the portfolio of EUMETSAT's Satellite Application Facilities (SAFs) on Climate Monitoring (CM SAF), Land Surface Applications (LSA SAF) and Support to Operational Hydrology and Water Management (H SAF), as well as data provided by the Global Precipitation Climatology Centre (GPCC) and ERA5 data from Copernicus / the European Centre for Medium-Range Weather Forecasts (ECMWF). The time period covered by each of the data records differs, as they have different starting dates; the earliest available starting date has been chosen for each record. As this is a static cube, it has a defined end date and is not updated with near-real-time data.
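Because the cube is served as CF-compliant netCDF via THREDDS, it can in principle be opened lazily with standard tools such as xarray over OPeNDAP. The sketch below illustrates this pattern; the endpoint URL and the variable name are placeholders, not the actual EUMETSAT service:

```python
# Open a THREDDS-served cube lazily via OPeNDAP and compute a monthly
# NDVI anomaly over a region; URL and variable name are hypothetical.
import xarray as xr

URL = "https://thredds.example.int/dodsC/drought_cube.nc"  # placeholder
ds = xr.open_dataset(URL)                  # lazy open, no bulk download

ndvi = (ds["NDVI"]                         # variable name is an assumption
        .sel(lat=slice(60, 45),            # slice order depends on the grid
             lon=slice(0, 20),
             time=slice("2018-04-01", "2018-09-30")))
clim = ndvi.groupby("time.month").mean("time")
anomaly = ndvi.groupby("time.month") - clim
print(anomaly)
```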
It takes effort to make such cubes: is it worth it, and what is the future? This presentation reports on the lessons learnt regarding the creation, provision and use of the data cube. We demonstrate the tools that are provided together with the cube, summarize first user feedback and share our ideas for the future.
The available information on our environment derived from Earth Observation and other sources, like mathematical models and in situ measurements, has been ever growing, with no apparent slowdown in sight. While this data richness unlocks unprecedented research possibilities and allows for a holistic understanding of our planet, it also necessitates new technological approaches for the joint exploitation of numerous data streams. Despite considerable efforts towards standardisation, data formats and models as well as interfaces for data access remain diverse, requiring costly solutions for harmonisation to be developed and maintained. The open-source Python package xcube addresses such requirements and offers comprehensive tools for transforming arbitrary data sets into analysis-ready data cubes, as well as a growing suite of tools for their exploitation.
xcube provides a plugin-based store framework to integrate data sources served via web APIs or via different storage types. By this means, lazy data cube views resembling the Common Data Model, well known from NetCDF, are created, greatly facilitating convenient on-the-fly access to large data repositories. Data cubes can be persisted using xcube Generator, which allows tailored configurations of the transformation process, including the application of arbitrary source code to cube variables. xcube is based on Python's popular data science stack, particularly on xarray, dask, and zarr, and extends the functionalities of these packages with methods for typical operations on geographical data cubes, for example masking and clipping, also with arbitrary vector shapes.
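A minimal usage sketch of the store framework is given below; the API calls follow the xcube documentation as we understand it at the time of writing and should be checked against the current release, and the store root and cube name are placeholders:

```python
# Open a data store and obtain a lazy, xarray-based cube view.
from xcube.core.store import new_data_store

# "file" is one of the built-in store types; others (e.g. object storage)
# are provided via plugins registered with the store framework.
store = new_data_store("file", root="cubes/")        # hypothetical root
cube = store.open_data("demo_cube.zarr")             # lazy xarray.Dataset

# From here on, standard xarray/dask operations apply, e.g. clipping:
subset = cube.sel(lat=slice(54, 48), lon=slice(4, 15))
print(subset)
```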
The xcube ecosystem offers far more than data access and transformation functionality. xcube Server can provide tiles or time series from data cubes, both as images (also according to OGC WMTS) and as raw data through an S3-compatible REST API, facilitating the integration into existing applications and workflows. The most widely used application is currently xcube Viewer, a web viewer with image and time-series visualisations and a functionality to visualise the results of simple processing workflows on the fly. cate, the web-based toolbox of ESA's Climate Change Initiative, uses xcube in the back end as well, leveraging the software's outstanding capabilities to handle large, gridded data sets.
We will present several successful activities and real-life examples from different application areas, which all rely on the xcube ecosystem to reach their objectives. Examples include the Euro Data Cube, which offers generic, automated services to everyone, Brockmann Consult's operational water quality services for various European institutions and companies, and several research activities using xcube for their processing and machine learning tasks. The presentation will conclude with a glimpse of xcube's packed roadmap for the coming months.
Earth System Science (ESS), as the name suggests, adopts a holistic approach to describing, understanding, and even predicting the dynamics of the planet's complex system. ESS is truly interdisciplinary, involving numerous natural and social sciences, and, as a quantitative discipline, very data hungry. The unprecedented growth in available data on our environment from Earth Observation and other sources has made possible many exciting research approaches, but at the same time has led to an ever-increasing effort required to establish data access and make heterogeneous data streams ready for joint analysis. As a consequence, researchers see themselves confronted with challenging engineering tasks in addition to the scientific challenges.
The “Earth System Data Lab” (ESDL), a recently completed ESA activity, has been addressing this issue by offering a comprehensive Earth System Data Cube with tens of relevant variables for ESS. In addition, a virtual laboratory offered a ready-to-use environment for analysis and processing of the data cube. Four sophisticated use cases have been successfully implemented, demonstrating the wide range of applications enabled by the ESDL approach. Likewise, a group of Early Adopters from different disciplines have been implementing self-consistent projects, some involving the more than 80 variables in the cube, while others only used a small subset of the data offer.
Besides major technical developments, the main achievement of the project has clearly been the scientific output. Numerous presentations and 14 manuscripts from various researchers had been prepared, submitted, or accepted by the end of the contract, and more followed after the end of the activity. This success underlines the scientific potential to be unleashed by removing major technical obstacles from the research process. At the end of the activity, the lessons learned were clear, also thanks to valuable feedback from the heterogeneous user community. While the strict data cube approach adopted in ESDL, i.e. one cube with a static grid, has clear advantages for specific empirical approaches, it is too rigid and involves considerable modifications of the original data. Several users therefore asked for customisable cube generation, in terms of the data to be included, the target grid, the pre-processing algorithms to be applied and other aspects. Also, the long-term perspective has been a frequent question from users facing a decision on an infrastructure for their research. In terms of scientific evolution, the application of state-of-the-art deep learning approaches to ESS questions will clearly be needed in the future, and the service will need to be optimised to better support such applications. These lessons learned have been favourably received by the Agency and included as requirements in a recently closed tender named Deep ESDL, which will perpetuate the success story of data cubes in Earth System Science.
With a steadily increasing volume of freely available satellite imagery, novel solutions like Earth Observation Data Cubes (EODC) and Analysis Ready Data (ARD) are growing in importance. These topics are not only accompanied by an evolving ecosystem of open tools and standards, but also a heightened interest of users to explore the data by performing dense time series analyses over various spatial scales. While commercial cloud-based platforms can offer analysis on national to global scales, leveraging already available computing resources (e.g., a university’s High Performance Computing system) to perform analysis over regional areas of interest will continue to be of considerable importance in the near future for many users.
Even if the necessary computing resources are available, the processing of Earth Observation data to a high-quality level suitable for long time series analysis requires expert knowledge, which is particularly true for Synthetic Aperture Radar (SAR) data. Image artifacts remaining after single-image processing will contaminate time series and are difficult, if not impossible, to remove in multi-dimensional ARD cubes. Additionally, the choice of spatial reference system, pixel grid, resampling, file format and image tiling plays a large role in optimally storing data for efficient access and computation. However, if these challenges are overcome, an increased statistical robustness of numerous applications is possible, such as the derivation of forest cover [1], the mapping of wetland characteristics [2] and land cover seasonality characterization [3]. Moreover, the development of more complex filtering approaches preserving a higher level of spatial detail than previous methods is possible [4].
In particular, the continuously growing SAR user community can greatly benefit from high-quality ARD, such as the newly proposed Sentinel-1 Normalised Radar Backscatter (S1-NRB) product, which is intended to be a global and consistently processed ARD product that aligns with the NRB specification [5] proposed by the CEOS Analysis Ready Data for Land (CARD4L) initiative. It offers high-quality, radiometrically enhanced SAR backscatter data as well as ancillary data layers, conversion layers for different backscatter conventions and extensive metadata. Furthermore, the S1-NRB product implements technological developments, such as Cloud Optimized GeoTIFF (COG) and SpatioTemporal Asset Catalog (STAC).
The availability of an ARD product can greatly accelerate data preparation, but users face various challenges regarding the analysis of multi-temporal data cubes. During analysis of SAR time series, for example, the question of time series composition eventually arises regardless of application. Selected data acquisition characteristics, like orbit and incidence angle, often limit possible mapping applications. The time series of an individual pixel can include measurements situated in the near, mid or far range of each SAR scene in the stack, depending on the respective acquisition orbit. These phenomena need to be accounted for during analysis, and thus a tradeoff between temporal density and variability is often inevitable.
As part of assessing the quality and handling of the S1-NRB product, the variability of backscatter time series was systematically quantified over different land cover classes for an area of regional scale, with the aim of guiding future SAR data cube users in the choice of data applicable to their individual use cases. Different combinations of, amongst others, acquisition orbit, track and frame were investigated by computing multi-temporal statistics and quantifying their differences. We intend to present the most important results of this study and, furthermore, to show how a collection of S1-NRB scenes can easily be accessed as an on-the-fly data cube.
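To illustrate the last point, the following sketch assembles an on-the-fly cube from S1-NRB scenes exposed as STAC items with COG assets, using pystac-client and odc-stac; the catalogue URL, collection identifier and band names are placeholders rather than an operational endpoint:

```python
# Search a STAC catalogue for S1-NRB items and stack them lazily into a
# (time, y, x) cube for multi-temporal statistics.
import numpy as np
import pystac_client
import odc.stac

catalog = pystac_client.Client.open("https://stac.example.org")  # placeholder
items = catalog.search(
    collections=["sentinel-1-nrb"],          # assumed collection id
    bbox=[10.0, 50.0, 12.0, 52.0],
    datetime="2021-01-01/2021-12-31",
).item_collection()

# Lazily stack all scenes into one cube of backscatter measurements;
# band/measurement names depend on the product metadata.
cube = odc.stac.load(items, bands=["vv", "vh"], chunks={})

# Example multi-temporal statistic per pixel, in dB
vv_db = 10 * np.log10(cube["vv"])
print(vv_db.mean("time"))
```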
[1] Heckel, K., Urban, M., Schratz, P., Mahecha, M.D., & Schmullius, C. (2020). Predicting Forest Cover in Distinct Ecosystems: The Potential of Multi-Source Sentinel-1 and -2 Data Fusion. Remote Sensing, 12. https://doi.org/10.3390/rs12020302.
[2] Slagter, B., Tsendbazar, N.-E., Vollrath, A., Reiche, J. (2020). Mapping wetland characteristics using temporally dense Sentinel-1 and Sentinel-2 data: A case study in the St. Lucia wetlands, South Africa. International Journal of Applied Earth Observation and Geoinformation, 86. https://doi.org/10.1016/j.jag.2019.102009.
[3] Dubois, C., Mueller, M.M., Pathe, C., Jagdhuber, T., Cremer, F., Thiel, C., & Schmullius, C. (2020). Characterization of Land Cover Seasonality in Sentinel-1 Time Series Data. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, V-3-2020, 97-104. https://doi.org/10.5194/isprs-annals-V-3-2020-97-2020.
[4] Cremer, F., Urbazaev, M., Berger, C., Mahecha, M.D., Schmullius, C., & Thiel, C. (2018). An Image Transform Based on Temporal Decomposition. IEEE Geoscience and Remote Sensing Letters, 15, 537-541. https://doi.org/10.1109/LGRS.2018.2791658.
[5] CEOS (2021). Analysis Ready Data For Land: Normalised Radar Backscatter. Version 5.5. https://ceos.org/ard/files/PFS/NRB/v5.5/CARD4L-PFS_NRB_v5.5.pdf.
The global deterioration of the quality of the air we breathe is a pressing concern for citizens, scientists and policymakers, as it has been identified as a major threat to health and climate. The scientific community agrees that poor air quality is responsible for around 9% of deaths worldwide each year. In this context, understanding the physicochemical dynamics of trace gases and air pollutants, analysing emissions from both human and natural processes, and continuously monitoring ambient pollutant concentrations are key tasks for achieving better ambient air quality, as requested by the United Nations 2030 Agenda through the Sustainable Development Goals (SDGs), e.g. SDGs 3, 11 and 13, which directly address well-being, sustainable communities and climate action.
Nowadays, measurements from both ground and satellite sensors are employed in air quality monitoring and analysis. Combining these data sources, rather than relying solely on traditional ground-sensor observations, has allowed scientists to study air pollution in areas where no sensors are present. Combining ground and satellite observations in air quality studies can nevertheless be challenging due to the time and effort required to integrate and manipulate generally large volumes of heterogeneous data. However, innovative data exploitation tools supporting these intensive handling and computational operations have now reached a level of maturity that allows for complex environmental analysis tasks, such as the concurrent use of ground and satellite sensor observations in air quality monitoring.
An example of the above is the data cube. Data cubes are infrastructures designed to store multi-layered datasets that provide information to the user in a homogeneous format. When data is organized and accessed in data cubes, the integration effort compared to conventional preprocessing procedures is drastically reduced, allowing the user to spend more time on post-processing and analysis. One of the most popular data cube implementations is the Open Data Cube (ODC, https://www.opendatacube.org). ODC provides facilities that act as an intermediary layer between satellite Earth Observation (EO) data and end users. It is open-source software released under the Apache 2.0 license, consisting of a set of Python tools that help the user explore and interact with satellite data. The software provides command-line applications, built-in statistical analysis tools, a web user interface, a graphical data explorer and support for Jupyter Notebooks, which serve as an interface for developing custom applications. Furthermore, ODC supports the Open Geospatial Consortium (OGC) standards for data publishing, thus allowing integration into most geospatial software frameworks. Currently, more than 100 countries are developing national data cube platforms based on ODC. Successful ODC implementations include Digital Earth Australia (https://www.ga.gov.au/dea), Digital Earth Africa (https://www.digitalearthafrica.org), the Swiss Data Cube (https://www.swissdatacube.org) and the Vietnam Open Data Cube (http://datacube.vn).
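As a minimal illustration of the ODC Python API mentioned above, the sketch below queries a hypothetical local deployment; the product name, extent and grid are placeholder assumptions.

```python
# Minimal sketch of querying an ODC instance from Python; the product name
# and extents are placeholders for whatever has been indexed locally.
import datacube

dc = datacube.Datacube(app="air-quality-demo")
ds = dc.load(
    product="s5p_no2_l2",              # hypothetical indexed product
    x=(8.5, 11.5), y=(44.8, 46.6),     # Lombardy bounding box (lon/lat)
    time=("2020-01-01", "2021-04-14"),
    output_crs="EPSG:4326",
    resolution=(-0.05, 0.05),
)
print(ds)  # xarray.Dataset with time/y/x dimensions
```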
The ODC was originally designed to aid access to and analysis of satellite data. One of its main drawbacks is that current deployments only ingest Analysis Ready Data (ARD) products from a few satellite platforms, such as Sentinel-1, Sentinel-2 and Landsat-8. In order to empower ODC applications with additional satellites and, eventually, ancillary data sources, development work is required to adapt ODC routines. With this in mind, the present work proposes the design and implementation of an ingestion pipeline that systematically indexes non-ARD data from satellites currently not supported by the ODC, as well as ancillary data such as ground-sensor observations. The practical use of the developed ODC implementation is then tested by computing correlation analyses between ingested satellite and ground sensor observations.
As a case study, this work focuses on air quality data from Sentinel-5P (the most recent Earth Observation platform of the European Copernicus Programme providing estimates of air pollutants with daily global coverage) and traditional geolocated time series provided by air quality ground stations. The selected study area was the Lombardy region in Northern Italy, one of Europe's most densely inhabited areas, which suffers from severe air pollution. The region is a pollution hotspot due to its unique micro-climatic characteristics, including wind channelling along the Po River valley and frequent thermal inversions in mountainous places, which prevent pollutants from dispersing properly in the lower atmosphere. Air quality ground observations for Lombardy are provided by the Lombardy Regional Environmental Protection Agency (ARPA), which manages the local authoritative environmental sensor network. Nitrogen dioxide (NO2) was selected as the target pollutant because tropospheric estimates are provided by Sentinel-5P and concentration records are available from the ARPA sensors. Furthermore, NO2 emissions in the lower atmosphere are mainly connected to combustion processes from domestic heating, transportation and industrial activities, which are all widespread in the study area.
An example of correlation analysis between the ARPA Lombardia and Sentinel-5P NO2 data was performed by leveraging exclusively the generated ODC products, as follows. The datasets were extracted from the ODC using the Python xarray library and merged into a single pandas dataframe. The analysis covered the period from 1 January 2020 to 14 April 2021. Correlation coefficients were computed for co-located time series of Sentinel-5P and ground sensor observations. The results demonstrated a strong positive correlation between measurements: the Pearson correlation coefficient (rp) had a mean larger than 0.7, with a similar Spearman correlation coefficient (rs). As a complement to the satellite and ground sensor correlation, wind speed measurements were also integrated into the ODC, with the objective of better understanding the dynamics of NO2 under different meteorological conditions. The wind dataset was obtained from measurements performed by the ARPA Lombardia network of weather stations. Correlation between Sentinel-5P measurements and ARPA Lombardia wind speed measurements shows a weak positive correlation (rp = 0.2). Consequently, alternative time periods were tested (e.g. calculating average wind speed at the study points over 12-hour periods). Additionally, seasonality was removed from the series, slightly improving the overall correlation.
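A minimal sketch of the described co-location step is given below, assuming the satellite NO2 field has been loaded from the ODC as an xarray DataArray and the ground record as a pandas Series; names and resampling choices are illustrative.

```python
# Hypothetical inputs: `s5p_no2` is an xarray.DataArray with (time, y, x)
# dimensions and `station` a pandas.Series of ground NO2 with a DatetimeIndex.
import pandas as pd

def colocated_correlation(s5p_no2, station, lon, lat):
    # Sample the satellite pixel nearest to the ground station location
    sat_series = (
        s5p_no2.sel(x=lon, y=lat, method="nearest")
        .to_series()
        .resample("D").mean()
    )
    df = pd.concat(
        {"sat": sat_series, "ground": station.resample("D").mean()}, axis=1
    ).dropna()
    # Pearson (default) and Spearman rank correlation of the daily pair
    return df["sat"].corr(df["ground"]), df["sat"].corr(df["ground"], method="spearman")
```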
On the one hand, the results of this work prove the feasibility of integrating non-ARD data into the ODC using the developed Python-ODC pipeline. On the other hand, the numerical experiments reinforce the importance of developing geospatial software architectures capable of managing heterogeneous data formats, with a particular focus on satellite and ground observations, which are critical for analysing complex phenomena such as air quality.
Future work will aim at integrating other air quality metrics and meteorological datasets that were not considered in this study. These additional data will be ingested into the ODC to complement the existing cube layers. In parallel, the operational use of the developed tools and data will be examined in collaboration with local stakeholders, including ARPA Lombardia. Questions about the computing infrastructure needed to create and publish the ODC instance will also be addressed, ensuring that end users have remote access to an extensive amount of data and analytic tools.
The plasmapause (PP), the outer boundary of the plasmasphere, is coupled to the ionosphere. The PP is a key factor in a number of space weather processes, as it separates different plasma and wave populations, as well as the regions dominated by the co-rotation and convection electric fields. The magnetospheric convection electric field maps down onto the ionosphere along geomagnetic field lines of near-infinite conductivity. The night-side sub-auroral ionosphere also plays a key role in forming the new PP. Poleward of the plasmapause footprint, the ionospheric conductivity is enhanced by particle precipitation from the magnetosphere; in the sub-auroral ionosphere, the conductivity is lower. To maintain current continuity through this lower-conductivity zone, the electric field increases, contributing to plasmasphere erosion. Plasmasphere dynamics therefore cannot be fully understood without understanding the simultaneous processes in the underlying ionosphere.
We are developing a series of tools based on multipoint ground and space observations to monitor plasmasphere dynamics. Swarm observations made in low Earth orbit (LEO) play a key role in the envisaged monitoring system. Swarm provides observations and data products characterising various ionospheric phenomena (namely the mid-latitude ionospheric trough and the boundary of small-scale field-aligned currents) that are linked directly to plasmapause dynamics. The advantage of LEO observations is their cadence: a spacecraft in a polar LEO orbit crosses the PP around 64 times daily. Due to several limiting factors (e.g. seasonal and diurnal variation of the observed phenomena), around 20% of the true crossings result in successful detections. Even so, since the typical timescale of PP dynamics is of the order of hours, we are able to recover the main features of the PP variation. The observed ionospheric boundaries are used to derive a proxy for the midnight plasmapause position (MPP), which in turn helps improve empirical PP and plasmasphere models, as well as nowcasts and forecasts of the state of the plasmasphere.
In this talk, we present the first results of the ESA SSA project PLASMA, which aims at developing plasmasphere products based on multi-point space and ground observations. This effort could benefit from continued Swarm observations: at present, Swarm is the only source of fully automated PP monitoring. The planned near-real-time availability of Swarm data would be a substantial step toward improved space weather forecasting. It would also be important to extend the observational capability both spatially and in time (e.g. by missions such as CSES or NanoMagSat).
The Earth's ionosphere is home to numerous electric current systems driven by the global wind dynamo, gravity, pressure gradients, and coupling to the magnetosphere. Each of these current systems exhibits rich and complex structure on a wide range of both temporal and spatial scales. Improved understanding of the ionospheric current system enables us to predict the effects of space weather events, interpret magnetic field observations on the ground and in space, and probe the conductivity structures of the deep Earth interior.
Over the years, researchers have attempted to understand and model the global ionospheric current system using both data-based and physics-based methods. Data-based approaches have used ground and/or space observations of the magnetic perturbations caused by the electric currents in order to invert for the sources, typically using simplified models of current flow geometries or generic basis functions such as spherical harmonics. Physics-based methods have culminated in state-of-the-art models such as the Thermosphere Ionosphere Electrodynamics General Circulation Model (TIEGCM), which self-consistently simulates the dynamics, energetics, chemistry and electrodynamics of the upper atmosphere. The ionosphere is not an isolated system, and so advanced models like TIEGCM must be driven by inputs representing the dynamics of the lower atmosphere and the high-latitude energy input from the magnetosphere coupled to the ionosphere and thermosphere. Some studies have combined observations with self-consistent modeling, developing data assimilation methods to drive physics-based models with real-time observations from the ground and from space.
The work we describe here is a data assimilation method which focuses on modeling the ionospheric currents in the diurnal frequency band. We utilize a one-year TIEGCM simulation to construct a set of spatial modes which capture the salient 3D structures of the major ionospheric current systems. These spatial modes are then combined with a set of temporal modes derived from the ground observatory network over a 20-year time frame, in order to build a time-continuous 3D model of the global ionospheric current system in the diurnal variation band. This is, in effect, a 4D model of the ionospheric currents and their associated magnetic fields spanning two decades. We will report on the methodology used to build our model, as well as its main features. We will also discuss possible extensions of our method to other frequency bands, in order to capture, for example, higher-frequency variations which could occur during geomagnetic storms.
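The following toy sketch (not the authors' code) illustrates the mode-combination idea under stated assumptions: spatial modes are extracted from a simulated (time, space) current matrix by SVD, and known temporal coefficients reconstruct the time-continuous model.

```python
# Illustrative sketch of combining simulation-derived spatial modes with
# observation-derived temporal coefficients. All arrays are random
# stand-ins with reduced dimensions.
import numpy as np

rng = np.random.default_rng(42)
J_sim = rng.standard_normal((2000, 1500))   # simulated currents: (time, space)

# Leading right singular vectors of the mean-removed simulation give the
# dominant spatial structures of the diurnal current systems.
U, s, Vt = np.linalg.svd(J_sim - J_sim.mean(axis=0), full_matrices=False)
spatial_modes = Vt[:10]                     # (n_modes, space)

# In the real method, temporal coefficients are estimated by fitting the
# modes' predicted magnetic signatures to 20 years of observatory data;
# here we only illustrate the reconstruction once coefficients exist.
coeffs = rng.standard_normal((5000, 10))    # (time_obs, n_modes), stand-in
J_model = coeffs @ spatial_modes            # time-continuous current model
```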
Global and coupled space weather codes allow us to test our physical understanding of the coupled solar wind-magnetosphere-ionosphere system. Scientific and operational codes also allow us to validate hypotheses and pose new scientific questions. In this presentation we report on a number of recently funded EU and Helmholtz Association projects in Germany aimed at describing this complex coupled system, improving predictive capabilities and combining models and observations. Funding for the PAGER, PROGRESS and SWAMI projects is provided by H2020, while the SIM and MAP projects are funded through the Helmholtz AI cooperation unit. In this presentation, we discuss two of these projects, PAGER and MAP.
The Prediction of Adverse effects of Geomagnetic storms and Energetic Radiation (PAGER) project aims to provide space weather predictions initiated from observations of the Sun, and to predict radiation in space and its effects on satellite infrastructure. Real-time predictions and a historical record of the dynamics of the cold plasma density and ring current allow for the evaluation of surface charging, while predictions of the relativistic electron fluxes allow for the evaluation of deep dielectric charging. The project aims to provide a 1-2 day probabilistic forecast of the ring current and radiation belt environments, which will allow satellite operators to respond to predictions that present a significant threat. As a backbone of the project, we use the most advanced codes that currently exist and adapt them to perform ensemble simulations and uncertainty quantification. The project includes a number of innovative tools, including data assimilation and uncertainty quantification, new models of the near-Earth electromagnetic wave environment, ensemble predictions of solar wind parameters at L1, and a data-driven forecast of the geomagnetic Kp index and plasma density. The developed codes may be used in the future for realistic modelling of extreme space weather events.
Satellite technology, and in particular GPS- or GNSS-based systems, is becoming vital for our society. Plasma density structures in near-Earth space can significantly influence the propagation of GPS signals and hence the accuracy of GPS navigation. Moreover, space plasmas can also damage satellites. To carefully evaluate these effects of the space environment, it is important to develop an accurate model of the plasma density based on a variety of direct and indirect measurements. The MAchine learning-based Plasma density model (MAP) project is demonstrating how machine learning tools can be used to produce a real-time global empirical model of the near-Earth plasma density from a variety of measurements.
The ionosphere contains a wide variety of plasma density structures, known as irregularities, whose properties impact the propagation of high-frequency radiation such as radio waves. Resolving the spatial scales of these irregularities, and thus their drivers, is critical to forecasting space weather and its impact. Incoherent Scatter Radars (ISRs) are able to provide plasma density measurements over a relatively wide field of view, making them ideal for probing plasma density variations. However, resolving the spatial scales of irregularities with these instruments can still be challenging. For example, to resolve plasma density spatial scales at a single altitude, steerable-dish ISRs require a long scan duration, making it challenging to interpret observations within the radar field of view. Meanwhile, even though Advanced Modular Incoherent Scatter Radars (AMISRs) can probe multiple locations nearly simultaneously, the spatial separation between different locations measured at the same altitude can be quite large, making it challenging to probe structures down to tens of kilometers.
Here, a novel technique for resolving high-latitude ionospheric irregularity spectra at a higher spatio-temporal resolution than previously possible with ISRs has been developed by leveraging: 1) the ability of phased-array AMISR technology to collect volumetric measurements of plasma density, 2) the slow F-region cross-field plasma diffusion at scales greater than 10 km, and 3) the fact that high-latitude geomagnetic field lines are nearly vertical. By applying this technique to high-latitude AMISRs, we can resolve the spatial scales of irregularities in relation to different solar and geomagnetic parameters. In this presentation, we apply the technique to ISR observations from 2016 to 2018, focusing on spatial-scale variations between 20 and 110 km. We first present case studies showing the impact of solar and magnetospheric events on the ionosphere, such as the 15 February 2018 coronal mass ejection event, which impacted plasma density structuring at spatial scales of 100 km and greater, while scales between 20 and 100 km were barely affected. We then present statistical studies showing, among other things, an increase in the dominant spatial scales at noon and a decrease near midnight. The presentation will expand on this dataset and discuss the future goals of this work.
The plasma in the cusp ionosphere is subject to auroral particle precipitation, which is thought to be an important source of plasma irregularities at large scales and long lifetimes. These irregularities can break down into smaller-scale structures, which have been linked to scintillations, i.e. fluctuations in transionospheric radio waves. We present power spectra of plasma irregularities found in the polar cusp ionosphere for regions with and without electron precipitation. Our analysis is based on in-situ measurements from the Twin Rockets to Investigate Cusp Electrodynamics 2 (TRICE-2) mission, consisting of two sounding rockets flying simultaneously at two different altitudes, and from the Swarm mission. We used both the 16 Hz electron density measurements from the Swarm advanced data set and the electron density measurements from the multi-needle Langmuir probe (m-NLP) systems installed on both sounding rockets, analysed for the whole flight duration of both rockets. Due to the high sampling rate of the m-NLP, the probes allow for a study of plasma density irregularities all the way down to kinetic scales.
A steepening of the slope in the power spectra indicates two regimes: a shallow region, where fluid-like processes dominate, and a steeper region, which can be addressed with kinetic theory. The steepening occurs at frequencies similar to the oxygen gyrofrequency, and at an increased rate where precipitation starts. In addition, strong electron density fluctuations were found in regions poleward of the cusp, i.e. in regions immediately after precipitation and in regions where little or no precipitation was detected.
Integrated power obtained from the power spectra shows very little fluctuation, and no dependence on altitude, at low frequencies. At high frequencies, fluctuations at higher altitudes coincide with the passage through a flow channel, with similar elevations for all frequency intervals, while at lower altitudes fluctuations appear mainly during precipitation, especially for frequency intervals within 100-300 Hz.
Space plasmas display fluctuations and nonlinear behavior over a broad range of scales, being in most cases in a turbulent state. Most of these plasmas are also considered to be heated, with dissipation of turbulence as a possible explanation. Despite many studies and advances in research, many aspects of turbulence, heating and their interaction with various space plasma phenomena (e.g., shocks, reconnection, instabilities, waves) remain to be fully understood, and many questions are still open.
Plasma irregularities and turbulence are believed to be common in the F-region ionosphere and, because of their impact on GNSS and human activity in the polar regions, a detailed understanding is required. This study provides a characterization of the turbulence developed inside the polar-cusp ionosphere, including features such as intermittency that have not been extensively addressed so far.
Electron density data from the ICI-2 and ICI-3 sounding rocket missions have been analyzed using advanced time-series analysis techniques and standard diagnostics for intermittent turbulence. The following quantities have been obtained: the autocorrelation function, which gives useful information about the correlation scale of the field; the energy power spectra, which show average spectral indices of ∼ −1.7, not far from the Kolmogorov value observed at MHD scales, while a steeper power law is suggested below kinetic scales; and the probability distribution functions of the scale-dependent increments, which display a typical deviation from Gaussianity that increases towards small scales due to intense field fluctuations, indicating the presence of intermittency and coherent structures. Finally, the high kurtosis and its scaling exponent reveal efficient intermittency, usually related to the occurrence of structures.
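A minimal sketch of these standard diagnostics is given below, assuming a 1D electron density series ne sampled at fs Hz; the variable names and segment length are illustrative, not those of the mission pipelines.

```python
# Hedged sketch of common intermittency diagnostics for a density series.
import numpy as np
from scipy.signal import welch
from scipy.stats import kurtosis

def turbulence_diagnostics(ne, fs, lags=(1, 2, 4, 8, 16, 32)):
    # Energy power spectrum; a log-log slope near -5/3 (~ -1.7) indicates
    # Kolmogorov-like scaling at fluid (MHD) scales.
    f, pxx = welch(ne, fs=fs, nperseg=min(len(ne), 1024))
    mask = f > 0
    slope = np.polyfit(np.log10(f[mask]), np.log10(pxx[mask]), 1)[0]

    # Scale-dependent increments: kurtosis growing above the Gaussian value
    # of 3 toward small lags signals intermittency / coherent structures.
    k = {lag: kurtosis(ne[lag:] - ne[:-lag], fisher=False) for lag in lags}
    return slope, k
```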
This study strengthens the idea that density fluctuations in the ionospheric cusp agree with a turbulence framework in which intermittent processes transfer energy across different scales.
While ensuring food security worldwide, irrigation is altering the water cycle and generating numerous environmental side effects, such as groundwater depletion and soil salinization. As detailed knowledge about the location, timing and amounts of water used for irrigation over large areas is still lacking, remotely sensed soil moisture has proved a convenient means to fill this gap. However, the spatial resolution and/or revisit time of satellite products represent a major limitation to accurately estimating irrigation.
In this work, we systematically and quantitatively assess the impact of the spatio-temporal resolution of soil moisture observations on the reliability of the retrieved irrigation information, i.e., timing and water amounts. Through a synthetic experiment based on soil moisture time series simulated by a hydrological model, we evaluate first the individual and then the combined impact of varying spatial and temporal resolution on both detection and quantification accuracy. Furthermore, we investigate the effect of instrument noise typical of current satellite sensors (i.e., retrieval error) and of the irrigation rate (i.e., the irrigation system and/or the farmer's decision on how much to irrigate).
Satisfactory results, both in terms of detection (F-score > 0.8) and quantification (Pearson's R > 0.8), are found for soil moisture temporal samplings up to 3 days, or irrigated fractions as low as 30%, i.e., at least one-third of the pixel covering the irrigated field(s). Although lower spatial and temporal resolutions lead to a decrease in detection and quantification accuracy, the presence of random noise in the soil moisture time series has a more significant negative impact. As expected, better performance is found when higher volumes of irrigation water reach the soil. Finally, we show that current high-resolution satellite soil moisture products (e.g., from Sentinel-1) agree significantly better with model simulations forced with irrigation than with rainfed simulations, whereas coarse-scale products achieve higher correlations with soil moisture simulated without irrigation. Hence, our analysis highlights the potential of Sentinel-1-derived soil moisture for field-scale irrigation monitoring.
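The flavour of such a synthetic experiment can be sketched as follows; all inputs are random placeholders and a simple threshold stands in for the actual detection algorithm, so the printed scores are purely illustrative.

```python
# Toy sketch: score detection (F-score) and quantification (Pearson R) of a
# degraded irrigation retrieval against synthetic truth. Placeholder data only.
import numpy as np
from sklearn.metrics import f1_score
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
true_irrig = rng.random(365) < 0.1                                 # irrigation days
true_amount = np.where(true_irrig, rng.uniform(5, 30, 365), 0.0)   # mm per day

# Degrade: additive retrieval noise plus a 3-day observation cycle
retrieved = true_amount + rng.normal(0, 4, 365)
retrieved[np.arange(365) % 3 != 0] = 0          # only every 3rd day observed
detected = retrieved > 5                        # naive detection threshold

print("F-score:", f1_score(true_irrig, detected))
print("Pearson R:", pearsonr(true_amount, retrieved)[0])
```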
Sustainable water use in agriculture, while maintaining or increasing high yield levels, is becoming increasingly important to tackle the challenges imposed by climate change and population growth. With the increasing frequency and severity of drought events, competition for water resources is bound to intensify further. Knowing the water status of crops in the field allows water consumption to be optimized by adapting water management practices to the actual water demand in the field, making the best use of limited water resources.
Crop canopy temperature (Tc) measured in the thermal infrared (TIR) is an excellent indicator of crop water stress due to its close relation to the relative transpiration rate and, correspondingly, transpirational cooling. Satellites equipped with TIR sensors can provide a cost-efficient global solution for irrigation management based on crop water stress monitoring. However, canopy temperature must be recorded with high spatial and temporal frequency, ideally daily, to accurately track crop water supply within a specific field. Current spaceborne TIR data products are available at high spatial or high temporal frequency, but not both. Hydrosat is building a constellation of 16+ satellites to provide high-resolution global TIR data products multiple times per day. Hydrosat's data will be a game changer in agricultural monitoring and management, making detailed sub-field-level irrigation management practical without any groundwork required for sensor installation or maintenance.
For large-scale remote irrigation management, it is crucial that the stress indices used to quantify water stress produce accurate results independent of local weather conditions and without easy access to reliable ground data. Various stress metrics based on canopy temperature, e.g., the canopy-air temperature difference (CATD), the crop water stress index (CWSI) and the temperature vegetation dryness index (TVDI), have been proposed to account for varying environmental conditions and have been shown to successfully quantify crop water stress under suitable experimental conditions. Diurnal variation in plant transpiration and the mixing of vegetation and soil signals are further challenges for satellite data with discrete acquisition times and a ground resolution much coarser than individual plants.
Field trials were carried out near Nelspruit in South Africa, where different crops (including maize, soybeans, potatoes, and dry beans) were studied on predominantly sandy soil in a humid but hot climate. Crop water stress indices obtained from ground TIR radiometers, handheld TIR camera images, and frequent unmanned aerial vehicle (UAV) flights were compared to extensive ground measurements including volumetric soil moisture, soil matric potential, and leaf water potential at pre-dawn and noon.
Different irrigation treatments resulted in significant yield differences, with higher and more homogeneous yield values obtained on well-irrigated plots. The spatial and temporal pattern of crop water stress under reduced irrigation was clearly resolved by thermal stress indices based on canopy temperature, with TVDI exhibiting the highest sensitivity to water stress. However, while TVDI is sensitive to moderate and severe levels of water stress, well-watered crops show no significant difference from crops experiencing mild water stress. Using the Penman-Monteith formalism to calculate reference evapotranspiration (ET0), we estimated crop water demand following the FAO double crop coefficient approach, using TVDI as a scaling factor for crop transpiration and soil evaporation. A soil water balance calculation based on a single-layer water bucket model was able to reproduce the experimental water content curve very well (R2 > 0.9), with the largest discrepancies occurring after heavy rainfall due to an underestimation of runoff. Results from a currently ongoing field trial using these insights for real-time irrigation management will also be presented.
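A minimal sketch of a single-layer bucket balance with TVDI as a crude stress scaling is shown below; the coefficients and scaling choice are illustrative assumptions, not the study's calibration.

```python
# Hedged sketch of a daily single-layer soil water bucket. Inputs are daily
# precipitation (mm), reference ET0 (mm) and a TVDI series in [0, 1];
# kcb/ke/wmax/w0 are placeholder coefficients.
import numpy as np

def bucket_balance(precip, et0, tvdi, kcb=1.0, ke=0.3, wmax=150.0, w0=100.0):
    """Return daily root-zone water content (mm)."""
    w = np.empty(len(precip))
    prev = w0
    for t in range(len(precip)):
        # FAO-style dual coefficient (transpiration + soil evaporation),
        # here crudely down-scaled by (1 - TVDI) as a stress factor.
        eta = (kcb + ke) * et0[t] * (1.0 - tvdi[t])
        # Clip at bucket capacity: excess water is treated as runoff/drainage
        prev = np.clip(prev + precip[t] - eta, 0.0, wmax)
        w[t] = prev
    return w
```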
Consistent with previous studies, TVDI was most sensitive to water stress during sunny hours around noon. Under these conditions, corresponding to high incident net radiation, the dry and hot soil surface layer significantly affects the effective TIR emission of mixed soil/vegetation pixels in spatially resolved TIR data. In principle, TVDI is defined to account for variation in fractional vegetation cover (FVC), but a simple trapezoidal model was insufficient in this experiment to separate soil and vegetation contributions during early growth stages with incomplete canopy closure. Furthermore, we found that the FVC-Tc parameter space, and therefore TVDI, is strongly affected by the ground sampling distance and by spatial misalignment between the spectral bands used to derive FVC and Tc. Consequently, care must be taken when upscaling crop water stress indices validated at field scale to global satellite observations.
Applying the same crop water stress and soil water balance models to Landsat-8 scenes acquired over selected areas in the central United States, we were able to adequately quantify actual evapotranspiration and the soil water balance using only the normalized difference vegetation index (NDVI) and land surface temperature (LST), together with weather data for the soil water balance calculations.
Among the human activities altering the natural circulation of water on the Earth's surface, irrigation is the most impactful. In the near future, water exploitation aimed at improving food production through irrigation is expected to increase further to cope with population growth and rising living standards under a climate change scenario. Nevertheless, detailed knowledge of irrigation dynamics (i.e., extents, timing and amounts) is generally lacking worldwide. This open problem is the main driver of the European Space Agency (ESA) Irrigation+ project, whose main goals are: (i) the development of methods and algorithms to detect, map and quantify irrigation at different spatial scales, (ii) the production of satellite-derived irrigation products, and (iii) the assessment of the impacts of irrigation on society and science.
This study presents a comparison between two different methodologies, developed within the Irrigation+ project, for estimating irrigation water amounts over different test sites: a satellite-based approach relying only on satellite data, and a data assimilation approach, developed within the NASA Land Information System (LIS) framework, which integrates land surface modelling and remote sensing (hereafter the model-based approach). The satellite-based method, namely the Soil-Moisture-based (henceforth SM-based) inversion approach, estimates irrigation rates backwards from satellite soil moisture and evapotranspiration data. The method has been implemented with two Sentinel-1-derived soil moisture products: RT1 Sentinel-1, with a spatial resolution of 1 km, and S2MP, produced at plot scale by merging Sentinel-1 Synthetic Aperture Radar (SAR) data with Sentinel-2 observations. The model-based approach, on the other hand, investigates the possible benefits for irrigation quantification of assimilating 1 km Sentinel-1 backscatter into the Noah-MP land surface model coupled with an irrigation scheme. The assimilation updates soil moisture and vegetation via a calibrated backscatter forward operator represented by a Water Cloud Model, sketched below.
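For readers unfamiliar with the forward operator, a sketch of a standard Water Cloud Model formulation is given below; the parameters A, B, C and D are site-calibrated in practice and purely illustrative here.

```python
# Hedged sketch of a standard Water Cloud Model (WCM) backscatter operator:
# canopy volume scattering plus two-way attenuated soil contribution, with a
# linearised bare-soil term in dB. Parameter values are placeholders.
import numpy as np

def wcm_backscatter(sm, veg, theta_deg, A=0.12, B=0.09, C=-20.0, D=40.0):
    """Return total sigma0 (dB) from soil moisture (m3/m3), a vegetation
    descriptor and the incidence angle (degrees)."""
    theta = np.radians(theta_deg)
    tau2 = np.exp(-2.0 * B * veg / np.cos(theta))       # two-way canopy attenuation
    sigma_veg = A * veg * np.cos(theta) * (1.0 - tau2)  # canopy term (linear units)
    sigma_soil = 10.0 ** ((C + D * sm) / 10.0)          # soil term, dB -> linear
    return 10.0 * np.log10(sigma_veg + tau2 * sigma_soil)
```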
The results shed light on the irrigation information contained in the high-resolution Sentinel-1 soil moisture products. In addition, the comparison of the different methodologies helps in understanding the limits and potential of each, thus highlighting the gaps to be filled in order to obtain reliable irrigation estimates.
Irrigation in olive groves reduces water stress and allows trees to absorb more nutrients, which increases olive productivity. To improve yield through water volume rationing, farmers often apply Regulated Deficit Irrigation (RDI) throughout the growing season. RDI applies a percentage (typically between 70% and 90%) of a crop's evapotranspiration during all or part of the irrigation season. A tool to accurately estimate evapotranspiration in olive groves can drive RDI strategies, optimize irrigation management and improve yield in relation to the applied water volume. Benchmark systems for routinely monitoring evapotranspiration, such as the eddy covariance approach, are not feasible due to their expense and limited spatial extent. Obtaining spatial information from satellites is an ideal way to overcome in-situ sensor limitations, as satellites can provide imagery at both high spatial and high temporal resolution. Optical remote sensing has been widely used to estimate evapotranspiration via vegetation indices. However, optical sensors cannot be used in the presence of clouds, and the spatial resolution of non-commercial optical sensors is generally insufficient for resolving within-field variations, individual trees or even hedgerows in orchards. These limitations can be overcome using high-resolution Synthetic Aperture Radar (SAR) data. Here, we evaluate the potential of SAR data to provide information suitable for irrigation management in olive groves. The analysis focused on an irrigated olive plantation located in Saudi Arabia. For each olive plot, the Sentinel-1 SAR backscatter (C-band) at a given acquisition date was computed as the average of the pixel values located in that plot and compared to simultaneous evapotranspiration measurements. The results demonstrate a correlation between SAR backscatter and evapotranspiration with a coefficient of determination of 0.80. Overall, the study demonstrates that SAR data can track variations in locally measured evapotranspiration, illustrating their potential for irrigation management.
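The plot-level comparison can be sketched as a simple regression; the arrays below are random placeholders, so the resulting coefficient of determination will not reproduce the reported 0.80.

```python
# Toy sketch of the plot-level comparison: mean Sentinel-1 sigma0 per plot
# and date vs. simultaneous evapotranspiration, with an OLS fit.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)
et = rng.uniform(1, 8, 40)                         # measured ET (mm/day), placeholder
sigma0 = -18 + 0.8 * et + rng.normal(0, 0.5, 40)   # plot-mean sigma0 (dB), placeholder

fit = linregress(sigma0, et)
print(f"R^2 = {fit.rvalue**2:.2f}")  # the study reports ~0.80 on real data
```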
Sensitivity Analysis of the C and L bands SAR Data for Detecting Irrigation Events
Hassan Bazzi 1, Nicolas Baghdadi 1, Mehrez Zribi 2 and François Charron 3
1 INRAE, UMR TETIS, University of Montpellier, AgroParisTech, 500 rue François Breton, CEDEX 5, 34093 Montpellier, France; nicolas.baghdadi@teledetection.fr (N.B.)
2 CESBIO (CNRS/UPS/IRD/CNES/INRAE), 18 av. Edouard Belin, bpi 2801, CEDEX 9, 31401 Toulouse, France; mehrez.zribi@ird.fr (M.Z.)
3 G-EAU Unit, University of Montpellier, AgroParisTech, CIRAD, INRAE, Institut Agro, IRD, Domaine du Merle, 13300 Salon de Provence, France; francois.charron@supagro.fr
For better management of water resources and water consumption, detecting irrigation timing at the agricultural plot scale is of great importance. With the availability of Sentinel-1 (S1) SAR data in free and open access mode at a 6-day revisit time, several studies have demonstrated the potential of S1 data for detecting irrigation events (Bazzi et al., 2020a; Le Page et al., 2020). Current irrigation detection models built on S1 SAR data monitor the increase of the SAR backscattering signal between two consecutive SAR acquisitions (an increase in soil moisture). However, the detection of irrigation events based on the soil moisture-SAR correlation using C-band S1 data (wavelength ~6 cm) is sometimes limited by the penetration capability of the C-band over developed vegetation cover. For certain high vegetation covers, especially some cereal crops such as wheat and barley, several studies have demonstrated that the sensitivity of the C-band SAR signal to soil moisture becomes negligible when the vegetation is well developed (El Hajj et al., 2018; Joseph et al., 2010; Nasrallah et al., 2019). To overcome the penetration limitation of the C-band, SAR data at a longer wavelength, such as L-band (wavelength ~24 cm), may be required.
The objective of this study is to compare the sensitivity of S1 C-band and ALOS-2 L-band images for irrigation detection over well-developed vegetation cover. A sensitivity analysis of both C and L bands was performed over 45 reference irrigated grassland plots located in the Crau plain of southeast France during two growth cycles. The first growth cycle is rich in grasses (coarse hay) and resembles wheat crops, while the second is richer in legumes with a lower percentage of coarse hay. The Normalized Difference Vegetation Index (NDVI) is used in the analysis to describe the vegetation cover.
To understand the capability of detecting irrigation events using C-band SAR data, we first studied the response of the S1 backscattering signal (σ_C^0) to irrigation events by examining the temporal evolution of σ_C^0 with respect to rainfall and irrigation events.
Two L-band images, one acquired in each growth cycle, were used to compare the C- and L-band sensitivity for irrigation detection. We analyze the relationship between σ_C^0 and the L-band backscattering coefficient (σ_L^0) as a function of the time difference between the SAR acquisition date and the irrigation date (∆t). The parameter ∆t is considered a proxy for soil dryness-wetness.
The results showed that when the vegetation cover develops in the first growth cycle (coarse hay), the response of σ_C^0 to water supply (either irrigation or rainfall) becomes negligible and irrigation events can hardly be detected. σ_C^0 in the first growth cycle shows no correlation with ∆t, meaning that it is not sensitive to the dryness-wetness of the soil. The behavior of σ_L^0 as a function of the time difference between the image acquisition date and the irrigation date in the first growth cycle indicates that, in the presence of either low or highly developed vegetation cover, σ_L^0 in HH polarization can remain sensitive to the soil water content. Regardless of the NDVI values, wet soil conditions due to irrigation on the same day as the L-band acquisition induce high σ_L^0 (around -11 dB). In contrast, dry soil due to the absence of irrigation in the 5 to 6 days before the ALOS-2 acquisition shows low σ_L^0 values (around -15 dB).
In the second growth cycle of grass (rich in legumes), both C and L bands are sensitive to soil moisture in the presence of either high or low vegetation cover. In L-band, σ_L^0 decreases from -13 dB when irrigation occurs on the same day as the ALOS-2 acquisition to less than -17 dB when irrigation occurred 15 days before the acquisition. Similar behavior, with a weaker trend, was observed for the C-band. The results showed that the L-band is more sensitive than the C-band, and that HH polarization in L-band is more sensitive than HV polarization, for detecting irrigation events.
References
Bazzi, H., Baghdadi, N., Fayad, I., Zribi, M., Belhouchette, H., and Demarez, V. (2020a). Near Real-Time Irrigation Detection at Plot Scale Using Sentinel-1 Data. Remote Sensing 12, 1456.
Bazzi, H., Baghdadi, N., Fayad, I., Charron, F., Zribi, M., and Belhouchette, H. (2020b). Irrigation Events Detection over Intensively Irrigated Grassland Plots Using Sentinel-1 Data. Remote Sensing 12, 4058.
El Hajj, M., Baghdadi, N., Bazzi, H., and Zribi, M. (2018). Penetration Analysis of SAR Signals in the C and L Bands for Wheat, Maize, and Grasslands. Remote Sensing 11, 31.
Joseph, A.T., van der Velde, R., O’Neill, P.E., Lang, R., and Gish, T. (2010). Effects of corn on C- and L-band radar backscatter: A correction method for soil moisture retrieval. Remote Sensing of Environment 114, 2417–2430.
Le Page, M., Jarlan, L., El Hajj, M.M., Zribi, M., Baghdadi, N., and Boone, A. (2020). Potential for the Detection of Irrigation Events on Maize Plots Using Sentinel-1 Soil Moisture Products. Remote Sensing 12, 1621.
Nasrallah, A., Baghdadi, N., El Hajj, M., Darwish, T., Belhouchette, H., Faour, G., Darwich, S., and Mhawej, M. (2019). Sentinel-1 Data for Winter Wheat Phenology Monitoring and Mapping. Remote Sensing 11, 2228.
Satellite sensors have been widely promoted as a technology to optimize water and crop productivity in agriculture. Different remote sensing technologies are able to detect crop stress and water shortages, with a special emphasis on water stress. Improving water productivity and crop productivity go hand in hand. Crop stress resulting from waterlogging also leads to suboptimal crop productivity but has so far received little attention in the literature and, consequently, in technological development. This is surprising, because approximately twenty percent of the global agricultural land suffers from the consequences of waterlogging and secondary soil salinization. While irrigation is expected to increase productivity, excess water can hamper crop growth and decrease water use efficiency.
In this study we focus on an irrigated sugarcane plantation in southern Mozambique burdened by waterlogging. We show how Sentinel-1 backscatter and Planet NDVI can be used to monitor sugarcane development. Our results demonstrate that Sentinel-1 backscatter is influenced by sucrose accumulation and can be used to predict sucrose yield early in the season. In addition, we demonstrate how poor sucrose development is linked to waterlogging and when additional irrigation is counterproductive. To test the usefulness of these findings, the next step will be to integrate the methodology into the decision-making framework of the plantation and to continue validation of the work done so far.
The ESA Arctic Weather Satellite (AWS) Programme was approved at the ESA Council Meeting at Ministerial Level in Seville, Spain, in 2019. ESA initiated mission preparation activities with industry in early 2020 and concluded them in late 2020. In February 2021, ESA kicked off the Phase B/C/D/E contract with OHB Sweden as the prime contractor, responsible for the development of the AWS Space Segment and Ground Segment. The OHB industrial team will also be responsible for the operations of the AWS prototype flight model (PFM) satellite. The launch of the AWS PFM satellite is planned for 2024.
The ESA AWS programme is conceived to develop the prototype and in-orbit demonstration satellite for a future constellation of AWS satellites providing all-weather microwave sounding of the global atmosphere with frequent revisits over the Arctic region. With the increased recognition of the significance of the polar regions with respect to climate change, and the increased economic and research activities occurring in the Arctic, such observations have assumed greater importance. The AWS constellation will be designed to complement the existing microwave sounders on the MetOp-SG and NOAA satellites, improving nowcasting and NWP on a global scale; consequently, the constellation orbits will be selected to maximise this complementarity. The AWS constellation would provide humidity and temperature sounding data products with unprecedented revisit times and data timeliness. It is envisaged that such a future constellation would be implemented in cooperation with EUMETSAT.
The AWS prototype satellite is designed and qualified taking into account the AWS constellation requirements, so that no design changes are foreseen from the PFM satellite to the constellation satellites (e.g. for covering different orbital planes with different equatorial crossing times).
The AWS instrument is a traditional cross-track scanning microwave radiometer providing a total of 19 channels from 50 GHz up to 325 GHz. AWS provides humidity and temperature profiles and will also contribute to precipitation measurements. Due to the lower orbit altitude (around 600 km), the spatial resolution of the humidity sounding measurements will be improved compared to other operational sounders.
AWS provides two services to end users. Stored Mission Data (SMD) consists of one full orbit of data and is downlinked to the ground station in Svalbard once per orbit; the data is disseminated using EUMETSAT's EUMETCast system. The Direct Data Broadcast (DDB) provides AWS instrument data in real time via an L-band downlink; the link is always on, allowing anyone with a suitable ground station to receive the data. A Level-1b processor will be made available to end users, allowing them to process the raw instrument data into the Level-1b product (calibrated and geolocated radiances).
This paper will present the Arctic Weather Satellite Programme, its current status, the Space Segment and Ground Segment designs, and the AWS services, including data products and the overall planning of the AWS constellation preparation.
The European Space Agency (ESA) Arctic Weather Satellite (AWS) programme is currently ongoing, with OHB Sweden as the prime contractor responsible for the development of the AWS Space Segment and Ground Segment. The AWS prototype satellite launch is planned for 2024, hosting a 19-channel cross-track scanning microwave radiometer covering frequencies from the microwave to the sub-mm wave range.
EUMETSAT and ESA share an interest in small satellites equipped with microwave sounders complementing the EUMETSAT Polar System Second Generation (EPS-SG) Microwave Sounding mission supported by the Microwave Sounder (MWS). A successful outcome of the AWS in-orbit demonstration in the 2024-2025 time frame would represent an opportunity for EUMETSAT to expand the product envelope of the EPS-SG mission for its users by implementing a constellation of satellites with microwave sounding capability supporting global and regional numerical weather prediction (NWP) applications. This constellation would be consistent with the vision for WIGOS in 2040 and the EUMETSAT Strategy for 2030. EUMETSAT and ESA have been cooperating on AWS since 2020 with the aim of preparing for a possible future constellation, which would fly recurrent models of the AWS prototype. The Phase 0/A activities for the constellation definition are currently ongoing at EUMETSAT.
This presentation provides an overview of the status of the Phase 0/A activities for a potential future constellation based on AWS. The EUMETSAT AWS constellation would be developed in cooperation with ESA. The mission is expected to operationally provide information on global humidity and temperature profiles by delivering sounding data to users in near real time with unprecedented revisit times.
The presentation will also highlight the main drivers of the system architecture, taking into consideration the draft End User Requirements, which are under preparation. It will describe the current assumptions regarding the constellation architecture, i.e. the number of satellites, the orbits and equatorial crossing times, and the expected coverage, along with the replenishment and deployment strategy. It will provide an outlook on the logic and planning for the approval of the mission, with an overview of the relevant scientific studies aimed at demonstrating the impact of such a constellation.
The European Space Agency's (ESA) Aeolus satellite was launched on 22 August 2018 from Centre Spatial Guyanais in Kourou, French Guyana. Aeolus data have been extensively analysed by a number of meteorological centres and found to have a positive impact on NWP forecasts, particularly in the tropics and polar regions. These positive results, along with the successful in-orbit demonstration of the measurement concept and the associated technologies utilised on Aeolus, resulted in a statement of interest from EUMETSAT in a future operational DWL mission in the 2030 to mid-2040s timeframe, and in a request to ESA to carry out the necessary pre-development activities for such a mission. This paper describes the current status of the instrument pre-development activities being performed in the frame of a potential Aeolus follow-on mission (Aeolus-2) and ESA's plans for such a mission. The main inputs used for a future Doppler Wind Lidar (DWL) instrument are: lessons learned from the Aeolus development phases and from in-orbit operations and performance; initial inputs from EUMETSAT, including a mission lifetime longer than 10 years utilizing 2 spacecraft (implying a lifetime of 5.5 years each) with a launch of the first satellite in 2030, increased robustness and operability of the instrument, and an emphasis on the reduction of recurrent costs; the maximum utilisation of the demonstrated design heritage; and a number of recommendations for the requirements of a future DWL mission from the Aeolus Scientific Advisory Group (ASAG). These inputs have been collated and combined into a set of preliminary requirements which have been used as the basis for a dedicated Instrument Consolidation Study. An extensive review and trade-off of the above inputs by Airbus Defence & Space, ESA and independent experts resulted in the decision to baseline a bi-static instrument design. The various trade-offs that led to this choice are discussed. In addition, three instrument subsystem pre-development activities are currently running: two laser transmitter pre-developments and the pre-development of an improved detector. These developments aim to demonstrate that the issues identified above are resolved and that the technology levels are sufficiently mature for the follow-on Aeolus-2 mission. The status of these pre-developments will be summarised and presented together with ESA's plans for an operational Aeolus-2 mission.
This presentation introduces the EUMETSAT activities on a possible operational Doppler Wind LIDAR (DWL) mission based on the Aeolus-2 instrument & spacecraft currently under development by ESA.
This new DWL capability is intended as an expansion of the EUMETSAT Polar System Second Generation (EPS-SG) programme.
Originally identified as one of 20 candidate observation missions by the 2008-2009 EPS-SG user consultations, it was not selected at the time for the Metop-SG payload complement, mainly due to its low maturity.
Since then, the picture has evolved significantly thanks to the 2018 launch of the ESA Aeolus mission, which demonstrated the maturity of the space-based DWL concept and showed a substantial beneficial impact on global NWP models, as reflected by the operational assimilation of its data by several major European NWP centres since 2019.
On this basis, EUMETSAT and ESA agreed to establish a joint study roadmap for a possible operational DWL mission and, in parallel, to coordinate the assessment of the impact of Aeolus measurements on NWP.
In 2020, while ESA started a series of technology pre-developments and instrument and satellite studies, EUMETSAT initiated system Phase 0/A activities with the objectives of assessing the integration of such an operational mission into its operational framework and beginning the formulation of high-level mission, system and ground segment requirements.
By now, EUMETSAT and ESA have jointly compiled an initial draft End User Requirements Document (EURD) based on the mission observational requirements proposed by the Aeolus SAG (ASAG), directly derived from the Aeolus mission requirements.
The EUMETSAT system and ground segment architectural definition is also progressing, with the objective of maximizing the reuse of EUMETSAT assets to allow for a cost-effective mission.
The assessment of the Aeolus data impact on NWP is now ongoing, supported by a series of scientific studies and workshops.
Finally, EUMETSAT and ESA are closely coordinating the preparation of their respective programme proposals while defining a possible cooperation framework.
The Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) is expected to deliver key information in support of EU policies on the management of natural resources. Once in orbit, the CHIME mission is foreseen to provide systematic VNIR-SWIR hyperspectral images with high radiometric accuracy at high spatial resolution. In the framework of the CHIME preparatory activities, ESA established a collaboration with NASA/JPL, the University of Zurich and the Italian Space Agency (ASI) to acquire hyperspectral data with the Next Generation Airborne Visible Infrared Imaging Spectrometer (AVIRIS-NG) over several representative test sites in Europe. The aim of this collaboration was the acquisition of PRISMA hyperspectral data and in-situ measurements, synchronous with the AVIRIS-NG overpasses, in order to simulate CHIME-like data and test some of the CHIME High Priority Prototype Product retrievals, as well as to support collaboration and synergies with current and planned hyperspectral missions.
At some sites, simultaneous PRISMA and AVIRIS-NG acquisitions were made, coupling remote sensing with in-situ observations, including both spectral reference measurements and bio-physical variables. This study considers match-ups between the L2 products (surface reflectance) of the two sensors for imagery acquired in June 2021 over 4 CAL/VAL sites in Italy - Jolanda di Savoia (44.85N, 11.94E) and Braccagni (42.83N, 11.07E) (cropland), Venezia (45.35N, 12.44E) (coastal sea water) and Lago Trasimeno (43.12N, 12.13E) (freshwater) - together with reference in-situ measurements. The high reliability of the AVIRIS data, coupled with the large surface covered, substantially increased the statistical representativeness of the match-ups.
The comparison provided an evaluation of the quality of the L2D PRISMA product over different surfaces. In croplands, it was possible to highlight differences in discrepancies related to land use: minor discrepancies in the VIS-NIR regions over vegetation and much larger differences over bare soils, with higher discrepancies for wavelengths greater than 2300 nm. Over coastal and fresh waters, the comparison of PRISMA products, both L1 and L2, with AVIRIS-NG is promising, as the spectra correspond well over the whole VNIR range. Overall, the best agreement is found for turbid and shallow waters; in clearer waters some divergences were found, although these are mostly related to the different illumination-viewing geometries of PRISMA and AVIRIS-NG.
Although the focus of this study is on PRISMA, it is worth noting that in some cases (e.g. freshwater, cropland) a few match-ups between AVIRIS-NG and PRISMA also include DESIS, which further contributes to CHIME development.
Monitoring non-photosynthetically active vegetation (NPV) is relevant to different studies, including ecosystem dynamics, climate change, ecology and hydrology, as NPV plays an important role in the cycling of carbon, nutrients and water; it is hence a topic of interest for remote sensing environmental applications.
In croplands, NPV represents key information in the field of sustainable agriculture, given that crop residue (CR) management affects the agri-ecological functions of soil. A proposed conservation agro-practice is to leave CR in the field and perform minimum tillage.
For monitoring CR presence and management from farm to regional scales, two main types of information are required: i) recognition of the spatial distribution of different land surface conditions (soil, vegetation, and NPV from both CR and dead standing vegetation) at parcel level, and ii) characterisation of the NPV classes in terms of the abundance of carbon-based constituents (CBC) per surface unit.
Preliminary studies with PRISMA showed that the lignin-cellulose absorption band centered at 2100 nm is apparent in such data and is reliable for the detection of NPV; it is also promising for the characterisation of CR abundance by spectral modelling (Pepe et al. 2020).
Given that the requirements for assessing crop residue cover and soil tillage activity (when) and intensity (which type) are i) an accurate land use map (Daughtry et al. 2005) and ii) knowledge of changes in surface conditions related to the timing of tillage or planting (Zheng et al. 2012), a classification paradigm is proposed to map PRISMA data into five surface status categories: bare soil, crop residue, vegetation at emergence (including plants regrowing on crop residues), crop in vegetative stage (green vegetation) and senescence phase (dead or dying standing vegetation).
To this end, the method previously proposed by Pepe et al. (2020) is improved by extending the analysis to spectral intervals beyond that of lignin-cellulose, including those of leaf chlorophyll pigments (centered around 690 nm) and water content (centered around 1200 nm). These absorption bands, which are diagnostic features for assessing the presence of the different surface categories, are modelled with the Exponential Gaussian Optimization method (Pompilio et al. 2009, 2010). Parameters extracted from PRISMA spectra, using a supervised training set, are used to infer classification rules with a decision tree approach. The training set comes from a reference image for which information on ground conditions from an intensive field campaign is available; the study area corresponds to a large farm (3800 ha) in Jolanda di Savoia, Northern Italy.
The classification paradigm (spectral modelling and decision tree) is run at pixel level; afterwards, the results are post-processed to obtain a final map at parcel level (the extent of interest).
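To make the paradigm concrete, the following minimal sketch (in Python) illustrates the two steps under stated assumptions: a Gaussian absorption-feature fit standing in for the Exponential Gaussian Optimization step, and a decision tree trained on the extracted band parameters. Band windows, feature choices and names are illustrative, not the authors' exact implementation.

    import numpy as np
    from scipy.optimize import curve_fit
    from sklearn.tree import DecisionTreeClassifier

    def gaussian_absorption(wl, depth, center, width, continuum):
        # Continuum level minus a Gaussian-shaped absorption feature
        return continuum - depth * np.exp(-0.5 * ((wl - center) / width) ** 2)

    def band_parameters(wl, refl, center_guess, window=80.0):
        # Fit the absorption feature near center_guess; return its depth,
        # center and width as classification features
        sel = np.abs(wl - center_guess) < window
        p0 = [0.1, center_guess, window / 4.0, float(refl[sel].max())]
        popt, _ = curve_fit(gaussian_absorption, wl[sel], refl[sel],
                            p0=p0, maxfev=5000)
        return popt[:3]

    def pixel_features(wl, refl):
        # Stack parameters from the chlorophyll (~690 nm), water (~1200 nm)
        # and lignin-cellulose (~2100 nm) diagnostic features
        return np.concatenate([band_parameters(wl, refl, c)
                               for c in (690.0, 1200.0, 2100.0)])

    # X: features from labelled training pixels; y: one of the five surface
    # status categories. The tree depth is an arbitrary choice here.
    # clf = DecisionTreeClassifier(max_depth=6).fit(X, y)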
The mapping approach is applied to a set of images acquired during two crop seasons (2019-2020 and 2020-2021) over the study area, as the site belongs to the network of the PRISMA mission cal/val project (PRISCAV). A total of 12 images (6 per crop season) were available for the experiment.
The performance of the approach is quantitatively assessed with traditional statistics for the image for which ground reference data exist, and qualitatively evaluated in terms of crop condition trajectories derived from time series analysis, compared against crop map information and management knowledge of the estate.
Results showed the method to be viable and reliable for identifying land management practices, recognising the existence and periods of crop residue presence, and confirmed that PRISMA hyperspectral data are promising for monitoring and verifying the implementation of conservation agriculture. Moreover, even though the mission is not intended to be operational, the revisit time and tasking characteristics of PRISMA, while not optimal, proved sufficient to provide a number of cloud-free images during the planting season useful for monitoring CR. These results are also important in the perspective of new and forthcoming operational hyperspectral missions such as DLR EnMAP and ESA CHIME.
Future studies will be devoted to evaluating the reliability and consistency of these spectroscopic approaches for the characterisation of CR abundance.
Imaging spectroscopy (IS) is a powerful tool for monitoring Earth surface properties. IS can provide important information on the state and dynamics of the global cryosphere. In particular, these data allow the retrieval of several physical properties of the surface such as albedo, snow/ice grain size, liquid water content, and the concentration of light-absorbing particles (e.g., mineral dust, black carbon, cryospheric algae). The recent launch of the PRISMA mission (March 2019) opened interesting perspectives for the quantitative estimation of snow and ice properties from satellite IS data. In this contribution, we present results from research activities aimed at evaluating the quality of PRISMA products (L1 as Top-Of-Atmosphere radiance, and L2D as surface reflectance) for studying the cryosphere. Furthermore, we present some preliminary results for the retrieval of snow and ice parameters in polar areas.
The calibration and validation activities were carried out in predefined periods representative of the surface conditions during the season, i.e. fresh and aged snow. The study was performed in the European Alps using two test sites located at different altitudes: Torgnon (2160 m) and Plateau Rosa (3500 m). Field reflectance measurements were collected at both sites using a Spectral Evolution spectrometer, which operates in the 300-2500 nm wavelength domain. Atmospheric Aerosol Optical Thickness (AOT) was measured as well. At the Torgnon site, an automatic system for continuous monitoring of spectral reflectance in the visible to near-infrared (VNIR) provided additional field data to compare with PRISMA observations. Field spectra were propagated to Top-Of-Atmosphere with MODTRAN and then compared with L1 PRISMA products. L2 PRISMA products were validated both by direct comparison with field data and by intercomparison with a different retrieval method based on Optimal Estimation and Sentinel-2 data. The agreement between the in situ measurements and satellite data is generally good; for both L1 and L2 products the mean absolute difference is around 5%. An underestimation of radiance and reflectance at wavelengths below 500 nm was observed for both fresh and aged snow.
A further preliminary analysis was conducted on the retrieval of snow and ice parameters on polar glaciers, where we tested two different algorithms over rather flat areas. In particular, we analysed a PRISMA scene acquired in August 2020 over the “k-transect” (South-West Greenland), and another scene acquired in December 2020 over the Nansen Ice Sheet (East Antarctica). In both cases, we obtained reliable estimates of snow and ice parameters such as albedo, grain effective radius, liquid water content, and the concentration of impurities and algae. Although these preliminary results are encouraging, further analyses are needed to validate the retrievals with field data.
Wildfires are natural phenomena which both influence and are influenced by climate change. In recent decades, Earth Observation (EO) satellites have been used to analyze many fire characteristics, including fire temperature, fire radiative power (FRP), smoke composition and vegetation mortality.
Active detection and monitoring of risk areas is becoming increasingly important to counteract severe and destructive landscape fires. Satellite-based remote sensing (RS) represents a cost-effective way to detect, map, and investigate wildfires (Barmpoutis et al., 2020). EO satellites operating in the middle infrared (MIR) and thermal infrared (TIR) spectral bands, such as AVHRR, MODIS, Sentinel-3 and Landsat among many others, are used to generate operational products (i.e., active fire detection, FRP).
Fire detection products based on hyperspectral (HS) EO satellites are challenged by their general lack of MIR and TIR spectral coverage, limited revisit time, and limited tasking availability. However, HS imagery provides unique attributes in support of fire detection (Veraverbeke et al., 2018). Indeed, previous results based on EO-1 Hyperion have shown the potential of HS data for RS applications (Waigl et al., 2019; Amici and Piscini, 2021). With the new PRISMA mission, and many similar ones to come, hyperspectral instruments can complement long-wave information useful for characterizing the whole continuum of landscape fire (pre-fire, active fire and post-fire). This work investigates how PRISMA HS images can be used to support fire detection and related crisis management. Here we present how different detection techniques (index-based and AI-based) have been tested on landscape fires observed by PRISMA.
First, we start with a descriptive analysis of the collected PRISMA images containing wildfires. This phase also leads to the identification of a Hyperspectral Fire Detection Index for PRISMA (HFDI) and the definition of classification classes, e.g., hot spots, smoke plumes, burnt areas, and healthy vegetation (Griffin et al., 2000). Second, we describe how deep learning classification models can be designed to perform semantic segmentation of the input HS data, where an output image with metadata is associated with each pixel of the input image. Finally, an estimation of the temperature is carried out using a linear mixture model, evaluating the temperature of the emitting sources (i.e., the landscape fires) with a least-squares approach. A critical comparison of the retrieved temperatures against the ECOSTRESS and Landsat sensors is carried out.
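As an illustration of the least-squares mixture step, the sketch below (Python) fits a pixel radiance spectrum as a non-negative combination of a background endmember and blackbody endmembers at candidate temperatures; the temperature grid, function names and the radiance-weighted averaging are assumptions for illustration, not the exact implementation used in this work.

    import numpy as np
    from scipy.optimize import nnls

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

    def planck_radiance(wavelength_m, temp_k):
        # Blackbody spectral radiance [W m^-2 sr^-1 m^-1]
        return (2 * H * C**2 / wavelength_m**5 /
                (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0))

    def retrieve_fire_temperature(pixel_radiance, wavelengths_m,
                                  background_radiance,
                                  candidate_temps=np.arange(500.0, 1500.0, 50.0)):
        # Solve pixel = a0 * background + sum_i a_i * B(T_i), a >= 0, then
        # report the radiance-weighted mean of the fire temperatures
        endmembers = [background_radiance] + [planck_radiance(wavelengths_m, t)
                                              for t in candidate_temps]
        fractions, _ = nnls(np.column_stack(endmembers), pixel_radiance)
        fire_fracs = fractions[1:]
        if fire_fracs.sum() == 0:
            return None  # no detectable fire component in this pixel
        return float(np.average(candidate_temps, weights=fire_fracs))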
REFERENCES
Amici S. and Piscini A., 2021. Exploring PRISMA Scene for Fire Detection: Case Study of 2019 Bushfires in Ben Halls Gap National Park, NSW, Australia, MDPI Remote Sensing, 2021, 13(8), 1410; https://doi.org/10.3390/rs13081410.
Barmpoutis, P., Papaioannou, P., Dimitropoulos, K., Grammalidis, N., 2020. A review on early forest fire detection systems using optical remote sensing. Sensors (Switzerland).
Dennison, P. E., Charoensiri, K., Roberts, D. A., Peterson, S. H., Green, R. O., 2006. Wildfire temperature and land cover modeling using hyperspectral data. Remote Sensing of Environment.
Griffin, M. K., Hsu, S. M., Burke, H. K., Snow, J. W., 2000. Characterization and delineation of plumes, clouds, and fires in hyperspectral images. International Geoscience and Remote Sensing Symposium (IGARSS).
Guarini, R., Loizzo, R., Facchinetti, C., Longo, F., Ponticelli, B., Faraci, M., Dami, M., Cosi, M., Amoruso, L., De Pasquale, V., Taggio, N., Santoro, F., Colandrea, P., Miotti, E., Di Nicolantonio, W., 2018. PRISMA hyperspectral mission products. International Geoscience and Remote Sensing Symposium (IGARSS).
Loizzo, R., Daraio, M., Guarini, R., Longo, F., Lorusso, R., Dini, L., Lopinto, E., 2019. Prisma Mission Status and Perspective. International Geoscience and Remote Sensing Symposium (IGARSS).
Piscini, A., Amici, S., 2015. Fire detection from hyperspectral data using neural network approach. Remote Sensing for Agriculture, Ecosystems, and Hydrology XVII.
Spiller, D., Ansalone, L., Amici, S., Piscini, A., Mathieu, P. P., 2021. “Analysis and Detection of Wildfires by Using Prisma Hyperspectral Imagery”, ISPRS 2021.
Veraverbeke, S., Dennison, P., Gitas, I., Hulley, G., Kalashnikova, O., Katagis, T., Kuai, L., Meng, R., Roberts, D., Stavros, N., 2018. Hyperspectral remote sensing of fire: State-of-the-art and future perspectives.
Waigl, C. F., Prakash, A., Stuefer, M., Verbyla, D., Dennison, P., 2019. Fire detection and temperature retrieval using EO-1 Hyperion data over selected Alaskan boreal forest fires. International Journal of Applied Earth Observation and Geoinformation.
Waste management is nowadays considered an important indicator of sustainable development, closely intertwined with many interdependent and cross-border issues.
In Italy, a significant part of waste is still disposed of in landfills as an undifferentiated component. The monitoring of landfills in support of their management and planning activities is therefore of great importance. The European Directive 1999/31/EC and the Italian law 36/2003 require long-term monitoring of various parameters (air, soil, water) from the opening of the landfill to the post-closure control period. In this perspective, previous projects and studies [1], [2], [3], [4], [5], [6], [7] and [8] have amply demonstrated that remote sensing can be useful in monitoring the impact, if present, of landfills on the surrounding environment, by collecting remotely sensed information useful for the identification and classification of potentially contaminated areas using non-invasive methods.
For example, biogas leaks or leachate can produce effects at different spatial and temporal scales that require careful analysis to correctly interpret local environmental dynamics.
Many researchers have explored the possibilities offered by remote sensing in environmental analysis, in particular for:
- the classification and estimation of the quantity of waste stored in landfills;
- the identification of appropriate sites (geology and hydrology studies, waste transport, urban displacement planning);
- the in situ management of the landfill (support for operations);
- the monitoring of the evolution of the landfill over time (compliance with procedures and regulations, prevention of pollution risk) [8];
- the identification of unauthorized landfills;
- the identification of biogas emissions [5];
- the estimation of leachate not captured [6];
- the estimation of the generation and deposition of dust [7].
The main objective of the CLEAR-UP project (funded by ASI) is the use of PRISMA hyperspectral images for the study, development and implementation of indicators of the presence of pollutants in the soil and in the air close to areas affected by landfills. The availability of PRISMA hyperspectral images, within the limits of their spatial resolution, makes it possible in principle to achieve this goal with unprecedented accuracy. The study addresses the possibility of:
- Detecting the presence of heavy metals in the soils in the area next to landfills;
- Identifying potentially harmful emissions (CH4, CO2, NOX) caused by spontaneous combustion and/or due to malicious behavior against the material in the landfill;
- Identifying the presence of stress conditions affecting the vegetation close to the area of the landfill;
- Determining the extent of the area possibly affected by the presence of the landfill.
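For the gas-emission objective, one standard technique from the hyperspectral literature is a statistical matched filter applied to the SWIR bands, where a known target signature (e.g., the spectral perturbation induced by CH4) is sought against the background statistics. The sketch below is an illustration of that generic technique under stated assumptions, not the CLEAR-UP processor itself; the array names and signature are placeholders.

    import numpy as np

    def matched_filter(cube, target_signature):
        # cube: (rows, cols, bands) radiance array;
        # target_signature: (bands,) expected spectral perturbation of the gas
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        mu = X.mean(axis=0)                               # background mean
        cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))  # background covariance
        d = target_signature
        # Normalised matched-filter score per pixel; high scores flag pixels
        # whose deviation from the background matches the target signature
        scores = (X - mu) @ cov_inv @ d / np.sqrt(d @ cov_inv @ d)
        return scores.reshape(rows, cols)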
References
1) Nas, B., Cay, T., İşcan, F., and Berktay, A., Selection of MSW landfill site for Konya, Turkey using GIS and multi-criteria evaluation. Environmental Monitoring and Assessment, 160(1-4), 491-500, 2010.
2) Ottavianelli, G., Synthetic Aperture Radar remote sensing for landfill monitoring. Ph.D. Thesis, Cranfield University, United Kingdom, pp. 298, 2007.
3) Schrapp, K. and Al-Mutairi, N., Associated health effects among residences near Jeleeb Al-Shuyoukh landfill. American Journal of Environmental Sciences, 6(2), pp. 184–190, 2010.
4) Shaker, A., Faisal, K., El-Ashmawy, N., and Yan, W.Y., Effectiveness of using remote sensing techniques in monitoring landfill sites using multi-temporal Landsat satellite data. Al-Azhar University Engineering Journal, 5(1), pp. 542-551, 2010.
5) Manzo, C., Studio di nuove tecnologie applicate alla individuazione e caratterizzazione delle emissioni di biogas da discarica (Study of new technologies applied to the identification and characterization of landfill biogas emissions). Ph.D. Thesis, Scuola Superiore Santa Chiara, University of Siena, 2012, http://dx.doi.org/10.13140/2.1.2433.5686
6) Slonecker, T., Fisher, G.B., Aiello, D.P., Haack, B., Visible and infrared remote imaging of hazardous waste: a review. Remote Sens. 2 (11), 2474–2508, 2010.
7) Stefanov, W. L., Ramsey, M.S., Christensen, P.R., Identification of fugitive dust generation, transport, and deposition areas using remote sensing. Environ. Eng. Geosci. 9(2), 151–165, 2003.
8) Cadau, E.G., Putignano, C., Laneve, G., Aurigemma, R., Pisacane, V., Muto, S., Tesseri, A., Battazza, F., Optical and SAR data synergistic use for landfill detection and monitoring. The SIMDEO project: methods, products and results. IGARSS, 13-18 July 2014, Quebec City.
Hyperspectral data, providing reflectance from visible to shortwave infrared wavelengths, can greatly contribute to the retrieval of biophysical and biochemical vegetation traits, which are of high relevance for agricultural and ecological applications. In the framework of the Italian Space Agency project “Sviluppo di Prodotti Iperspettrali Prototipali Evoluti” (Contract ASI N. 2021-7-I.0), a prototype processor has been developed, exploiting PRISMA (PRecursore IperSpettrale della Missione Applicativa) imagery, for quantifying parameters useful for vegetation characterization such as Leaf Area Index (LAI), Fraction of Absorbed Photosynthetically Active Radiation (FAPAR), Fractional Vegetation Cover (FCOVER), and Chlorophyll-a and Chlorophyll-b (Cab) content.
In the scientific literature, retrieval methods for vegetation traits are categorized into four groups: parametric regression, non-parametric regression, physically-based methods (including inversion of Radiative Transfer Models, RTMs, using numerical optimization and Look-Up Table, LUT, approaches), and hybrid methods. We have developed a hybrid method that inverts physical models through machine learning (ML) regression algorithms. In our method, physical models based on PROSAIL, relating the vegetation physical parameters to bottom-of-atmosphere reflectance, are used to generate simulated plant canopy spectral reflectances (from 400 to 2500 nm at 1 nm spectral resolution). The simulated data, resampled to the PRISMA band configuration, are used to train the ML regression model.
Contamination with noise has been applied to the simulated data in order to improve the generalization capability of the models. In addition, a subspace of the feature space has been selected by means of dimensionality reduction techniques such as Principal Component Analysis (PCA), in order to avoid correlated information that may result in suboptimal performance. Different machine learning algorithms, such as Random Forest, Support Vector Machine, Gaussian Process and Artificial Neural Network, have been evaluated and tested for the regression task.
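A minimal sketch of this hybrid workflow is given below, assuming a placeholder in lieu of the actual PROSAIL runs (a real implementation would call an RTM); the noise level, PCA dimensionality and regressor settings are illustrative assumptions.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.pipeline import make_pipeline

    def simulate_canopy_reflectance(lai, n_bands=234):
        # Placeholder for a PROSAIL simulation resampled to the PRISMA
        # band set; only the interface matters for this sketch
        rng = np.random.default_rng(int(lai * 1000))
        return np.clip(0.05 + 0.04 * lai * rng.random(n_bands), 0.0, 1.0)

    # Simulated training set spanning the LAI range of interest
    lai_train = np.random.uniform(0.0, 7.0, size=2000)
    X_train = np.stack([simulate_canopy_reflectance(l) for l in lai_train])
    X_train += np.random.normal(0.0, 0.01, X_train.shape)  # noise contamination

    # Dimensionality reduction (PCA) followed by an ML regressor
    model = make_pipeline(PCA(n_components=20),
                          RandomForestRegressor(n_estimators=200))
    model.fit(X_train, lai_train)
    # lai_estimate = model.predict(prisma_boa_reflectance)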
The proposed approach retrieves vegetation indicators with a lower computational time than other methodologies presented in the literature. In addition, it generalizes well, thanks to the high representativeness of the training dataset, which has been generated taking into account different combinations of vegetation parameters and illumination/acquisition geometry configurations.
In order to demonstrate the capabilities of the prototype processor and to measure its performance, the trained models have been validated on several PRISMA acquisitions over the Maccarese (Italy) study site against ground data collected in situ, showing very promising results.
Introduction: In the tropics and subtropics, mangroves constitute a critical ecosystem that is under significant pressure despite providing a host of local to global ecosystem services, the annual value of which has been estimated at US$25 trillion (zu Ermgassen et al., 2020). These include coastal protection, provision of harvestable wood, tourism, fisheries and carbon sequestration. Mangroves form a highly dynamic biome with extensive changes occurring over short temporal baselines due to anthropogenic activities (e.g., clearing for expansion of agriculture/aquaculture and urban areas) and natural events (e.g., storms) and processes (e.g., erosion and deposition), including those exacerbated by climate change (e.g., sea level fluctuation).
The mapping and monitoring of these ecosystems have been recognized as being of critical importance for their protection and sustainable management. To this end, the Global Mangrove Watch (GMW) was established in 2011 as part of the Japan Aerospace Exploration Agency (JAXA) Kyoto & Carbon (K&C) Initiative. The first global mangrove maps (version 2.0) were made available in 2018, with a baseline map generated for 2010 (Bunting et al., 2018a) based primarily on optical (Landsat) satellite data but informed by Japanese L-band Synthetic Aperture Radar (SAR). Maps were subsequently produced for 1996, 2007, 2008, 2009, 2015 and 2016 by updating the 2010 baseline with changes detected using JERS-1 SAR (1996), ALOS PALSAR (2007–2010) and ALOS-2 PALSAR-2 (2015–2016). These GMW maps constituted the most complete maps of global mangrove extent and change available.
The version 2.0 layers nonetheless had areas missing or mapped with poor quality, most often due to a lack of cloud-free optical satellite data for the 2010 baseline. Therefore, an effort was initiated to update the version 2.0 GMW maps by improving the spatial extent and quality of the mapping and by extending the annual time series to 2020.
Methods: As an update, the baseline year was maintained as 2010. However, areas where the v2.0 product was of lower quality were remapped using ESA Sentinel-2 optical data. In total, 11,262 Sentinel-2 acquisitions were downloaded and used for this analysis. Classification of mangrove extent was undertaken on a scene-by-scene basis rather than through the creation of image composites (i.e., merging multiple scenes using a metric such as greenest pixel). To derive training data for the classification, the existing GMW v2.0 map was sampled and regions outside of the GMW v2.0 map were manually annotated. The XGBoost binary classification algorithm was used for the analysis given its ability to make use of large training datasets (John et al., 2020). The testing accuracies of the models (using 50,000 samples per class) were estimated at 97–99%. The individual classifications were merged in two steps to create a per-pixel probability of mangrove presence. To derive the final binary mask of mangrove extent, the probability surface was globally thresholded at a value of 0.5. The 2010 ALOS PALSAR imagery was finally used to perform a change detection to create the 2010 map.
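The scene-wise classification and thresholding described above can be sketched as follows (Python); the feature extraction, file names and model hyperparameters are stand-in assumptions, not the GMW production code.

    import numpy as np
    import xgboost as xgb

    # X: (n_pixels, n_features) Sentinel-2 band values sampled from the GMW
    # v2.0 map (label 1 = mangrove) and from annotated non-mangrove regions
    X = np.load("training_features.npy")   # assumed pre-extracted features
    y = np.load("training_labels.npy")

    clf = xgb.XGBClassifier(n_estimators=300, max_depth=8,
                            objective="binary:logistic")
    clf.fit(X, y)

    # Per-scene probabilities are merged across overlapping scenes; the
    # final binary mask applies the global 0.5 threshold
    scene = np.load("scene_features.npy")
    prob = clf.predict_proba(scene)[:, 1]
    mangrove_mask = prob > 0.5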
The accuracy assessment was undertaken using 60 globally distributed sites. For 18 of those sites, the accuracy was found to be 94.6% using 15,100 points, with individual sites ranging from 87.4% to 99.8%. The F1-score for the mangrove class was 94.8%, with a 95% confidence interval ranging from 93.4% to 96.0%.
Using the updated 2010 baseline, change was calculated from the JAXA L-band SAR data time series, employing the map-to-image change approach outlined in Thomas et al. (2018) and applied in the GMW v2.0 analysis of Bunting et al. (2018b). However, rather than only applying the change detection from 2010 to each year (1996, 2007, 2008, 2009, 2015, 2016, 2017, 2018, 2019 and 2020), it was subsequently applied from each year to every other year, resulting in ten change maps for each year. The ten maps were summarised, and only pixels identified as mangroves more than 5 times were classified as mangroves in the new (v3.0) GMW maps.
Results: The global mangrove extent for 2010 in the v2.0 product was estimated at 136,839 km2, increasing to 146,400 km2 in the v3.0 product. The additional ~10,000 km2 of mangroves mapped in the v3.0 product is consistent across all years (1996 to 2016) and largely results from the identification of areas that were not mapped in v2.0. The global trend from 1996 to 2016 was similar to that of v2.0, which identified an estimated net loss of 6,078 km2 of mangroves over that period, while v3.0 mapped an estimated net loss of 7,350 km2 for the same period. After 2016, the long-term global pattern of mangrove forest loss appears to have changed, with a small gain of about 400 km2 of mangroves identified from 2016 to 2020. However, given the ±1 pixel misregistration between the PALSAR and PALSAR-2 datasets, we estimate the global confidence interval at approximately ±5,000 km2, and so this change of trend has yet to be confirmed.
The results and related statistics are visualised and available for dissemination to stakeholders in the science, government, corporate, NGO and practitioner communities via the Global Mangrove Watch Platform at https://globalmangrovewatch.org.
Future Work: Future work will focus on the generation of a global 10 m mangrove baseline for the year 2020 using Sentinel-2. The existing GMW maps are provided at a resolution of 25 m; increasing the spatial resolution to 10 m will enable the mapping of finer features (e.g., river edges, fragmented mangroves) and a further improvement in quality over the existing GMW version 3.0 products. The work is ongoing and is expected to be completed in 2022.
Wetlands are amongst the planet's most productive ecosystems, providing a wealth of ecosystem services, e.g., nutrition, flood control, protection or biodiversity support. However, multiple threats, such as climate change, agricultural activities and hydrological modifications, endanger these essential ecosystems. Consistent mapping and monitoring of global wetland ecosystems is therefore critical to track changes and trends and to support wetland conservation and sustainable management. Although EO data are ideal for large-scale wetland inventorying, the tremendous diversity of wetlands makes remote detection particularly challenging. This diversity and the resulting challenge have been tackled by many researchers applying different sensors (optical and radar) and mapping techniques to delineate wetland from non-wetland areas. We have developed an innovative methodology to derive information about wetlands and have demonstrated the approach's capability over the past years through several projects. Our presentation will primarily focus on the results from the ESA projects “GlobWetland Africa” and “GlobWetland Africa - Extension on Wetland Inventory” and include details about the Copernicus High-Resolution Layer production for the Water and Wetness products in Europe. We will further present three national wetland inventories, for Namibia, Tunisia, and Uganda.
Combining optical and radar observations in a hybrid sensor approach provides a more robust wetland delineation than single-sensor approaches: optical imagery is more sensitive to the vegetation cover, and radar imagery to the soil moisture content. Additionally, the higher frequency of observations stemming from the combined data streams contributes to a better characterization of seasonal dynamics, which is essential so that seasonal and temporary changes do not lead to false conclusions about the overall long-term trend in wetland extent. Within the domain of optical remote sensing, wetland identification focuses on enhancing the spectral signature using bio-physical indices sensitive to water and wet soils (wetness) and the subsequent derivation of a water and wetness probability index (Ludwig et al., 2019). Similarly, the radar-based algorithm builds on the detection of open water surfaces and surface soil moisture. After the separate processing of the optical and radar imagery, the data are fused into a combined water and wetness product (the Water and Wetness Presence Index, WWPI), exploiting the advantages of both sensor systems. The fused products give the probability that a particular location contains water or wet soils, to which a rule-based classification can be applied to finally derive the wetland extent. Figure 1 shows an example of the output products: (a) the HRL WAW classification scheme applied to Uganda, and (b) the resulting WWPI highlighting the presence of water and wet soils.
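As an illustration, a frequency-based index of this kind can be computed per pixel from counts of water and wetness detections over the time series; the weighting used below is an assumption for the sketch, not the exact HRL WAW definition.

    import numpy as np

    def wwpi(water_obs, wet_obs, valid_obs, wet_weight=0.75):
        # water_obs/wet_obs/valid_obs: per-pixel counts over the fused
        # optical+radar classification time series; returns an index in [0, 1]
        with np.errstate(divide="ignore", invalid="ignore"):
            index = (water_obs + wet_weight * wet_obs) / valid_obs
        return np.nan_to_num(index)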
The methodology focuses on the physical properties of water and wet soils and does not detect wetlands in the ecological sense. Indeed, one general issue when mapping wetland areas is that no single scientific definition exists. Applied on a large scale, the methodology provides a consistent and objective map of water and wet surfaces, building the foundation for a final wetland classification according to specific user definitions and needs. Incorporating additional information (e.g., information about land use) leads to the final wetland map.
Reference:
Ludwig, C., Walli, A., Schleicher, C., Weichselbaum, J., & Riffler, M. (2019). A highly automated algorithm for wetland detection using multi-temporal optical satellite data. Remote Sensing of Environment, 224, 333-351.
Salt marshes provide extensive ecosystem services, including nursery habitat for fish species, recreation, coastal resilience, and carbon sequestration. The United States has the largest extent of mapped salt marshes; it is therefore critical to understand the ecosystem's carbon stock in the Contiguous United States (CONUS). While salt marshes and other blue carbon systems store most of their carbon within the soil, aboveground biomass is an important carbon indicator. Existing aboveground biomass models for tidal marshes have medium spatial resolution and limited geographic extent. To improve the spatial resolution to 10 m, we evaluate the use of Sentinel-1 and Sentinel-2 data for incorporation into the aboveground biomass prediction. To reconcile these satellite observations with temporally disparate in situ samples, we evaluated the stability of the sample locations using the Landsat time series, finding that 71% of training data were stable from field sampling to remote sensing observation. Next, we conducted a data fusion machine learning regression combining Sentinel-1, Sentinel-2, and Landsat data to predict aboveground biomass in salt marshes. We compared model performance against in situ testing data across machine learning algorithms (Support Vector Machines, Random Forest, and XGBoost), spatial scales (10 m, 30 m), and training data stability. The best performing model was the 10 m XGBoost using the stable data, which achieved a Root Mean Square Error (RMSE) of 301.0 and 107.33 at the plot and site scale, respectively. We created an updated 2020 salt marsh extent map with Sentinel-1/2, SRTM, and the National Elevation Dataset and estimated 3.6 (3.1-4.1) Tg of aboveground carbon across the CONUS. We will also explore the drivers of salt marsh biomass at the HUC 6 watershed scale, using climate variables (precipitation and temperature), average regional sea-level rise, tidal amplitude, coastal chlorophyll, diffuse attenuation coefficient, and land use. Our results demonstrate the need to monitor these systems to enable management, restoration, and an understanding of resilience to climate change.
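The model comparison described above can be sketched as below (Python); the feature files, the plot-scale grouping and the hyperparameters are stand-in assumptions, not the authors' exact setup.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error
    from sklearn.svm import SVR
    from xgboost import XGBRegressor

    # Assumed pre-extracted Sentinel-1/2 + Landsat features and in situ
    # aboveground-biomass labels for the stable training/testing samples
    X_train, y_train = np.load("train_feats.npy"), np.load("train_agb.npy")
    X_test, y_test = np.load("test_feats.npy"), np.load("test_agb.npy")

    models = {
        "SVM": SVR(),
        "RandomForest": RandomForestRegressor(n_estimators=300),
        "XGBoost": XGBRegressor(n_estimators=300, max_depth=6),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
        print(f"{name}: plot-scale RMSE = {rmse:.1f}")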
C-band backscatter is mostly sensitive to changes in the structure and water content of the canopy in dense tropical rain forests. With the emerging Sentinel-1 time series data we can distinguish the backscatter on different time scales and can therefore detect phenomena which are not visible in single time steps.

We use empirical mode decomposition (EMD), a data-driven alternative to the Fourier transform, to decompose Sentinel-1 time series into multiple sub-signals of different, characteristic timescales (sub-seasonal, seasonal, and slow oscillations). We compare the original signal and the seasonal sub-signal of Sentinel-1 with the water level of the Juruá River, a major tributary of the Amazon River in western Brazil. We estimated the correlation between river water levels and Sentinel-1 time series as an indicator of flood-related seasonality in the forest area.
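A minimal sketch of the decomposition step follows, assuming the PyEMD package and an illustrative 90-400 day band for the seasonal sub-signal (the grouping of intrinsic mode functions by mean period is our own simplification):

    import numpy as np
    from PyEMD import EMD

    def seasonal_subsignal(backscatter, t_days, min_period=90.0, max_period=400.0):
        # Decompose the time series with EMD and sum the intrinsic mode
        # functions whose mean period falls in the assumed seasonal band
        imfs = EMD().emd(backscatter, t_days)
        seasonal = np.zeros_like(backscatter)
        for imf in imfs:
            crossings = np.sum(np.diff(np.sign(imf)) != 0)
            if crossings == 0:
                continue
            mean_period = 2.0 * (t_days[-1] - t_days[0]) / crossings
            if min_period <= mean_period <= max_period:
                seasonal += imf
        return seasonal

    # Flood-seasonality indicator: correlation with the river stage series
    # r = np.corrcoef(seasonal_subsignal(vh_series, t), river_level)[0, 1]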
We show that the correlation of Sentinel-1 VH forest backscatter time series with the water level of the Juruá River is higher in seasonally flooded forests than in non-flooded forests. The correlation further increases when we use the seasonal sub-signal instead of the original Sentinel-1 signal. This is partly due to general de-noising of the signal. Since the high correlation values are clustered in the seasonally flooded forest, we can assume that the seasonality is also due to the common driver of flood appearance. Analysing Sentinel-1 VV signals, we do not find such a diagnostic relationship. While the overall correlation between Sentinel-1 VV and the water level is higher, the correlation is not confined to flooded areas only. This indicates that the Sentinel-1 VV signal has an overall higher seasonality, but that this seasonality is not driven by the forest flooding near the river.

Our results lead to two hypotheses. Firstly, during the flooded state of the forest there may be double-bounce scattering between the stems and the standing water for the part of the signal which is not scattered in the canopy. In this case, the returning signals are depolarized again in the canopy, which would increase the VH component of the signal during the flooded state compared to the non-flooded state, as observed. Secondly, during the flooded state, the water content in the canopy may be increased because of the standing water underneath, which would increase the volume scattering directly in the canopy. Further research is necessary to distinguish the best explanation.

Independent of the exact scattering mechanism, these results have implications for the calibration and validation of Sentinel-1 data from the Amazon rainforest. Specifically, we cannot expect Sentinel-1 VH signals from tropical rainforest to be homogeneous in space and time. The observation that flooded rainforest can be discriminated from non-flooded forest with C-band time series opens a wealth of new important applications for Sentinel-1 time series data. It also encourages further research on the power of time series exploration with intelligent algorithms and microwave backscatter modelling.
Near-real time surface water extent monitoring from SLC Sentinel-1 SAR imagery: the SMART project case study.
Cristian Silva-Perez (1), Javier Ruiz-Ramos (2), Armando Marino (1), Andrea Berardi (2)
(1) University of Stirling, Scotland, UK; (2) The Open University, Milton Keynes, UK.
Introduction
Timely information on surface water extent is a key input for informed decision making. In the context of floods, it allows assessment of flood impact, effective risk and action management, and prioritised investment in flood defences. For natural ecosystems such as wetlands, it allows monitoring of the critical hydrological dynamics that provide the appropriate conditions for rich biodiversity to thrive. The Landscape Sensor-based Monitoring Assessment using Remote Technologies (SMART) project provides a novel Synthetic Aperture Radar (SAR) based tool to establish a new global service for surface water extent monitoring. The project will initially demonstrate its potential at three sites. In the Firth of Forth (Scotland), the aim is to monitor the extent of floods: heavy rainfall and fast snow-melting events cause river levels to rise, resulting in surface water flooding that affects private and commercial infrastructure and disturbs transportation. The two additional test sites are the Colombo urban wetlands (Sri Lanka) and the North Rupununi wetlands (Guyana). The natural ecosystems in these two locations are characterised by seasonal floods and support important terrestrial and freshwater biodiversity, supplying local communities with a range of livelihood activities, including subsistence fishing and ecotourism. In these locations, a service to monitor the hydrological and ecological condition of the wetlands provides crucial information for sustainable development, particularly in the context of the flooding and droughts exacerbated by climate change.
Methods
This presentation will provide a comparison of the algorithms for near-real-time surface water extent monitoring implemented within SMART, using Single Look Complex (SLC) Dual-polarimetric (Dual-PolSAR) Sentinel-1 imagery. As a benchmark, we include the following two methodologies:
• Our previous work developed in [1] for natural wetlands monitoring, based on the Cumulative Sums algorithm applied to SAR time series (SAR-CUSUM); a minimal sketch of the idea follows this list. It consists of a robust statistical and multitemporal approach to map open water and flooded vegetation areas. The algorithm extracts a dry-condition reference from historical imagery and accumulates the difference between new acquisitions and this reference. This procedure highlights consecutive deviations from the dry conditions, thus enhancing any recurrent variation taking place over time. Using a threshold based on regional histograms, open water and flooded vegetation areas can be masked out.
• A current state-of-the-art algorithm for surface soil moisture estimation [2] which utilises a stack of multitemporal VV backscatter intensity images. It infers reference dry and wet conditions for a test site and compares them via a normalised difference with every new SAR image acquisition. Since the results of the comparisons are normalised, it presents the estimated surface soil moisture in an intuitive 0 to 1 scale.
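A minimal sketch of the cumulative-sums idea referenced above, assuming intensity images in dB and an illustrative fixed threshold (the published algorithm derives its threshold from regional histograms):

    import numpy as np

    def cusum_score(stack, dry_reference):
        # stack: (t, rows, cols) backscatter time series (dB);
        # dry_reference: (rows, cols) historical dry-condition mean.
        # Accumulating deviations from the dry reference makes persistent
        # changes (e.g., open water or flooded vegetation) stand out.
        return np.cumsum(stack - dry_reference[None, :, :], axis=0)

    def open_water_mask(cusum, threshold_db=-3.0):
        # Persistent backscatter drops below a threshold flag open water;
        # the value used here is an assumption for the sketch
        return cusum[-1] < threshold_db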
In SMART we also derived a series of novel algorithms that are included in this comparison, as follows:
a) An expanded version of the multitemporal CuSum algorithm presented in [1] that includes Dual-PolSAR features such as alpha angle and entropy, derived from the dual polarimetric pixel covariance matrices.
b) A set of change detection approaches based on optimisations of covariance matrices, as presented in [3]. These PolSAR detectors are designed to identify not only the intensity of change between images, but also the type of change, expressed by the change in scattering mechanisms. For the SMART project, we adapted two of these detectors to the Dual-PolSAR case: a change detector based on the difference of covariance matrices, and a detector based on the ratio of these covariance matrices. Given the difference in interpretation between them, complementary information may be obtained.
c) A deep learning-based approach that uses the well-known U-Net image segmentation model [4] for supervised surface water extent monitoring. For this method, a training and testing dataset was created from the semi-automatic flood maps produced by the Copernicus Emergency Management Service (human-adjusted maps produced by a computer algorithm). We test different combinations of input features, including backscatter intensities and Dual-PolSAR features derived from SLC Sentinel-1 imagery.
Results
Preliminary results presented in [1] show a rigorous statistical flood monitoring tool that demonstrated 90% accuracy in detecting the extent of open-water flooding at the Guyana demonstration site. In addition, preliminary results show that including the phase information of the SLC images improves accuracy, especially for floods under vegetation. In the presentation, we will show exact accuracy figures for the algorithm comparison once the SLC Sentinel-1 data are included in the retrieval algorithms. This includes an analysis of the benefits and limitations of employing Dual-PolSAR SLC data and of unsupervised versus supervised learning approaches. We will also highlight the bespoke elements required to map surface water in natural ecosystems (wetlands) and other terrains (flood mapping).
An additional point of this presentation is to introduce the easy-to-use visualisation tools developed in SMART (web and mobile mapping apps), tailored for non-specialist user communities to exploit the results produced by the algorithms. The visualisation and mapping platforms are developed in close collaboration with communities affected by floods through capacity-building programs. The tools are designed to include environmental and social information in order to support decision-making in relation to flooding (e.g., regarding mosquito-borne diseases, flood monitoring and planning, biodiversity conservation, infrastructure development and agriculture).
Acknowledgement:
This research was supported by the SMART project, funded by the UK Space Agency's Partnership in Innovation Development (Pin2D). The SMART consortium comprises The Open University, the University of Stirling, and the Cobra Collective CIC. Sentinel-1 data were provided courtesy of ESA. Validation optical imagery was provided courtesy of Planet.
References
[1] Ruiz-Ramos, J., Marino, A., Berardi, A., Hardy, A., & Simpson, M. (2021, July). Characterization of Natural Wetlands with Cumulative Sums of Polarimetric SAR Timeseries. In 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS (pp. 5899-5902). IEEE.
[2] Bauer-Marschallinger, B., Freeman, V., Cao, S., Paulik, C., Schaufler, S., Stachl, T., ... & Wagner, W. (2018). Toward global soil moisture monitoring with Sentinel-1: Harnessing assets and overcoming obstacles. IEEE Transactions on Geoscience and Remote Sensing, 57(1), 520-539.
[3] Marino, A., & Nannini, M. (2021). Signal Models for Changes in Polarimetric SAR Data. IEEE Transactions on Geoscience and Remote Sensing.
[4] Ronneberger, O., Fischer, P., & Brox, T. (2015, October). U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention (pp. 234-241). Springer, Cham.
Wetlands provide many essential ecosystem services, but at the same time they are threatened by processes such as urbanization, the expansion of farmland, and the extraction and pollution of freshwater. They play a key role in global carbon, water, and nutrient cycles, have a strong impact on local and regional climate, and are essential for global food and water security. Due to their heterogeneous characteristics, spatiotemporal dynamics and topographic settings, wetlands are often hard to identify, map and monitor, which hinders their efficient conservation and their recognition in policy frameworks and management practices. Ongoing developments in the field of Earth Observation (EO) provide an opportunity to improve this situation, enabled especially by freely available satellite data such as USGS Landsat imagery and the Copernicus program.
GEO-Wetlands is a collaborative framework formed as an initiative within the Group on Earth Observations (GEO) with the ambition of growing into a GEO flagship in the coming years. The Mission of GEO-Wetlands is to develop sustained global approaches for EO based wetland inventory, mapping, monitoring & assessment to support global policy frameworks like the Ramsar Convention on Wetlands, the UN Sustainable Development Goals, the UNFCCC Paris Agreement, and the Sendai Framework for Disaster Risk Reduction.
Since 2016, GEO-Wetlands has been initiated and supported through several research and innovation projects funded, among others, by the European Commission, the European Space Agency, the Japan Aerospace Exploration Agency, and the German Space Agency. These project activities led to the formation of a GEO-Wetlands community and the development of a collection of tools, methods, datasets, guidelines and pilots for a geospatial wetland portal and a web-based knowledge collection. The ambition for the coming years is to develop this collection into a GEO-Wetlands toolkit that is freely available, easily accessible, and continuously developed and updated to support global wetland practitioners and decision makers.
A strategy is currently being developed to grow GEO-Wetlands into a GEO Flagship with its activities being embedded in the Ramsar strategic plan, SDG 6.6, and the UNFCCC global stocktake. This entails the establishment of a permanent GEO-Wetlands secretariat responsible for management of the flagship and its community, fundraising, communication and maintenance of the GEO-Wetlands toolkit and website.
GEO-Wetlands is an open and inclusive partnership welcoming contributions and participation from organizations, initiatives, companies, and individuals from all parts of the world and from all sectors related to the use, management, protection, restoration and monitoring of wetlands. New projects over the coming years will allow this partnership to grow and strengthen and to develop new approaches and components for the GEO-Wetlands toolkit with the final goal of developing global wetland maps, products, and statistics to support the wise use and sustainable management of all wetland types and their ecosystem services worldwide.
The threat of sea level rise to coastal communities is an area of significant concern to the well-being and security of future generations. Environmental policy actions and decisions affecting coastal states are being made now. Given the considerable range of applications, sustained altimetry satellite missions are required to address operational, science and societal needs. This article describes the Copernicus Sentinel-6 mission that is designed to address the needs of the European Copernicus programme for precision sea level, near-real-time measurements of sea surface height, significant wave height, and other products tailored to operational services in the climate, ocean, meteorology, and hydrology domains. It is designed to provide enhanced continuity to the very stable time series of mean sea level measurements and ocean sea state started in 1992 by the TOPEX/Poseidon (T/P) mission and follow-on Jason-1, Jason-2 and Jason-3 satellite missions. The mission is implemented through a unique international partnership with contributions from NASA, NOAA, ESA, EUMETSAT, and the European Union (EU), with additional technical support provided by CNES. It includes two satellites that will fly sequentially (separated in time by about 5 years). The first satellite, named Sentinel-6 Michael Freilich, launched from Vandenberg Air Force Base, USA on 21 November 2020. The main payload is the Poseidon-4 dual frequency (Ku/C-band) nadir-pointing radar altimeter providing synthetic aperture radar (SAR) processing in Ku-band to improve the signal through better along-track sampling and reduced measurement noise. The altimeter has an innovative interleaved mode enabling radar data processing on two parallel chains, one with the SAR enhancements and the other furnishing a "Low Resolution Mode" that is fully backward-compatible with the historical T/P and Jason measurements, so that complete inter-calibration between the state-of-the-art data and the historical record can be assured. A three-channel Advanced Microwave Radiometer-Climate Quality (AMR-C) developed by NASA JPL provides measurements of atmospheric water vapour that would otherwise degrade the radar altimeter measurements. An experimental High Resolution Microwave Radiometer (HRMR) is also included in the AMR-C design to support improved performance in coastal areas. Additional sensors are included in the payload to provide Precise Orbit Determination, atmospheric sounding via GNSS-Radio Occultation and radiation monitoring around the spacecraft.
This presentation provides an overview of the Sentinel-6 Michael Freilich Mission highlighting the current status of the mission, satellite, ground segment and products, and includes an outlook for the mission.
The main payload of Sentinel-6 Michael Freilich (S6-MF) is the Ku-band Poseidon-4 radar altimeter, which has been in orbit since November 2020 and was developed based on the in-orbit performance of the Jason-2 mission and, for the first time, on requirements concerning Global Mean Sea Level (GMSL) drift. The design also took into account the in-orbit performance of the CryoSat-2 mission and the CryoSat-2/Sentinel-3 altimeter definitions, and explored further enhancements that now demonstrate both unfocussed and focussed processing capabilities for future applications. The Poseidon-4 design and its performance have provided the basis for future missions, such as CRISTAL and potentially the Sentinel-3 Next Generation satellites.
In order to meet the requirements on drift, the Poseidon-4 design and operations include the capability to calibrate the Level 1b product at 20 Hz in terms of power and delay, and power and phase at burst level, and to adapt the on-board chirp in order to counter drifts of the instrument range response shape (in particular the main-lobe width). The design also establishes the functionality to expand the full transponder commanding capability for use with corner reflectors, which will be a game changer for global external calibration enhancement once fully demonstrated and characterised. Furthermore, the altimeter embarks a mode, labelled Range Migration Compensation (RMC), that reduces the data volume by a factor of 2 and has now been fully demonstrated in orbit.
The presentation will show the high-level design and present the in-orbit characterisation of both the nominal and redundant instrument chains, the in-orbit internal and external calibration, and some geophysical retrievals that demonstrate the performance of the RMC, noting that both unfocussed and focussed SAR processing have been demonstrated, opening up a new range of applications.
The Sentinel-6B satellite is due for launch in 2025 (TBC) and is undergoing final testing before being placed in storage in summer 2022. The presentation will highlight key results from the on-ground testing.
The success of the S6 concept and development, going back to the original concept of 2007, and its international payload demonstrate the excellent partnership, collaboration and team spirit of CNES, EUMETSAT, ESA, NASA and NOAA and of the European satellite, payload and ground segment industries.
Acknowledgement
The authors wish to acknowledge the support of the teams of Aresys, isardSAT and CLS, who developed the specifications for the ground processor by means of prototyping; the European industry supplying the satellite and payload, Airbus (Friedrichshafen, Germany), Thales Alenia Space (Toulouse, France), RUAG (Vienna, Austria) and TSA (Elancourt, France); and the partner agencies CNES, EUMETSAT, NASA-JPL and NOAA, providing the system and mission performance expertise under the Sentinel-6 Mission Performance Working Group (MPWG).
The Sentinel-6 Michael Freilich is the newest reference satellite altimetry mission. It includes a radar altimeter, a precision orbit determination suite and a radiometer to measure wet tropospheric path delay. The AMR-C radiometer on Sentinel-6 includes two new innovations compared to the prior Jason-series missions. The first innovation is the inclusion of a secondary calibration system external to the radiometer that is used to stabilize the wet path delay measurement to 0.7mm/yr over 5+ years. The second is a high-frequency radiometer (termed HRMR) with less than 5km spatial resolution that improves the measurement near land and sea ice boundaries. We will present the performance of the AMR-C and HRMR radiometer systems over the first 1.5 years of in-flight operation. We show that the secondary calibration system provides an unprecedented level of stability and is a new standard for calibration stability for altimeter missions. We also show that the HRMR is providing path delay with less than 1cm uncertainty to within 5km from land, a dramatic improvement from prior altimeter missions.
Sentinel-6 PDAP products assessment over ocean
Launched on 21 November 2020, Sentinel-6 Michael Freilich is a Copernicus satellite designed to ensure the continuity of the mean sea level climate time series measured by the TOPEX/Poseidon and Jason satellites since 1992. The main payload carried on board Sentinel-6 is the Poseidon-4 (POS4) dual-frequency radar altimeter. POS4 uses a 9 kHz pulse repetition frequency and an innovative interleaved chronogram which optimizes the number of measurements acquired. This interleaved mode enables the acquisition of two modes in parallel: a Low Resolution (LR) mode, which aims at extending the legacy of the mean sea level record, and a High Resolution (HR) or Synthetic Aperture Radar (SAR) mode, which significantly improves the along-track spatial resolution and reduces measurement noise. Downlink of both LR and HR data is enabled by the on-board Range Migration Correction (RMC) algorithm, which reduces the HR data volume.
Started right after the Launch and Early Orbit Phase, the Commissioning Phase aimed at calibrating and validating Sentinel-6 data after on-ground processing. It ended on 29 November 2021 with the dissemination of all Sentinel-6 products to operational users, following a successful In-Orbit Commissioning Review.
In this presentation we focus on the calibrated and validated altimetry products processed and disseminated by the EUMETSAT PDAP ground segment. All the NRT/STC/NTC timeliness products are assessed, with particular attention given to the NTC product quality assessment. Both LR and HR performances are addressed.
POS4 instrument performances from the PDAP Long Term Monitoring are briefly summarized, showing that, even though POS4 presents a sensitivity to temperature, all the instrument specification requirements are met. It is shown that the LR requirements are fulfilled for all latencies and that the data products show good continuity with Jason-3. The GMSL inter-mission bias is well characterised after about 6 months, and GMSL continuity with Jason-3 is expected, although a longer time series is needed to fully demonstrate this. It is also shown that the HR requirements are met for all latencies, except for SWH, which is overestimated at high wave heights due to vertical wave velocity impacts (a known issue in SAR altimetry). HR RMC is also assessed and shows performances very similar to those of the HR RAW data over the open ocean and in coastal regions.
This work presents the first calibration results for Sentinel-6 using the permanent CDN1 transponder Cal/Val facility during its tandem phase with Jason-3. Evaluation started on 18 November 2020 and has continued without interruption to date. The CDN1 Cal/Val facility is part of the ESA Permanent Facility for Altimeter Calibration and serves as a dedicated external calibration source for multi-mission (S3A, S3B, Jason-3, CryoSat-2) satellite altimetry.
At the time of writing, Sentinel-6 has passed over the CDN1 Crete transponder more than thirty times. For these passes, the Sentinel-6 high-resolution Short Time Critical and Non Time Critical products have been analyzed by an international working group led by ESA to assess the performance of the Poseidon-4 altimeter. In particular, the absolute range bias and datation bias were assessed, together with the instrument capabilities in terms of achievable resolution in both the along-track and across-track directions. Diverse processing methodologies and algorithms were implemented within this group to cross-validate the results.
The performance of the Poseidon-4 altimeter is presented here, considering both the nominal and the redundant instrument. The performance of Poseidon-4 is also evaluated against the reference Jason-3 Poseidon-3 altimeter, taking full advantage of the tandem phase of the two satellites. These preliminary results show that the Sentinel-6 altimeter performance is within mission requirements.
In 2017, the National Research Council released the second Earth Science Decadal Survey (ESDS). The ESDS recommended four sets of measurements referred to as the Decadal Observables. One of these was the Surface Biology and Geology (SBG) Decadal Observable (DO). The Decadal Observable measurements, together with measurements from the upcoming NISAR mission, are now referred to as the Earth System Observatory (ESO). The SBG-DO called for high spectral and spatial resolution measurements in the visible to shortwave infrared (VSWIR: 0.38-2.4 μm) and high spatial resolution multispectral measurements in the mid and thermal infrared (MIR: 3-5 μm and TIR: 7-12 μm). The MIR and TIR (MTIR) measurements would be made every few days and the VSWIR measurements every couple of weeks. The VSWIR and MTIR measurements would have spatial resolutions of 30 m and 60 m respectively. These measurement requirements were based, in part, on those recommended for the Hyperspectral Infrared Imager (HyspIRI) mission in the prior ESDS. After the release of the 2017 ESDS, NASA formed teams to develop architectures for each of the DOs. The SBG team recommended that the VSWIR and MTIR measurements be made from two separate platforms, in a morning and an afternoon orbit respectively. The SBG team also recommended a technology demonstration with a constellation of smaller spacecraft carrying VSWIR instruments in a morning orbit. The morning orbit was preferred for the VSWIR measurements to minimize cloud cover, and the afternoon orbit preferred for the MTIR to measure the peak temperature stress of plants, which typically occurs in the early afternoon. The architecture team recommended global revisit times of 16 and 3 days for the VSWIR and MTIR respectively, which resulted in swath widths of 185 km and 935 km at the nominal altitudes chosen for the VSWIR and MTIR platforms.
SBG is a global survey mission that will provide an unprecedented capability to assess how ecosystems are responding to natural and human-induced changes. It will help us assess the status of biodiversity around the world and the role of different biological communities on land and within inland water bodies, as well as coastal zones. It will help identify natural hazards, in particular volcanic eruptions, and any associated precursor activity, and it will map the mineralogical composition of the land surface. In summary SBG will advance our scientific understanding of how the Earth is changing as well as provide valuable societal benefit, in particular, in understanding and tracking dynamic events such as volcanic eruptions, wildfires and droughts.
As part of the risk reduction activities for the MTIR of HyspIRI, a space-ready prototype, the HyspIRI Thermal Infrared Radiometer (PHyTIR), was developed in the laboratory to mature the technology readiness of the instrument, and an airborne Hyperspectral Thermal Emission Spectrometer (HyTES) was developed to acquire antecedent data for science studies. PHyTIR matured the technology readiness level (TRL) of certain key subsystems of the TIR imager. In 2014 the ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) was selected as part of the NASA Earth Ventures Instrument program. ECOSTRESS used the components developed with PHyTIR. ECOSTRESS addresses critical questions on plant–water dynamics and future ecosystem changes with climate. ECOSTRESS has five TIR spectral bands, a spatial resolution of 68 m x 38 m (cross-track x down-track) and a revisit of every few days at varying times of day from the International Space Station (ISS). ECOSTRESS was delivered to the ISS in 2018 and operations began shortly thereafter. ECOSTRESS was planned to operate for one year; however, due to demand, and with the instrument continuing to operate well, NASA extended the mission until 2023.
HyTES represents a new generation of airborne TIR imaging spectrometers with much higher spectral resolution and a wide swath. HyTES is a pushbroom imaging spectrometer with 512 spatial pixels over a 50-degree field of view. HyTES includes many key enabling state-of-the-art technologies, including a Dyson-inspired spectrometer and a high-performance convex diffraction grating. The Dyson optical design allows for a very compact and optically fast system (F/1.6) and minimizes cooling requirements, since a single monolithic prism-like grating design can be used, which allows baffling for stray-light suppression. The monolithic configuration eases mechanical tolerancing requirements, which are a concern since the complete optical assembly is operated at cryogenic temperatures (~100 K). HyTES originally used a Quantum Well Infrared Photodetector (QWIP) and had 256 spectral channels between 7.5 μm and 12 μm. In 2021 this was upgraded to a Barrier InfraRed Detector (BIRD) array with 284 spectral channels. The first science flights with the QWIP were conducted in 2013 and the first science flights with the BIRD in 2021. Many flights have been conducted, and the instrument can now be deployed on a Twin Otter or the NASA ER-2 aircraft, allowing a variety of pixel sizes depending on flight altitude. In 2022 the instrument will also be deployed on the NASA Gulfstream V aircraft. All the data acquired thus far have been processed and are freely available from the HyTES website (http://hytes.jpl.nasa.gov). Higher-level products (surface temperature and emissivity, and gas maps) are available for the more recent data.
This presentation will describe the current status and plans for SBG, ECOSTRESS and HyTES programs as well as provide some recent results from ECOSTRESS and HyTES.
The Land Surface Temperature Monitoring (LSTM) mission aims to address water, agriculture and food security issues by monitoring the variability of Land Surface Temperature (and hence evapotranspiration) at the European field scale, enabling more robust estimates of water productivity. The LSTM mission observations will support the Copernicus land monitoring service, related European as well as global and international policies, and downstream applications.
In this study, land surface temperature (LST) estimates with the Temperature and Emissivity Separation (TES) method and evapotranspiration (ET) with the simplified surface energy balance index (S-SEBI) model have been obtained from airborne data acquired in the framework of the SurfSense 2018 and LIAISE 2020 experiments in support of the LSTM mission.
The chosen test areas for data collection are located in Italy and Spain, both presenting a Mediterranean climate with very mild wet winters and very hot dry summers. The experimental sites consist of large irrigated flat areas with growing crops (mainly corn and alfalfa), in which in situ measurements of LST, radiation fluxes and evapotranspiration were taken during the campaigns, generating a large database for validation purposes. Images were acquired by the Thermal Airborne Spectrographic Imager (TASI), a hyperspectral thermal sensor covering the range of 8 to 11.5 microns across 32 spectral bands. For Visible and Near Infrared (VNIR) data, the HyPlant airborne sensor provided the spectral information from 370 nm to 2500 nm necessary for the evapotranspiration retrievals.
LST performance was analyzed by applying the TES algorithm to different band configurations of the TASI sensor and to simulated band configurations of the LSTM mission. Integrating LST and VNIR data, instantaneous values of evapotranspiration were also estimated and validated against eddy covariance measurements.
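For readers unfamiliar with S-SEBI, the sketch below illustrates the core of the scheme used here for ET: dry and wet edges fitted in the LST-albedo scatter define a per-pixel evaporative fraction that scales the available energy. It is a minimal illustration on synthetic values; the bin count, percentile choices and energy terms are assumptions, not the campaign's actual processing chain.

```python
import numpy as np

def ssebi_le(lst, albedo, rn, g, n_bins=20):
    """Instantaneous latent heat flux (W m-2) via a simplified S-SEBI scheme."""
    bins = np.linspace(albedo.min(), albedo.max(), n_bins + 1)
    centers, t_hot, t_cold = [], [], []
    for lo, hi in zip(bins[:-1], bins[1:]):
        m = (albedo >= lo) & (albedo < hi)
        if m.sum() < 10:                          # skip sparsely populated bins
            continue
        centers.append(0.5 * (lo + hi))
        t_hot.append(np.percentile(lst[m], 98))   # dry edge sample
        t_cold.append(np.percentile(lst[m], 2))   # wet edge sample
    t_h = np.polyval(np.polyfit(centers, t_hot, 1), albedo)    # T_hot(albedo)
    t_le = np.polyval(np.polyfit(centers, t_cold, 1), albedo)  # T_cold(albedo)
    ef = np.clip((t_h - lst) / (t_h - t_le), 0.0, 1.0)         # evaporative fraction
    return ef * (rn - g)                                       # LE = EF * (Rn - G)

# Synthetic scene, just to exercise the function:
rng = np.random.default_rng(1)
albedo = rng.uniform(0.05, 0.35, 10000)
lst = 300.0 + 40.0 * albedo + rng.uniform(0.0, 15.0, albedo.size)
le = ssebi_le(lst, albedo, rn=600.0, g=60.0)
```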
One of the obvious effects of global change is reflected in land surface temperature (LST) anomalies and in the interannual variability of evaporation (E). LST carries the imprint of surface water availability and is highly sensitive to evaporative cooling and soil moisture variations. It constrains the magnitude and variability of the surface energy balance (SEB) components and is a preeminent variable for retrieving E in terrestrial ecosystems. To understand terrestrial ecosystem functioning, advanced monitoring of the terrestrial biosphere's response to water stress, and of agricultural consumptive water use, is of overarching importance. The ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) has been providing high spatio-temporal thermal infrared (TIR) observations (~70 m, multiple revisits per day) since its launch in 2018. Taking advantage of the publicly accessible ECOSTRESS images, the European ECOSTRESS Hub (EEH) aims at producing LST and E data from models with different structures and parameterization schemes over Europe and Africa. In the EEH, LST products are retrieved with the Split Window (SW) and Temperature Emissivity Separation (TES) algorithms. Retrieval of evaporation in the EEH is based on three models, namely the Surface Energy Balance System (SEBS) and Two Source Energy Balance (TSEB) parametric models, as well as the analytical Surface Temperature Initiated Closure (STIC) model. Along with the analysis-ready data stored on a cloud platform, users are also given access to run the selection of models through an interactive interface backed by a supercomputing system. A preliminary evaluation of the EEH LST and ET products for 2018 showed promising results, in good agreement with the official NASA/JPL products and in-situ measurements. The unique feature of the EEH is that both LST algorithms are driven by homogenized radiance and environmental datasets, and all the evaporation models are forced by uniform upper- and lower-boundary conditions. This characteristic enables appropriate comparisons among different models for a large spectrum of energy and water availability scenarios. Overall, the EEH will serve as a support to the next-generation Copernicus High Priority Candidate Land Surface Temperature Monitoring (LSTM) mission.
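As background to the SW retrieval mentioned above, a generic split-window form (in the spirit of the Wan and Dozier formulation) is sketched below; the coefficients are placeholders that would normally be regressed from radiative-transfer simulations stratified by water vapour and view angle, and the EEH's operational implementation may differ.

```python
def split_window_lst(t11, t12, emis11, emis12, coeffs):
    """Generalized split-window LST (K). t11/t12: brightness temperatures (K)
    in the ~11/~12 um bands; emis11/emis12: band emissivities; coeffs:
    placeholder regression coefficients (b0..b6)."""
    b0, b1, b2, b3, b4, b5, b6 = coeffs
    eps = 0.5 * (emis11 + emis12)    # mean band emissivity
    deps = emis11 - emis12           # band emissivity difference
    tm = 0.5 * (t11 + t12)           # mean brightness temperature
    td = 0.5 * (t11 - t12)           # half the split-window difference
    return (b0
            + (b1 + b2 * (1 - eps) / eps + b3 * deps / eps**2) * tm
            + (b4 + b5 * (1 - eps) / eps + b6 * deps / eps**2) * td)
```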
In this work, Land Surface Temperature (LST) estimation techniques applied to remote sensing data offer systematic and precise detection of thermal anomalies in Italian geothermal areas. LST was retrieved by means of two different methodologies, and results were validated against ground measurements collected during field campaigns and/or by cross-validation methods. The main analyses were conducted using nighttime satellite data, in order to reduce solar effects, and the comparisons showed very good agreement. Moreover, the comparison between ground data collected during the morning and LST retrieved from daytime satellite data also showed good agreement.
The use of three sensors (ASTER, ECOSTRESS and Landsat-8), despite their different GSDs and LST estimation methodologies, has produced highly correlated results, offering the possibility to extend the LST time series. ASTER is one of the most versatile satellite imagers used for studies of thermal anomalies; it can estimate surface temperatures with several thermal infrared spectral channels. TIRS/Landsat-8 images, while having fewer spectral channels and slightly lower spatial resolution than ASTER, provide additional temperature data for estimating and monitoring LST on active volcanoes as well as in geothermal areas. The recent ECOSTRESS sensor, very similar to ASTER, can increase the amount of available data. Employing these three spaceborne sensors has proven to be a cost-effective method for generating products for the detection of geothermal anomalies.
Well-known methodologies have been used to evaluate the surface temperature at two test sites characterized by different geological features: the volcanic area of Solfatara-Campi Flegrei and the geothermal area of Parco delle Biancane. A cross-comparison test was conducted, comparing the LST estimated by ASTER, ECOSTRESS and Landsat-8 with the surface temperature estimated by the NASA HyTES sensor and by a UAV survey. During the last field campaign (June 2019), data acquired by the Twin Otter aircraft of the HyTES project (NASA/JPL/ESA) were collected thanks to a collaboration with NASA/JPL.
Moreover, measuring gas emissions of erupting volcanoes is a risky task that cannot be performed with hand-portable or backpack-carried gas analysis systems. Satellite remote sensing and near-remote sensing instruments are useful for providing gas flux information when in situ sampling is not possible, but not all gases of interest can be measured with this method, and such estimates still require in situ validation to provide proper measurements of the gas fluxes emitted by the volcano. The measurement of volcanic gases such as CO2, H2S and SO2 emitted from summit craters and fumaroles is a crucial parameter for monitoring volcanic activity. Measurements of passive (non-eruptive) degassing plumes have been achieved by combining satellite data with local airborne measurements using UAVs (Unoccupied Aerial Vehicles) and ground-based in situ measurements. We include measurements from a miniature multi-gas analysis system (called miniGAS) measuring H2O, CO2, H2S and SO2 gases, and a field-portable backpack mass spectrometer system (called MPH Explorer) designed for in situ volcanic gas analysis.
Land surface temperature (LST) is a key variable in the study of the thermal environment, modelling of surface energy fluxes, estimation of evapotranspiration and soil moisture, and the characterization of urban heat island effects [1,2]. LST can be effectively retrieved from remotely sensed data in the thermal infrared part of the spectrum (TIR).
To properly capture the high variability of complex areas, such as an urban environment, both spatially and temporally dense LST data are needed. Unfortunately, sensors on board satellites with a high revisit frequency usually cannot provide adequately detailed spatial information, whereas high spatial resolution sensors typically have a low revisit frequency. Among the satellite TIR sensors in operation, the Landsat-8 TIR sensor provides 100 m spatial resolution imagery, which is well suited to capturing surface details, but its long revisit cycle of 16 days has limited its use in generating a temporally continuous LST dataset. In this context, downscaling low-resolution imagery is necessary to bridge the existing gap and make frequent thermal data available at a fine spatial resolution.
In the literature, several methods have been developed to generate daily LST at fine spatial resolution, which blend information from different sensors and/or different spectral bands [3,4]. A widely studied strategy is the use of daily TIR observations provided by the Moderate-resolution Imaging Spectroradiometer (MODIS) at 1 km nominal spatial resolution as precursor variables. Despite all the progress, existing algorithms are still subject to several key limitations, among which: (1) the need for a high-resolution reference image; (2) mixing effects, due to the heterogeneity of neighboring pixels, which are difficult to control; (3) failure to account for changes in the local variance; (4) sudden changes and disturbances are hardly detected. Moreover, recent studies have criticized the use of linear models in the downscaling procedure, advocating highly nonlinear approaches to obtain accurate results, coupled with the use of a large number of predictors from different spectral indices [5].
This study focuses on the relations between Landsat and daily MODIS TIR data, with the aim of analyzing, and possibly overcoming, these difficulties in the downscaling process. We show that there are linear correlations between the two datasets when the data are properly aggregated. We then propose an effective downscaling strategy for the reconstruction of missing Landsat LST images over an area of interest, which only makes use of the coarse-resolution MODIS images at the prediction days and the Landsat/MODIS correlations retrieved from the analysis of the Landsat/MODIS historical series. The new approach is tested over urban and non-urban environments. This methodology could also be applied to the constellation composed of Sentinel-3 SLSTR and the future LSTM mission.
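A minimal sketch of the scale-consistent linear strategy described above is given below; it assumes coincident Landsat/MODIS pairs on a common grid whose dimensions are integer multiples of the aggregation factor, and it is one plausible reading of the approach rather than the authors' exact algorithm.

```python
import numpy as np

def aggregate(fine, f):
    """Block-average a fine grid (e.g. Landsat LST) into f x f coarse cells."""
    h, w = fine.shape
    return fine[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def fit_relation(landsat_stack, modis_stack, f):
    """Linear Landsat/MODIS relation from the historical series of coincident
    dates, plus a mean fine-scale anomaly pattern."""
    x = np.concatenate([m.ravel() for m in modis_stack])
    y = np.concatenate([aggregate(l, f).ravel() for l in landsat_stack])
    slope, intercept = np.polyfit(x, y, 1)
    anom = np.mean([l - np.kron(aggregate(l, f), np.ones((f, f)))
                    for l in landsat_stack], axis=0)
    return slope, intercept, anom

def predict_fine(modis_day, slope, intercept, anom, f):
    """Reconstruct a Landsat-like LST for a day with only MODIS data."""
    coarse = slope * modis_day + intercept
    return np.kron(coarse, np.ones((f, f))) + anom
```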
[1] D. Quattrochi, et al., Thermal Remote Sensing in Land Surface Processing, CRC Press (2004).
[2] Q. Weng, Techniques and Methods in Urban Remote Sensing, Wiley (2019).
[3] F. Gao, et al., Fusing Landsat and MODIS Data for Vegetation Monitoring, IEEE Geoscience and Remote Sensing Magazine, 3 (2015) 47.
[4] J. Wang, et al., Thermal unmixing based downscaling for fine resolution diurnal land surface temperature analysis, ISPRS Journal of Photogrammetry and Remote Sensing, 161 (2020) 76.
[5] V. Moosavi, et al., A wavelet-artificial intelligence fusion approach (WAIFA) for blending Landsat and MODIS surface temperature, Remote Sensing of Environment, 169 (2015) 243.
The TRISHNA mission (Thermal infraRed Imaging Satellite for High-resolution Natural resource Assessment) is a cooperation between the French (CNES) and Indian (ISRO) space agencies, to be launched in 2025. Over a five-year lifetime, it is intended to measure the thermal infrared signal of the surface-atmosphere system approximately twice a week, globally, at 60-meter resolution for the continents and the coastal ocean, and at a resolution of 1000 meters over the deep ocean.
The mission focuses on the estimation of evapotranspiration, and will contribute to the detection of water stress, the monitoring of irrigation and its management. It will provide direct information on the use of water, and in particular its agricultural consumption. These data will also indirectly contribute to water availability assessments and studies on the natural or anthropogenic causes of the variation in groundwater level.
TRISHNA and its frequent high-resolution measurements address major scientific, economic and societal issues through the six major themes that the mission tackles from the angle of research and of application development: ecosystem stress and water use (through the monitoring of agriculture and of the water content of natural vegetation), coastal and inland waters (sea, lakes, rivers), urban ecosystem monitoring, cryosphere, solid Earth, and atmosphere.
The main requirements at the origin of the definition of the mission are the following: temporal resolution (capacity to produce an observable every 3 days), spatial resolution (access to the scale of the agricultural plot), spectral coverage (visible and near-infrared capability in support of the thermal infrared) and radiometric quality (measurement precision). Seven spectral bands will be available in the visible to short-wave infrared part of the spectrum: blue, green, red, near infrared (865 nm), water vapour (910 nm), cirrus (1.38 µm) and short-wave infrared (1.6 µm). Four spectral bands will be available in the thermal infrared.
Moreover, the system design is also driven by the objective of demonstrating that this kind of data, when delivered with a low latency, can be used to raise an early warning signal, in order to prevent the effects of droughts on agricultural surfaces or for issues related to the health of agro-forest ecosystems, but also of wetlands and coral reefs.
As far as products are concerned, the specificities of the mission play an important role: there will be one mission center in India and one in France, each mission center being able to deliver all types of data, with full coordination, including common Algorithm Theoretical Basis Documents and a coordinated process involving frequent and in-depth cross-calibration from the early validation phases to the operational phase. The TRISHNA mission centers will deliver level 1C, level 2 and level 3 data.
The TRISHNA products are defined to answer a wide range of use of the data, from science to applications.
Following the CEOS definition, level 1C data consist of Top-Of-Atmosphere radiometrically and geometrically calibrated reflectances in each of the 7 visible and short-wave infrared channels, and radiances in each of the 4 thermal infrared channels. A raw cloud mask is also provided, based on spectral thresholds. From this level on, the data are orthorectified and resampled on a uniform spatial grid. In order to ease the use of TRISHNA data together with other missions, and especially with present and future Copernicus data, it has been decided to use Sentinel-2 tiles and the Copernicus Digital Elevation Model.
Level 2A data are surface radiative variables: surface reflectances in 5 visible and short-wave infrared channels, Land Surface Temperature or Sea Surface Temperature, and Land Surface Emissivity in the 4 thermal infrared channels. Level 2A products also include Total Water Vapor Column and a refined cloud mask from multi-spectral and multi-temporal processings.
Level 2B variables are still under definition: albedo; vegetation indices and biophysical variables computed from visible and near-infrared data (NDVI, Leaf Area Index (LAI) and Fraction of Vegetation Cover). The variables necessary to compute the energy budget at the time of the acquisition will also be delivered: net radiation, ground heat flux and evaporative fraction. Finally, daily evapotranspiration and water stress will also be available for each day for which a TRISHNA acquisition is available.
Level 3 products, also under definition, are constituted of temporal and spatial synthesis of level 2 data. Moreover, as Soil-Vegetation-Atmosphere Transfer Models (SVAT) and Crop Simulation Models provide continuous simulation of evapotranspiration, they can be used for interpolating evapotranspiration between remote sensing data acquisitions with the objective of delivering daily evapotranspiration and daily water stress on a day-to-day basis as level 3 products.
As far as data production and distribution in the French mission ground segment are concerned, the CNES Computing Center offers a user-oriented service and will include, by TRISHNA launch, a computing platform (the next version of the current HPC – High Performance Computing) and a new storage platform (DataLake). Simplified and unified access to spatial data will be ensured through the GeoDataHub, an organization around Earth Observation data hosted at CNES. It will make it possible to supply new services distributing ready-to-use information, extracted from TRISHNA products but also from other high-resolution multispectral optical and thermal data, to the land surface and hydrology user community. In building this, an open data policy is a huge asset, and the economic and social benefits of this kind of policy are becoming more and more accepted, especially in the scientific community.
As we get further away from the SDG 2 target of Zero Hunger, food security remains one of the most pressing issues we face, especially in the context of increasing extreme weather events under a warming climate. As such, innovation in developing robust and scalable measures to monitor the world’s crops in a timely, transparent manner is a key component in helping to address this global challenge. With recent major advances in Earth observing (EO) satellites, cloud compute, GPS technologies, and machine learning/artificial intelligence, we currently have the data and tools needed to monitor and track nearly every field across the globe on a near daily basis. COVID-19 continues to touch nearly every aspect of our daily lives, and recent droughts, floods, supply chain issues, and conflict have devastated livelihoods and impacted agricultural production, leading to an unexpected relevance and urgency regarding the need for improved agricultural information, and serving to further highlight information gaps that satellite data can help fill. Understanding production prospects in near real time has never been more important in order to direct and prioritize early warning & proactive food security response and support well-functioning agricultural markets.
In 2016, in an effort to help address these challenges, NASA's Applied Sciences Program called for a new concerted effort and, for the first time, openly competed for a program on agriculture and food security. The NASA Harvest Consortium, led by the University of Maryland, was selected, and in November 2017 became NASA's official program on Agriculture and Food Security. It is a stakeholder-driven program, motivated by the fact that more timely and accurate agricultural information, as enabled by EO data and advancing technologies, can significantly enhance key agricultural decisions, whether by humanitarian organizations, governments, insurance companies, or farmers. It is run as a multi-sectoral consortium aimed at enabling and advancing the awareness, use, and adoption of satellite Earth observations by public and private organizations to sustainably benefit food security and agricultural resilience in the US and worldwide. The NASA Harvest Consortium comprises more than 50 members spanning the public, private, non-governmental, governmental, intergovernmental, and humanitarian sectors. The Consortium is led by researchers at the University of Maryland, which provides a hub for distribution partners and activities. Harvest is also NASA's contribution to the international GEOGLAM program, mandated by the G20 in 2011 to increase market transparency and improve food security, and builds on the partnerships and work established through GEOGLAM. The consortium model has the advantage of focusing multiple institutions and partner organizations on specific problems and tasks, with more agility than individual conventional research proposals.
NASA Harvest works at global, regional, national, and field levels in agricultural systems that range from subsistence to large-scale commodity production. The program has three impact areas: agricultural land use, sustainability, and productivity. It aims to improve these three areas by advancing the quality, availability and timeliness of EO-based products and methods in crop land and crop type mapping, crop condition monitoring, crop statistics generation, crop yield forecasting and estimation, and cropping practices characterization. To accomplish this, its program of activities is designed to advance the state of the science and the state of use through innovation in field data collection and sharing, public-private partnerships, open trans-disciplinary data platforms, data integration, data science and capacity development. This talk will provide an overview of the NASA Harvest program and will highlight examples of its work and impact across the agricultural markets, humanitarian and private sector domains.
1. Introduction and context
Under challenging and changing climatic conditions, food security is under pressure in a number of countries across the world, and in particular in Africa. Local food production can be reduced by erratic weather conditions, such as drought and flooding, but also by other threats such as locust swarms. Crop type maps and crop area estimates represent basic information of primary importance for crop monitoring. In addition, providing high-quality crop maps and area estimates can enhance food security, since stakeholders can act on this valuable information. The use of remote sensing techniques is ideal for rapid and affordable crop type mapping, since large areas are monitored with a spatial resolution and a spectral detail that are constantly improving. However, the production of reliable crop type maps requires expensive field campaigns to collect ground truth, specific remote sensing knowledge, and processing capacity for the timely production of the information. The use of such approaches has rapidly increased in recent years thanks to the availability of dense Sentinel-1 and Sentinel-2 time series and in situ data. However, there is still large potential in developing their operational use to derive crop type area statistics, through larger and more systematic in situ data collection to train machine learning algorithms. Agricultural monitoring systems in developing countries in particular would benefit from support for full access to, and exploitation of, innovative data and methods, and the GEOGLAM network is one way to coordinate these efforts.
Therefore, the main objective of the Copernicus service Copernicus4GEOGLAM is to strengthen the EU's support to the GEOGLAM initiative in developing countries and to respond to requests from countries for ad-hoc baseline crop monitoring information, including crop type mapping and area estimates during and at the end of the growing season.
During this first year of activity, the service targeted three Areas of Interest (AoI) of about 100,000 km² each, in Kenya, Tanzania and Uganda. Results of the mapping service for the first processed growing season are presented in this paper.
2. Field campaign
The objective of the field campaign is twofold: to provide training and validation data for the satellite image thematic classification, and to produce accurate and unbiased area estimates for the most important crops grown in the selected AoIs.
Therefore, a probabilistic sampling was applied, based on a stratified systematic random approach, to ensure that the collected data could be used directly to produce unbiased crop area estimates. On average, 300 to 400 Primary Sample Units (PSUs) were selected for each AoI. These PSUs were visually interpreted based on available VHR imagery from virtual globes and the latest Sentinel-2 imagery from the current growing season, to delineate field parcel boundaries and non-cropped land use. PSUs were then surveyed in the field by a team of enumerators using a smartphone app to collect and upload the data on a daily basis to a central server. The data collected were checked daily for any missing or erroneous information, so that immediate mitigating action could be taken if necessary.
More than 10,000 crop observations were collected for the selected PSUs and more than half of these sample units were covered with a mixed cropping pattern.
3. Satellite imagery classification
The data collected in the field were post-processed and split randomly into a training (75% of the data) and a validation (25%) dataset. For training, field boundaries were eroded and the smallest field parcels discarded, to avoid the inclusion of mixed pixels.
On average, more than 1,000 Sentinel-2 scenes and around 300 Sentinel-1 scenes were processed for each AoI to produce the monthly syntheses used as input to the classification process. A 45-day integration period was used to create the monthly syntheses of Sentinel-2 imagery, based on values interpolated over the duration of the growing season. Random Forest and TempCNN algorithms were tested, but no substantial improvement has been found so far from the deep learning approach.
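For illustration, the Random Forest step can be prototyped along the following lines with scikit-learn; the features, labels and 75/25 split are placeholders standing in for the monthly Sentinel-1/2 syntheses and the PSU survey labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

# X: per-pixel feature vectors stacked from the monthly syntheses
# (bands x months); y: crop-type labels from the PSU survey (placeholders).
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 60))
y = rng.integers(0, 10, size=5000)

split = rng.random(len(y)) < 0.75                  # 75/25 train/validation split
clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
clf.fit(X[split], y[split])
print(classification_report(y[~split], clf.predict(X[~split])))  # incl. F1 scores
```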
In-season crop mask and crop type maps were produced about one month after the completion of the field campaign (so-called in-season mapping), and end-of-season maps were produced one month after the end of the growing season. In total, 35 crop types and 10 non-crop land covers were recorded. Crop types were regrouped according to the main crop types, resulting in a thematic map of 9 to 10 crop type classes.
Independent observations from the field campaign were used to assess the accuracy of the maps produced. The overall accuracies of the crop masks range from 84 to 87%, and those of the crop type maps from 80 to 81%, for the end-of-season products, an improvement of 1 to 8% over the in-season products. The accuracies for some of the main crops are satisfactory, with F1 scores around 0.6-0.7 in some cases, but some crops still exhibit low accuracies, mainly due to very small parcel sizes (e.g. in Uganda) and mixed cropping patterns.
4. Crop area estimates
Crop area estimates can be derived directly from the field data alone using the so-called direct expansion method, as long as the data have been collected on a probabilistic sample or a suitable method can be used to correct any potential bias. Therefore, early area estimates (direct expansion estimators) can be provided as soon as the results from the field campaign have been collated and analysed, even before the classification of the satellite imagery.
Thanks to the probabilistic sampling approach, the estimated proportion (y) of class (c) and its variance can be calculated for each stratum. The total estimate corresponds to the weighted average of the proportions according to the area covered by each stratum, and the standard error for the whole area is the square root of the sum, over strata, of the variance times the square of the stratum area. However, the confidence interval of the direct expansion estimator is likely to be relatively large. To improve the precision of the estimates, field segment data can be combined with classified satellite imagery. In this latter case (i.e., using the classification map), a so-called regression estimator can be applied and its variance calculated. Area estimates obtained by such a procedure can differ considerably from pixel counts, because image classification is affected by misclassification errors. Area estimates derived from the regression estimator are corrected for misclassification errors while being more precise than the direct expansion estimate, thanks to the complete coverage provided by the image classification.
In summary: direct expansion estimates are unbiased but suffer from high sampling error; pixel counts from classified satellite imagery are biased but have no sampling error; and the combination of ground data and classified imagery is unbiased and exhibits a reduced sampling error. The efficiency of the regression estimator is measured by the relative efficiency, i.e. the ratio of the variance of the direct expansion estimate to that of the regression estimator.
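The two estimators can be sketched as follows; a single-stratum regression estimator is shown for brevity, while the operational computation follows the stratified formulas described above, and all inputs are placeholders.

```python
import numpy as np

def direct_expansion(p_psu, strata, areas):
    """Stratified direct-expansion crop area estimate and its standard error.
    p_psu: crop proportion observed in each PSU; strata: stratum id per PSU;
    areas: dict mapping stratum id -> stratum area (ha)."""
    total, var = 0.0, 0.0
    for h, a in areas.items():
        p = p_psu[strata == h]
        total += a * p.mean()
        var += a**2 * p.var(ddof=1) / len(p)   # variance of the stratum total
    return total, np.sqrt(var)

def regression_estimator(p_ground, p_map_psu, p_map_full):
    """Combine ground and map proportions (single stratum): unbiased like the
    direct estimate, with reduced variance when map and ground correlate."""
    b = np.polyfit(p_map_psu, p_ground, 1)[0]  # regression slope
    est = p_ground.mean() + b * (p_map_full - p_map_psu.mean())
    resid = p_ground - b * p_map_psu
    se = np.sqrt(resid.var(ddof=2) / len(p_ground))
    return est, se

# Relative efficiency: (se_direct / se_regression) ** 2
```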
Cropland represented between 25 and 30% of the AoIs, and the dominant crop was maize in all 3 AoIs, ranging from just under 500,000 ha in the AoI in Uganda to over 1.1 million ha in Kenya and Tanzania, with a 95% confidence interval of 75,000 ha achieved with the regression estimate in Uganda, and of 115,000 and 155,000 ha in Kenya and Tanzania, respectively. A relative efficiency equal to or greater than 2 was achieved for maize in all 3 AoIs, meaning that, to achieve the same level of uncertainty without the crop type map, twice as many PSUs would have been required. Similar or even better results were obtained for other crops.
5. Conclusions
This study shows that, despite the COVID-19 crisis, it was possible to collect detailed field data following a strict probabilistic sampling approach in Africa, and that the data could be used to train the classification of Sentinel imagery and produce reliable crop masks and crop type maps. Accurate crop area estimates could be produced before harvest with an already good level of accuracy. Combining field data with the maps can reduce the amount of field data needed to achieve precise crop area estimates, and crop type maps can provide useful information on the areas where crops are grown. However, the accuracy for some of the main crop types can be relatively low, mainly due to (i) small crop parcels, (ii) mixed cropping patterns and (iii) heterogeneous crop stage development across the AoI. In addition, even though the Sentinel-2 synthesis approach appears to be effective, further improvements may be achieved by finding synergies with Sentinel-1 and by integrating higher resolution imagery. The results described here can already be visualized in the Copernicus Global Land Hotspots explorer. Both the in situ data collected and the results produced by the presented approach will be made available in a fully free and open way.
Following the global food price hikes in 2007/08 and 2010/11, as part of the Action Plan on Food Price Volatility and Agriculture, the G20 Heads of State endorsed in their 2011 Declaration both the Group on Earth Observations Global Agricultural Monitoring (GEOGLAM) initiative and the Agricultural Market Information System (AMIS), committing to improve market information and transparency in order to make international markets for agricultural commodities more effective. To that end, AMIS was launched to improve information on markets, and GEOGLAM was created to coordinate satellite monitoring observation systems in different regions of the world in order to enhance crop production projections and weather forecasting data. Following the successful engagement of the different G20 countries, especially in the Americas, AMA was formally launched in 2018. Its main goals are to address the gaps in the region related to the use of remote sensing technologies in operational agricultural work. AMA is led and coordinated by the GEOGLAM Secretariat with NASA Applied Sciences support. Participation is voluntary and on a best-efforts basis, and all contributions are considered in-kind. Most of the methods used by AMA revolve around establishing processes and guidance toward developing capacities to “translate” the science into actionable information that is readily interpretable by a non-technical decision-making audience. AMA works to make sufficient EO data available for member usage. On a regular basis, AMA holds working group teleconferences, and it also coordinates and organizes regional meetings and training events aimed at increasing EO usage for agricultural monitoring. The objective of the presentation is to discuss the history of AMA, the current situation, and the challenges ahead in the region.
Meteorological services in developing countries hold significant volumes of station data, the most frequently available being rainfall, with air temperature (maximum and minimum) usually present in smaller numbers. On the other hand, satellite-derived rainfall estimates and other satellite indicators of vegetation and land surface temperature are ever more widely available. A few of these estimates (e.g. CHIRPS) already incorporate commonly available rain gauge data.
Regular reporting on the evolution of the rainfall season by national meteorological services mostly relies on station data, whether in the form of tables and/or interpolated station data and their anomalies. Use of satellite-derived rainfall information is less common, frequently relying on ready-made products from suppliers such as FEWS NET.
An optimum solution for National Meteorological Services is to integrate station data with satellite data to derive blended products that maximize the information available to these institutions and perform better than any of the individual components in isolation. In the case of rainfall, blending in station data corrects conditional biases in the satellite rainfall estimates, improving the representation of rainfall fields. We present here results for a number of countries showing improved performance of the blended estimates. Similar approaches can be applied for air temperature, using land surface temperature as a background interpolator.
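A deliberately simple residual-interpolation blend is sketched below to make the idea concrete; operational blending schemes typically use geostatistical interpolators rather than the inverse-distance weighting assumed here, and all arrays are placeholders.

```python
import numpy as np

def idw(xy_obs, values, xy_grid, power=2.0, eps=1e-6):
    """Inverse-distance weighting of station values onto (flattened) grid points."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

def blend_rainfall(sat_grid, grid_xy, stn_xy, stn_rain, sat_at_stn):
    """Bias-corrected blend: interpolate gauge-minus-satellite residuals and
    add them back to the satellite rainfall field."""
    residual = stn_rain - sat_at_stn
    correction = idw(stn_xy, residual, grid_xy)
    return np.maximum(sat_grid + correction, 0.0)   # rainfall cannot be negative
```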
The World Food Programme supports National Meteorological Services by strengthening their capacity for early warning and seasonal monitoring. NMSs are given access to the WFP cloud-based system for processing near-global EO data (rainfall estimates, MODIS NDVI and LST, snow cover), where they can upload their station data and download blended rainfall products (amounts at a variety of timescales and the respective anomalies), as well as blended air temperature data (with MODIS LST), NDVI and non-blended rainfall products (SPI, dry spells, number of rain days).
The availability of these products enables the production of regular, improved reports with better identification of drought episodes and extreme rainfall events. A full-featured web platform (PRISM) designed by WFP can be deployed for display and analysis of the satellite and blended products, and allows the integration of hazard indicators with socio-economic and food security data.
WFP has also funded data recovery initiatives that led to the mobilization of 40 years of rainfall and temperature records. Applying the blending algorithms to these datasets leads to national databases of gridded rainfall of very high quality, which can be used as reference data in early warning activities, for extensive climatological analysis of rainfall and temperature patterns, and for the assessment of sectoral climate risks.
Examples from various country systems (Mozambique, Namibia, Zimbabwe, Cuba, Sri Lanka) are presented, illustrating the benefits of well-coordinated collaboration with Meteorological Offices for high quality early warning and seasonal monitoring.
Crop yield forecasting is essential to ensure food security at national and international level. Studies conducted in different parts of the world underline that effective crop yield forecasting requires a thorough understanding of the factors that explain interannual variability of crop yield at regional scale. In large geographical areas such as the European Union (EU), the importance of these factors is likely to differ between regions, due to regional variation in growing conditions. However, this is still poorly understood in the case of forecasting wheat yield, despite the EU’s importance for global wheat production. Therefore, the objective of this study was to assess which environmental variables, derived from satellite and meteorological data, are the main factors that explain interannual variability of wheat yield at regional scale within the EU, and whether the relative importance of these factors differs between EU regions. In addition to differences between regions, we investigated whether the relative importance of these factors differed between months of the growing season.
For reference data, we used regional time series of soft and durum wheat yields, obtained from the national statistical institutes of the EU Member States. Meteorological data and crop biomass indicators were used as explanatory variables. Meteorological data, such as average daily temperature and daily rainfall, were obtained from the JRC-MARS weather database, which provides daily data from station observations interpolated to a 25x25 km grid. As an indicator of crop biomass, we extracted 10-day composites of NDVI (Normalized Difference Vegetation Index) from MODIS (Moderate-Resolution Imaging Spectroradiometer) imagery at 250 m spatial resolution. Both the meteorological and the NDVI variables were spatially averaged over the same administrative regions as the reference (crop yield) data. The regional time series of the explanatory variables were aggregated to monthly averages to obtain meteorological and remote sensing factors. Each of these was correlated with the wheat yield time series through a linear regression approach and then ranked, region by region, in terms of root mean square error (RMSE). This ranking was performed for every month of the growing season to assess whether the relative importance of the factors changed over time. Next, we used a hierarchical clustering approach to cluster regions with similar behaviour according to the main factors, describing their spatial distribution within the EU. Lastly, we analysed the robustness of the main factors of each cluster in terms of whether they accurately predicted wheat yield in years with significant yield loss.
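The per-region factor ranking and the clustering step could be prototyped as follows; the in-sample RMSE ranking and the Ward linkage with a fixed number of clusters are illustrative assumptions, not the study's exact configuration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def rank_factors(yields, factors):
    """Rank monthly factors for one region by linear-fit RMSE against yield.
    yields: (n_years,); factors: dict name -> (n_years,) monthly aggregate."""
    rmse = {}
    for name, x in factors.items():
        coef = np.polyfit(x, yields, 1)
        rmse[name] = np.sqrt(np.mean((np.polyval(coef, x) - yields) ** 2))
    return sorted(rmse, key=rmse.get)   # best-explaining factor first

# Cluster regions with similar behaviour: encode each region by its RMSE
# profile over all candidate factors, then cut a hierarchical tree.
region_profiles = np.random.default_rng(2).random((40, 12))   # placeholder
clusters = fcluster(linkage(region_profiles, method="ward"),
                    t=5, criterion="maxclust")
```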
Based on this analysis, we describe, in a consistent way for all the wheat producing regions of the EU, the main factors that explain interannual yield variability of soft and durum wheat at regional scale, and how the relative importance of these factors varies across the growing season. Secondly, with the clustering analysis, we identify regions with similar explanatory patterns that can be considered as a single unit for statistical analysis, thus providing a possible solution for small sample size in regional analysis. Finally, we show that the linear regression approach is not sufficiently robust in years with significant yield loss. These findings provide a valuable baseline for wheat yield forecasting at regional scale in the EU.
NASA Harvest is NASA's Food Security and Agriculture program. Its main objective is to enhance the use of satellite data in decision-making related to food security and agriculture. Within this context, one of the main priorities is providing valuable information on crop conditions and accurate and timely crop yield forecasts. This work presents the Agriculture Remotely-sensed Yield Algorithm (ARYA), a new EO-based empirical winter wheat yield forecasting model. The algorithm is based on the evolution of the Difference Vegetation Index (DVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 1 km resolution and Growing Degree Days (GDD) from MERRA-2 reanalysis data. Additionally, the model includes a correction for crop stress conditions, captured by the accumulated daily difference between the Land Surface Temperature (LST) from MODIS and the air temperature at the MODIS overpass time from MERRA-2. The model is calibrated at subnational level using historical yield statistics from 2001 to 2019. In each administrative unit, a different calibration (based on all possible combinations of the three regressors in a linear model) is selected, depending on the statistical significance of each variable. The model was applied to forecast national and subnational winter wheat yield in the United States, Ukraine, Russia, France, Germany, Argentina and Australia (over 70% of global wheat exports) from 2001 to 2019. The results show that ARYA provides yield estimates with 5-15% (0.3 ± 0.1 t/ha) error at national level and 7-20% (0.6 ± 0.1 t/ha) error at subnational level, starting from 2 to 2.5 months prior to harvest.
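A toy version of the per-unit calibration logic is sketched below; it selects among combinations of the three regressors by in-sample RMSE, as a simplified stand-in for the significance-based selection described above, and all inputs are placeholders.

```python
import itertools
import numpy as np

def fit_arya_like(y, regressors):
    """Select, per administrative unit, the best linear combination of the
    candidate regressors (e.g. DVI, GDD, LST-Tair stress term).
    y: (n_years,) yields; regressors: dict name -> (n_years,) series."""
    best, best_rmse = None, np.inf
    names = list(regressors)
    for k in range(1, len(names) + 1):
        for combo in itertools.combinations(names, k):
            X = np.column_stack([regressors[n] for n in combo] + [np.ones_like(y)])
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            rmse = np.sqrt(np.mean((X @ coef - y) ** 2))
            if rmse < best_rmse:
                best, best_rmse = (combo, coef), rmse
    return best, best_rmse
```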
Additionally, in this work we explore the applicability of ARYA at the within-field scale and how high-resolution data can help improve ARYA. To this end, we test the ARYA calibration equations with Sentinel-2 data and evaluate the results by forecasting within-field wheat yield measurements from harvester machines over more than 100 ha in Valladolid (Spain) during the 2020 and 2021 seasons.
Description:
Φ-lab is delighted to present the 'Meet the Next GenEO' session (Next Generation in Earth Observation). These young scientists, passionate about EO and enthusiastic about AI, will talk about what they do and about the challenges and rewards of working in the AI4EO field. LPS22 gathers an impressive young and skilled community from diverse EO professional paths (industry, academia, government, and NGOs); it is therefore a perfect event for networking and an opportunity to invite this young generation to join ESA's NextGenEO community. An Agora session fits the purpose perfectly by creating a friendly environment for all Next GenEO professionals to give a flavour of their AI4EO experience. The session will be shared by ESA and non-ESA NextGenEO panelists coming from a diverse and inclusive environment (gender, education, and organisation) to discuss together how to propagate the impact of AI4EO across all professional careers.
Speakers:
Maryam Pourshamsi - Earth Observation Scientist at ESA
Dominika Czyżewska - Remote Sensing Scientist at EUMETSAT
Erwan Rivault - Graphic Designer at BBC News
Dohyung Kim – Programme Specialist at Office of Global Innovation UNICEF NYHQ
Panel:
15:45-15:50 – Rochelle opens the session [in person]
15:50-16:10 – Maryam Pourshamsi [20min, 17 talk + 3 Q&A] [in person]
16:10-16:30 – Dominika Czyzewska [20min, 17 talk + 3 Q&A] [in person]
16:30-16:50 – Erwan Rivault [20min, 17 talk + 3 Q&A] [in person]
16:50-17:10 – Dohyung Kim [20min, 17 talk + 3 Q&A] – [REMOTE] dokim@unicef.org
17:10-17:15 – Rochelle closes the session [in person]
Company-Project:
DLR - CODE-DE
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
• Cloud-based JupyterLab instances provide an interactive development environment with a simple data processing interface to hosted data. Its flexible interface allows users to configure and arrange data processing workflows in a convenient manner. The Earth Observation platforms CODE-DE, EOLab and CreoDIAS run public Jupyter instances, which can be accessed freely by any user. In a practical and interactive classroom training, a set of simple data processing steps will be introduced interactively. These include Copernicus data access via S3, data import, sub-setting and image classification (a minimal S3 access sketch follows after this list).
• The modular design invites users to expand the functionality on their own.
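The S3 access step referred to above can be sketched with boto3 as follows; the endpoint URL, bucket name and object prefix are illustrative placeholders, and the actual values and credentials are documented by each platform.

```python
import boto3

# Endpoint, bucket and prefix below are placeholders - consult the platform
# documentation (e.g. CODE-DE/CreoDIAS) for the real S3 endpoint and keys.
s3 = boto3.client(
    "s3",
    endpoint_url="https://eodata.example-platform.eu",
    aws_access_key_id="YOUR_KEY",
    aws_secret_access_key="YOUR_SECRET",
)

# List Sentinel-2 L2A objects under an assumed date prefix ...
resp = s3.list_objects_v2(Bucket="EODATA", Prefix="Sentinel-2/MSI/L2A/2022/05/23/")
for obj in resp.get("Contents", [])[:10]:
    print(obj["Key"])

# ... and download one object for local sub-setting and classification.
s3.download_file("EODATA", resp["Contents"][0]["Key"], "scene_band.jp2")
```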
Company-Project:
GMATICS/Superalberi/University of Florence
Description:
• MAFIS has developed EO based biomass mapping in mountain areas by using SAR in C and L bands and leveraging various AI techniques. The new technique can provide timely quantitative information about biomass growth and losses due to natural and man-made disturbances (fires, windstorms, pests, clear and selection cutting, etc.).
• MAFIS is now assessing and quantifying the ecosystem services provided by trees and green areas in the urban environment, in relation to pollution filtering, improvement of thermal comfort, reduction of rainwater runoff, making cities more pleasant and liveable.
• Combining satellite derived information with tree talkers’ data, leveraging also data-cube technology, can improve the understanding of tree health status and derive the suitable treatment and maintenance actions for better tree management and a safer environment for citizens.
Speakers:
-Pietro Maroé
-Saverio Francini
Description:
openEO Platform aims to simplify scalable EO data processing and analytics in the cloud, abstracting the underlying complexities away from the user.
This user consultation invites all early adopters and users of openEO Platform to showcase community usage examples.
The objective is to gather user experience in order to understand whether openEO Platform is responding to community needs in the best possible way, and what its shortcomings are for science, industry and institutions.
Live demos, hands-on workshops and a best-ideas award will complement the event.
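To give a flavour of the abstraction, a short example with the openEO Python client is sketched below; the collection and band identifiers are illustrative and should be checked against the platform catalogue.

```python
import openeo

# Connect to the openEO Platform aggregator and authenticate (OIDC).
connection = openeo.connect("https://openeo.cloud").authenticate_oidc()

# Build a small processing graph: load a Sentinel-2 cube, compute NDVI and a
# temporal mean server-side, then download the result.
cube = connection.load_collection(
    "SENTINEL2_L2A",
    spatial_extent={"west": 11.0, "south": 46.0, "east": 11.2, "north": 46.2},
    temporal_extent=["2022-05-01", "2022-05-31"],
    bands=["B04", "B08"],
)
ndvi = cube.ndvi(nir="B08", red="B04")
result = ndvi.reduce_dimension(dimension="t", reducer="mean")
result.download("ndvi_may2022.tiff")
```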
Description:
The Copernicus Masters 2022 competition is now open for submissions. This international competition awards prizes to innovative solutions, developments and ideas for business and society that use satellite data from the Copernicus programme. Join the Agora event to learn more about this year's challenges from ESA, DLR, Portugal Space, BayWa and Up42, meet the winners of previous editions, and learn how to apply!
Speakers:
Ines Kühnert, Head of Innovation Competitions, AZO Anwendungszentrum GmbH Oberpfaffenhofen
Anna Burzykowska, Copernicus Innovation Officer, ESA
Gunter Schreier, Deputy Director German Remote Sensing Data Center (DFD), German Aerospace Center (DLR)
Carolina Sa, Earth Observation Projects Officer, Portugal Space
Alick Chisale Austin, NYASA Team
Tyler Rayner, Orbiter
Description:
Glaciers worldwide are retreating. In Europe, alpine glaciers have become an iconic feature of climate change – the shrinking cryosphere landscape on our doorstep.
Following a screening of the Gorner Glacier expedition – a documentary featuring ESA Astronaut Luca Parmitano – scientists recount their experiences both on and from deep within what is the second biggest ice mass in the Alps and discuss the changes taking place, the impacts and the role of satellite images provided by ESA, such as those coming from Copernicus.
Keynote:
Luca Parmitano: the view of glaciers on Earth and from space (motivation to visit Gorner and introduction to the documentary)
Panelists:
Luca Parmitano (ESA Astronaut)
Frank Paul (University of Zurich)
Anna Maria Trofaier (ESA)
Company-Project:
WASDI Sarl
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
WASDI allows EO experts to develop EO applications in their own environment and deploy them to the cloud with a simple drag and drop. We will introduce the features of the web platform and of the programming libraries. We will follow a tutorial to implement a new sample processor that will be deployed and tested directly online. We will see how to create the automatic user interface of our application. We will also discuss the different programming languages supported. The session is meant to be very interactive: any feedback, comment or suggestion will be welcome.
Attendees should have basic knowledge of Python.
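A minimal processor skeleton in the style of the WASDI Python tutorials might look as follows; the parameter name and configuration path are illustrative, and the exact entry points should be checked against the current wasdi library documentation.

```python
import wasdi

def run():
    # Parameters are defined per-processor (names here are placeholders).
    name = wasdi.getParameter("NAME", "world")
    wasdi.wasdiLog(f"Hello {name} from a sample WASDI processor")

if __name__ == "__main__":
    wasdi.init("./config.json")   # reads workspace and credential settings
    run()
```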
Company-Project:
Radiant.Earth - ESA EO Training Dataset Platform
*****FOR THIS SESSION BRING WITH YOU YOUR LAPTOP OR TABLET*****
Description:
• The goal of this training is to familiarise participants with the ML-ready training data on Radiant MLHub and with ways they can use these data to build an ML model. Participants will be guided through signing up for Radiant MLHub, searching for and discovering training datasets, and downloading training datasets using the Radiant MLHub Python client. Next, they will examine a dataset and then train a sample model on it. Afterwards, trainers will be available to answer any questions relating to Radiant MLHub or to provide guidance on usage.
To participate, attendees will need a Jupyter notebook server running locally with the radiant-mlhub and pytorch Python libraries installed.
Participants should consider the following resource in preparation of the classroom:
https://github.com/radiantearth/mlhub-tutorials/blob/main/notebooks/2022%20Living%20Planet%20Symposium%20Classroom%20Training/radiant-mlhub-lps-basic-download-and-training.ipynb
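For orientation, the download flow with the radiant-mlhub client typically looks like the sketch below; the dataset ID is illustrative, and an API key from the Radiant MLHub dashboard is assumed to be available.

```python
import os
from radiant_mlhub import Dataset

# An MLHub API key is expected in the environment (placeholder shown here).
os.environ.setdefault("MLHUB_API_KEY", "YOUR_API_KEY")

dataset = Dataset.fetch("ref_landcovernet_v1")   # illustrative dataset ID
print(dataset.title)
dataset.download(output_dir="./data")            # fetch the ML-ready assets
```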
Company-Project:
E-GEOS S.p.A - CLEOS
Description:
• The demonstration offers a live experience of CLEOS platform, showcasing the user/customer journey through CLEOS for searching, configuring and buying geospatial digital services
• During the demonstration, the audience will see CLEOS Marketplace and CLEOS Developer Portal in action, jointly working to solve a Use Case
• CLEOS Marketplace is the digital marketplace to access and buy satellite data and geospatial digital services
• CLEOS Developer Portal is the collaborative development environment to build, test and deploy geospatial processing pipelines and manage AI models.
To achieve wind profile coverage on a global scale, the Aeolus satellite, equipped with the Atmospheric LAser Doppler INstrument (ALADIN), was launched in 2018 by the European Space Agency (ESA) and has now been operated successfully for more than three years. The wind retrieved by Aeolus is obtained from the Doppler shift between the emitted and the detected laser light, arising from Rayleigh scattering from air molecules as well as from Mie scattering from particles (e.g. cloud droplets and ice crystals, dust, aerosols) in the atmosphere (Ingmann and Straume, 2016). The global wind profiles from Aeolus can serve various applications, including furthering the understanding of atmospheric dynamics, improving numerical weather prediction (NWP), tracking the movement of air pollutants, etc. (ESA, 2020; Banyard et al., 2021; Rennie et al., 2021).
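For reference, the basic relation behind the retrieval is the lidar Doppler equation, sketched here in LaTeX (the factor of two accounts for the round trip of the backscattered light):

```latex
\Delta f = \frac{2\, v_{\mathrm{LOS}}}{\lambda}
\qquad \Longleftrightarrow \qquad
v_{\mathrm{LOS}} = \frac{\lambda\, \Delta f}{2},
\quad \lambda = 355\,\mathrm{nm}
```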
To evaluate the contribution of Aeolus observations to NWP, experiments with and without Aeolus data assimilation were carried out with the European Centre for Medium-Range Weather Forecasts (ECMWF) model. The results of this so-called observing system experiment (OSE) demonstrated that Aeolus winds are able to improve medium-range vector wind and temperature forecasts, especially over tropical and polar regions (Rennie et al., 2021). However, the impact on vector wind forecasts within the planetary boundary layer (PBL) lacks detailed study. Moreover, applications for wind-related activities and industries, such as the wind energy industry, need further scientific investigation. Hence, as a starting point, this study aims at investigating the impact of Aeolus wind assimilation in the ECMWF model on wind forecasts within the PBL over Europe. The study is based on a high-resolution T639 OSE with 4D variational data assimilation for the June to December 2019 period, i.e., the early FM-B period.
First, we will compare the wind vectors at 10 m, 100 m, 950 hPa and 850 hPa from forecasts of the control experiment (no Aeolus) and of the experiment with Aeolus added, with the aim of identifying the impact of Aeolus on near-surface winds for different global regions and forecast ranges. Next, taking ground-based measurements as reference, including winds from conventional weather stations (Met Office, 2012), buoys (Met Office, 2006) and lidar sites, we will quantify the quality of the modelled winds with and without Aeolus data assimilation at different heights, thus determining whether the impact of Aeolus is positive or negative.
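One simple way to quantify the with/without-Aeolus comparison is a normalised change in vector-wind RMSE per level and forecast range, as sketched below; this is an illustrative metric assuming paired forecast and observation arrays, not necessarily the exact diagnostic used in the study.

```python
import numpy as np

def vector_wind_rmse(u_fc, v_fc, u_obs, v_obs):
    """RMS vector wind error of a forecast against reference observations."""
    return np.sqrt(np.mean((u_fc - u_obs) ** 2 + (v_fc - v_obs) ** 2))

def relative_impact(rmse_ctrl, rmse_aeolus):
    """Normalised error change: negative values mean that assimilating
    Aeolus improved the forecast at that level and forecast range."""
    return (rmse_aeolus - rmse_ctrl) / rmse_ctrl
```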
The research findings of this study have the potential to provide valuable information for our future work on wind energy applications. Particularly, the prospect of Aeolus winds for wind power prediction as well as offshore wind farm operation and maintenance can be addressed.
Keywords: Aeolus satellite; wind forecast; ECMWF; data assimilation.
Acknowledgement: This study is part of the PhD project Aeolus satellite lidar for wind mapping, a sub-project of the LIdar Knowledge Europe (LIKE) Innovative Training Network (ITN), a Marie Skłodowska-Curie Action funded by the European Union's Horizon 2020 programme (Grant number: 858358). The authors are very thankful to ECMWF for conducting the OSE and providing the data for analysis. Appreciation also goes to the Royal Netherlands Meteorological Institute (KNMI) for hosting PhD student Haichen Zuo for an external research stay.
References:
Banyard, T.P., Wright, C.J., Hindley, N.P., Halloran, G., Krisch, I., Kaifler, B. and Hoffmann, L. (2021) ‘Atmospheric Gravity Waves in Aeolus Wind Lidar Observations’, Geophysical Research Letters, 48(10). doi:10.1029/2021GL092756.
ESA (2020) Satellites track unusual Saharan dust plume, The European Space Agency. Available at: https://www.esa.int/Applications/Observing_the_Earth/Satellites_track_unusual_Saharan_dust_plume (Accessed: 28 August 2021).
Ingmann, P. and Straume, A.G. (2016) ‘ADM-Aeolus Mission Requirements Document’. ESA. Available at: https://esamultimedia.esa.int/docs/EarthObservation/ADM-Aeolus_MRD.pdf (Accessed: 22 December 2020).
Met Office (2006) ‘MIDAS: Global Marine Meteorological Observations Data’. NCAS British Atmospheric Data Centre. Available at: https://catalogue.ceda.ac.uk/uuid/77910bcec71c820d4c92f40d3ed3f249 (Accessed: 22 November 2021).
Met Office (2012) ‘Met Office Integrated Data Archive System (MIDAS) Land and Marine Surface Stations Data (1853-current)’. NCAS British Atmospheric Data Centre. Available at: http://catalogue.ceda.ac.uk/uuid/220a65615218d5c9cc9e4785a3234bd0 (Accessed: 21 November 2021).
Rennie, M.P., Isaksen, L., Weiler, F., Kloe, J., Kanitz, T. and Reitebuch, O. (2021) ‘The impact of Aeolus wind retrievals on ECMWF global weather forecasts’, Quarterly Journal of the Royal Meteorological Society, 147(740), pp. 3555–3586. doi:10.1002/qj.4142.
During the last decade, new applications exploiting data from satellite-borne lidar measurements have demonstrated that these sensors can give valuable information about ocean optical properties. Within this framework, COLOR (CDOM-proxy retrieval from aeOLus ObseRvations) is an on-going (KO: 10/3/2021) 18-month feasibility study approved by ESA within the Aeolus+ Innovation program. The COLOR objective is to evaluate and document the feasibility of deriving an in-water AEOLUS prototype product from the analysis of the ocean sub-surface backscattered component of the 355 nm signal.
Although the Aeolus mission's primary objectives and the resulting instrumental and sampling characteristics are not ideal for monitoring ocean sub-surface properties, this unprecedented type of measurement is expected to contain important and original information on the optical properties of the sensed ocean volume. Being the first HSRL (High Spectral Resolution Lidar) launched into space, ALADIN (Atmospheric LAser Doppler INstrument) on ADM-Aeolus gives a new opportunity to investigate the information content of the 355 nm signal backscattered by the ocean sub-surface components. Based on these considerations, the COLOR project focuses on the potential AEOLUS retrieval of: 1) the diffuse attenuation coefficient for downwelling irradiance (Kd [m−1]); and 2) the sub-surface hemispheric particulate backscatter coefficient (bbp [m−1]).
To reach the COLOR objectives, the work is organized in three phases: consolidation of the scientific requirements; implementation and assessment of the AEOLUS COLOR prototype product; and a scientific roadmap.
The core activity of the project is the characterization of the signal from the AEOLUS ground bin (Δrgrd). In principle, the ground bin backscattered radiation signal is generated by the interaction of the emitted laser pulse radiation with two media (atmosphere and ocean, Bgrd_atm and Bgrd_wat, respectively) and their interface (Bgrd_surf).
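Written out, this decomposition is Bgrd = Bgrd_atm + Bgrd_surf + Bgrd_wat, so the in-water term is the residual once the other two are modelled. The toy sketch below makes that explicit; the numbers are placeholders, not COLOR project results.

```python
# Toy illustration of the ground-bin decomposition described above; the
# numbers are placeholders, not COLOR project results.
def in_water_signal(B_grd, B_grd_atm, B_grd_surf):
    """Residual in-water contribution after removing modelled atmospheric
    and sea-surface terms from the measured ground-bin signal."""
    return B_grd - B_grd_atm - B_grd_surf

B_grd_wat = in_water_signal(B_grd=1.00, B_grd_atm=0.62, B_grd_surf=0.25)
print(f"estimated in-water contribution: {B_grd_wat:.2f} (arbitrary units)")
```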
To evaluate the feasibility of an AEOLUS in-water product, COLOR proposes to develop a retrieval algorithm that is structured in three independent and consecutive phases:
1) Pre-processing analysis: aimed to identify suitable measurements to be inverted;
2) Estimation of the in-water ground bin signal contribution: aimed to remove contributions to the measured signal from variables other than the in-water ones;
3) Retrieval of in-water ground bin optical properties: aimed to estimate the targeted in-water optical properties.
Two parallel and strongly interacting activities are associated with each step of these phases:
a) Radiative transfer numerical modelling. This tool will be essential to simulate the relevant radiative processes expected to be responsible for the generation of the AEOLUS surface bin signal.
b) AEOLUS data analysis. The objective of this activity will be to verify the information content of the AEOLUS ground bin signals and the assumptions for data product retrieval.
The potential AEOLUS in-water product will then be validated through the comparison of statistical properties obtained by analyzing the whole set of AEOLUS data (at least one year of processed measurements) and the selected reference datasets: Biogeochemical-Argo floats, oceanographic cruises and ocean-colour satellites.
Preliminary results from the above-mentioned activities will be presented here. In particular, the sea-surface backscattering and the in-water contribution of the AEOLUS ground bin have been estimated through numerical modelling. Furthermore, the preliminary experimental data analysis suggests that the observed excess of signal in the AEOLUS ground bin could be related to the signal coming from the marine layers. Analyses are planned in the second phase of these activities to disentangle the atmospheric and oceanic signal contributions in the AEOLUS ground bin.
Aeolus is the first Doppler wind lidar (DWL) in space to measure wind profiles. Aeolus is an ESA (European Space Agency) Earth Explorer mission with the objective of retrieving winds from the collected atmospheric return signal, which results from Mie and Rayleigh scattering of the emitted laser light by atmospheric molecules and particulates. The focus of this contribution is on winds retrieved from data collected by the instrument's Mie channel, i.e., originating from Mie scattering by atmospheric aerosols and clouds.
The use of simulated data from Numerical Weather Prediction (NWP) models is a widely accepted and proven concept for monitoring the performance of many meteorological instruments, including Aeolus. Continuous monitoring of Aeolus Mie channel winds against ECMWF model winds has revealed systematic errors in retrieved Mie winds. Following a reverse-engineering approach, the systematic errors could be traced back to imperfections in the calibration tables that serve as input for the on-ground wind processing algorithms.
A new methodology, denoted NWP calibration, makes use of NWP model winds to generate an updated calibration table. It is shown that Mie winds retrieved using the NWP-based calibration tables exhibit reduced systematic errors, not only when compared to NWP model winds but also when compared to an independent dataset of very high-resolution aircraft wind data. The latter gives high confidence that the NWP-based calibration methodology does not introduce model-related errors into retrieved Aeolus Mie winds. Based on the results presented in this paper, the NWP-based calibration table has been part of the operational level-2B wind processing chain since 1 July 2021.
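Conceptually, such a calibration maps a measured Mie channel response to a Doppler frequency shift, Δf = 2v/λ. The sketch below fits a response-to-frequency relation from collocated NWP winds; it is a toy illustration under our own assumptions (polynomial form, synthetic data), not the operational L2B scheme.

```python
# Conceptual sketch of an NWP-based response calibration, not the
# operational L2B implementation: collocated NWP HLOS winds are converted
# to Doppler shifts and a smooth response-to-frequency relation is fitted.
# The polynomial form and the synthetic data are our own assumptions.
import numpy as np

WAVELENGTH = 354.8e-9  # ALADIN UV wavelength [m]

def doppler_shift(v_hlos):
    """Two-way Doppler frequency shift [Hz] for an HLOS wind [m/s]."""
    return 2.0 * v_hlos / WAVELENGTH

def fit_calibration(mie_response, v_hlos_nwp, order=3):
    """Fit Doppler shift as a polynomial of the measured Mie response."""
    return np.polynomial.Polynomial.fit(
        mie_response, doppler_shift(v_hlos_nwp), deg=order)

resp = np.linspace(-20, 20, 41)           # synthetic response [pixels]
v_nwp = 0.9 * resp + 0.02 * resp ** 2     # synthetic nonlinear truth [m/s]
poly = fit_calibration(resp, v_nwp)
v_cal = poly(resp) * WAVELENGTH / 2.0     # invert shift back to wind
print(f"max calibration residual: {np.abs(v_cal - v_nwp).max():.3e} m/s")
```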
Clouds play an important role in the energy budget of our planet: optically thick clouds reflect incoming solar radiation, cooling the Earth, while thinner clouds act as “greenhouse films”, preventing the escape of the Earth's long-wave radiation to space. The cloud response to ongoing greenhouse-gas-driven climate warming is the largest source of uncertainty in model-based estimates of climate sensitivity and therefore in predicting the evolution of the future climate. Understanding the Earth's energy budget requires knowing the cloud coverage, its vertical distribution and its optical properties. Predicting how the Earth's climate will evolve requires understanding how these cloud variables respond to climate warming. Documenting how the clouds' detailed vertical structure evolves on a global scale over the long term is therefore a necessary step towards understanding and predicting the cloud response to climate warming.
Satellite observations have been providing a continuous survey of clouds over the whole globe. Infrared sounders have been observing our planet since 1979. Despite an excellent daily coverage and daytime/nighttime observation capability, the height uncertainty of the cloud products retrieved from the observations performed by these space-borne instruments is large. This precludes the retrieval of the cloud’s vertical profile with the accuracy needed for climate relevant processes and feedback analysis. This drawback does not exist for active sounders, which measure the altitude-resolved profiles of backscattered radiation with an accuracy on the order of 1−100 meters.
All active instruments share the same measuring principle: a short pulse of laser or radar electromagnetic radiation is sent into the atmosphere, and the time-resolved backscatter signal is collected by the telescope and registered in one or several receiver channels. However, the wavelength, pulse energy, pulse repetition frequency (PRF), telescope diameter, orbit, detector, and optical filtering are not the same for any pair of instruments. These differences define the active instruments' capability of detecting atmospheric aerosols and/or clouds for a given atmospheric situation and observation conditions (day, night, averaging distance). At the same time, there is an obvious need to ensure the continuity of global space-borne lidar measurements (see Fig. 1 for an illustration of the currently operating lidars CALIOP and ALADIN and the future lidar ATLID). In merging different satellite data, the difficulty is to build a multi-lidar record accurate enough to constrain predictions of how clouds evolve as the climate warms.
In this work, we discuss the approach to merging the measurements performed by the relatively young space-borne lidar ALADIN/Aeolus, which has been orbiting the Earth since August 2018 and operates at 355nm, with the measurements performed since 2006 by the CALIOP lidar on CALIPSO, which operates at 532nm and is near the end of its lifetime. Even though the primary goal of ALADIN is wind detection, its products include profiles of atmospheric optical properties (aerosols/clouds). As mentioned before, merging the cloud data from a pair of spaceborne lidars is not trivial (see Fig. 2 for differences in observation geometry and local time, and consider the differences in wavelength, detector, and measuring techniques).
The planned study consists of the following steps:
(a) developing a cloud layer detection method for ALADIN measurements, which complies with CALIPSO cloud layer detection;
(b) comparing/validating the resulting cloud ALADIN product with the well-established CALIOP/CALIPSO cloud data set;
(c) developing an algorithm for merging the CALIOP and ALADIN cloud datasets;
(d) applying the merging algorithm to CALIOP and ALADIN data and building a continuous cloud profile record;
(e) adapting this approach to future missions (e.g. ATLID/EarthCare).
In the presentation, we show the results of preliminary analysis performed for the first two steps and discuss the future development of this approach.
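As a minimal illustration of step (a), the sketch below flags cloud layers where attenuated backscatter exceeds a modelled molecular background. Real feature masks (including CALIOP's and the one developed here) involve noise modelling and multi-scale averaging; all values below are invented.

```python
# Minimal illustration of step (a): flag range bins whose attenuated
# backscatter exceeds a modelled molecular background. Real feature masks
# involve noise modelling and multi-scale averaging; values are invented.
import numpy as np

def cloud_mask(atb, atb_mol, threshold=3.0):
    """Boolean cloud mask per range bin from attenuated backscatter `atb`
    and a modelled molecular backscatter profile `atb_mol`."""
    return atb > threshold * atb_mol

z = np.linspace(0, 20, 81)                 # altitude grid [km]
atb_mol = 1e-3 * np.exp(-z / 8.0)          # toy molecular profile
atb = atb_mol.copy()
atb[(z > 9) & (z < 11)] *= 12.0            # inject a cloud layer
print("cloudy bins at [km]:", z[cloud_mask(atb, atb_mol)])
```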
As part of the Joint Aeolus Tropical Atlantic Campaign (JATAC), radiosondes were launched twice a day from Sal Airport in Cape Verde over a period of 26 days, from 04 to 30 September 2021. Among a total of 38 launches, 10 correspond to nearby Aeolus overpasses. Most of the data were sent to the Global Telecommunication System (GTS) for assimilation in NWP models. The radiosonde temperature, humidity and wind profiles reveal three different dust outbreaks as well as the passage of tropical cyclones that crossed Sal Island. The 12 radiosonde profiles were vertically aggregated and projected along the horizontal line of sight (HLOS) of Aeolus and compared with the Aeolus measurements, with a collocation criterion ranging from 120 km to 220 km, depending on the orbital node. Error rejection thresholds are identical to those used at ECMWF: the threshold for Mie-cloudy winds is 5 m s−1, and for Rayleigh winds it is 12 m s−1 above 200 hPa and 8.6 m s−1 below 200 hPa. The radiosonde validation of the Aeolus winds revealed that the quality of the data is closely related to the atmospheric cloud and dust conditions, with the Rayleigh-clear wind values showing larger errors in the presence of aerosols or clouds, which can possibly be attributed to the decreasing atmospheric path signal and attenuation effects of clouds/aerosols. Rayleigh winds have a systematic error (bias) of 0.71 m s−1 and a random deviation of 4.48 m s−1 (scaled Mean Absolute Deviation). Mie-cloudy winds were more accurate, with a systematic error of 0.71 m s−1 and a random deviation of 1.9 m s−1. The statistics obtained from the radiosonde comparisons show lower systematic errors but similar random errors compared to other CAL/VAL studies. Both the Rayleigh and Mie channels meet the Aeolus mission requirement of a systematic error of less than 0.7 m s−1, but the random errors are still higher than required.
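The projection onto the HLOS mentioned above follows from simple geometry. The sketch below uses one common sign convention (HLOS azimuth clockwise from north, wind positive away from the satellite); the actual convention should be checked against the L2B product definition.

```python
# Geometry of the radiosonde-to-Aeolus comparison: project (u, v) onto the
# horizontal line of sight. The sign/azimuth convention below (azimuth
# clockwise from north, wind positive away from the satellite) is one
# common choice and should be checked against the L2B product definition.
import numpy as np

def hlos_wind(u, v, azimuth_deg):
    """HLOS wind [m/s] from zonal u, meridional v and HLOS azimuth [deg]."""
    az = np.deg2rad(azimuth_deg)
    return -u * np.sin(az) - v * np.cos(az)

# Example: a 10 m/s westerly viewed at an azimuth of 260 degrees
print(f"HLOS wind: {hlos_wind(u=10.0, v=0.0, azimuth_deg=260.0):+.2f} m/s")
```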
Optical properties of Californian aging smoke plume retrieved by Aeolus L2A algorithms during long-range transport above Atlantic
Dimitri Trapon¹, Adrien Lacour¹, Alain Dabas¹, Ibrahim Seck¹, Holger Baars², Frithjof Ehlers³, Dorit Huber⁴, ¹CNRM / Meteo France, France, ²Leibniz Institute for Tropospheric Research, Leipzig, Germany, ³ESA-ESTEC, Keplerlaan 1, 2201 AZ Noordwijk, Netherlands; ⁴DoRIT, Munich, Germany.
The European Space Agency's ADM-Aeolus satellite has been operating the Doppler lidar ALADIN (Atmospheric LAser Doppler INstrument) since 2018. It is the first High Spectral Resolution Lidar (HSRL) in space operating in the ultraviolet (λ = 354.8 nm). Despite being designed for wind profile measurement, the Rayleigh and Mie channels of ALADIN can be used to directly retrieve the particle-only co-polar extinction and backscatter coefficients. Elevated aerosol layers such as the Saharan Air Layer (SAL), Polar Stratospheric Clouds (PSC) or Biomass Burning (BB) smoke can then be observed using the L2A Aerosol and Optical Properties product.
In early September 2020, massive smoke plumes from Californian wildfires were transported east across the United States and the Atlantic Ocean and observed by various instruments such as Copernicus Sentinel-5P TROPOMI. Smoke residuals were also compared to ground-based lidars over western Europe, confirming the long-range transport across the Atlantic. Aeolus directly observed the Californian smoke on several orbits over one week. The presentation will show the output of the main L2A algorithms and underline observations made on the smoke plume's optical characteristics. The co-polar extinction and backscatter coefficients calculated by the Standard Correct Algorithm (SCA) [1] are analysed in parallel with denoised retrievals given by a newly developed scheme based on physically constrained minimization, named Maximum Likelihood Estimation (MLE) [2]. The particle attenuated backscatter will also be illustrated and compared to the NASA CALIPSO/CALIOP product, as will the depolarization ratio, illustrating the role of Black Carbon (BC) and Ice Nucleating Particles (INPs) in fresh smoke as the plume reaches the upper troposphere and becomes contaminated by ice crystals and water droplets.
[1] Flament, T., et al., Aeolus L2A Aerosol Optical Properties Product: Standard Correct Algorithm and Mie Correct Algorithm, Atmos. Meas. Tech. Discuss. [preprint], https://doi.org/10.5194/amt-2021-181, in review, 2021.
[2] Ehlers, F, et al., Optimization of Aeolus Optical Properties Products by Maximum-Likelihood Estimation, Atmos. Meas. Tech. Discuss. [preprint], https://doi.org/10.5194/amt-2021-212, in review, 2021.
During the first three years of the Aeolus mission, the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt e.V., DLR) prepared, implemented and executed four airborne validation campaigns with the support from ESA. After having performed three campaigns in Central Europe and the North Atlantic region around Iceland in 2018 and 2019, DLR carried out the Aeolus VAlidation Through Airborne LidaRs in the Tropics (AVATART) on Sal Island, Cape Verde, in September 2021, as a part of the Joint Aeolus Tropical Atlantic Campaign (JATAC). Already in the 10 years before launch, various ground and airborne validation campaigns had been performed in support of the Aeolus operations and processor development.
These campaigns deployed two lidar instruments: a scanning, high-accuracy coherent Doppler wind lidar (DWL) system operating at 2-µm wavelength, which acted as a reference by providing wind vector profiles for the Aeolus validation, and the ALADIN Airborne Demonstrator (A2D). Being the prototype of the direct-detection DWL ALADIN (Atmospheric LAser Doppler INstrument) on board Aeolus, the A2D likewise consists of a frequency-stabilised ultra-violet laser, a Cassegrain telescope and the same dual-channel optical and electronic receiver design to measure wind speeds along the instrument's line of sight by analysing particulate (Mie) and molecular (Rayleigh) backscatter signals. The combination of both DWLs made it possible to explore the ALADIN-specific wind measurement technology under various atmospheric backscatter signal conditions. For the airborne measurements, the two instruments were operated in parallel on the DLR Falcon research aircraft in a downward-looking configuration.
In the framework of the post-launch airborne validation campaigns, a total of 190 flight hours was spent covering a distance of 26,000 km along the Aeolus track during 31 coordinated underflights in different geographical regions and operational states of the mission. In the tropics, where Aeolus measurements are especially important for improving numerical weather prediction, AVATART contributed 11 flights, covering nearly 11,000 km of the satellite measurement swath around the Cape Verde archipelago. The latter was chosen as a base as it allowed observation of the tropical dynamics of the Saharan air layer, the African Easterly Jet, the Subtropical Jet and the Intertropical Convergence Zone. In the context of JATAC, the campaign aimed to study the impact of atmospheric aerosol on the operational Rayleigh and Mie wind products of Aeolus, specifically the potential errors that arise from crosstalk between the two complementary receiver channels.
Thanks to the high degree of commonality with the satellite instrument in terms of design and measurement principle, the collocated A2D and 2-µm wind observations acquired during the campaigns provide valuable information on the optimization of the Aeolus wind retrieval and related quality control algorithms. For example, during JATAC the A2D, unlike ALADIN, delivered a broad vertical and horizontal coverage of Mie winds across the Saharan air layer, whereas A2D Rayleigh winds measured in this region, which are affected by Mie contamination through crosstalk, are effectively filtered out. Hence, refinement of the Aeolus wind processor based on the example of the A2D wind retrieval may improve the Aeolus wind data coverage and accuracy.
The paper gives an overview of mission-relevant results, from pre-launch campaigns to comparative wind observations between Aeolus and the DLR airborne DWLs, with a focus on recent findings. With Aeolus-2 currently being prepared as a satellite proposal by ESA and EUMETSAT, successful support of that mission with an airborne demonstrator can build on this heritage and on an accordingly modified second-generation A2D.
The convective system (CS) is one of the main dynamical meteorological features over the tropical and subtropical zones. Some CS types, including mesoscale CSs and supercell convective storms, may be classified as natural hazards since they produce extreme weather events like heavy rainfall, strong surface wind speeds, and intense lightning. These hazards can significantly impact human life and economic activities. Heavy rain in a short time over some regions may cause severe flash floods, while intense lightning may kill people and damage infrastructure. Likewise, strong surface wind speeds may significantly impact many onshore, offshore, and coastal activities such as energy production (more and more related to wind power), marine transportation, and all types of aeronautic activities.
The observation and nowcasting of deep convection have been significantly improved in recent years, thanks to the GEOstationary (GEO) satellites, including Meteosat, GOES, Himawari, and Gaofen, covering Europe, Africa, America, and Asia-Pacific, respectively. In particular, the new GEO generation can image large regions at high temporal and spatial resolution (e.g., 0.5-2 km pixel spacing every 5 minutes). However, the observation and prediction of the extreme weather events associated with deep convection is still a big challenge due to the lack of additional in-situ and/or remote sensing data to describe the CS dynamics. This is particularly significant over the Gulf of Guinea, in contrast to the Gulf of Mexico, as there are very few moored buoys, radio-soundings, or weather stations to observe the CS vertical dynamics associated with the observed surface convective wind gusts.
Some previous studies [1-3] indicated that collocated GEOstationary and Low-Earth Orbit (LEO) satellite data can be used to observe deep convective clouds and the associated vertical and horizontal dynamics. Figure 1 illustrates the observation of a deep convective cloud at the tropopause altitude by the Meteosat GEO satellite, intense downdrafts at mid-levels (with corresponding updrafts balancing the CS internal dynamics) by the Aeolus lidar LEO satellite, and a surface wind pattern at the sea surface by the Sentinel-1 (Synthetic Aperture Radar) LEO satellite. The results in [2-3] showed that the three features observed by Meteosat, Aeolus, and Sentinel-1 matched in location and observation time. In particular, the wind hot spots (15-25 m/s) correspond to the coldest cloud patterns (200-210 K brightness temperature) and intense downdrafts.
The collocation of GEO and LEO satellites offers a significant advantage for a deeper understanding of the relationship between deep convective clouds and dynamics within the ITCZ area, particularly over the Gulf of Guinea or in the middle Tropical Atlantic, where there are few observations. In addition, such collocation should make it possible to combine GEO and LEO data so that they can be assimilated into Numerical Weather Prediction (NWP) models as a single feature rather than as a collection of separate satellite data.
Moreover, GEO images may be used as input data combined with Machine Learning / Deep Learning to predict surface wind gusts. The LEO data, including Sentinel-1, SMAP, ASCAT, WindSat, Aeolus, etc., should then be used for training and validating the learning models.
[1] T. V. La, C. Messager, M. Honnorat, R. Sahl, A. Khenchaf, C. Channelliere, and P. Lattes, “Use of Sentinel-1 C-Band SAR Images for Convective System Surface Wind Pattern Detection,” J. Appl. Meteor. Climatol., vol. 59, no. 8, pp. 1321–1332, Aug. 2020.
[2] T. V. La and C. Messager, “Convective system dynamics viewed in 3D over the oceans,” Geophysical Research Letters, vol. 48, pp. e2021GL092397, Feb. 2021.
[3] T. V. La and C. Messager, "Convective System Observations by LEO and GEO Satellites in Combination," IEEE J STARS, doi: 10.1109/JSTARS.2021.3127401.
The resolution of regional numerical weather prediction (NWP) models has continuously been increased over the past decades, in part thanks to improved computational capabilities. At such small scales, the fast weather evolution is driven by wind rather than by temperature and pressure. Over the ocean, where global NWP models are not able to resolve wind scales below 100-150 km, regional models provide wind dynamics and variance equivalent to 25 km or lower. However, although this variance is realistic, it often results in spurious circulations (e.g., moist convection systems), thus misleading weather forecasts and their interpretation. An accurate and consistent initialization of the evolution of the three-dimensional (3-D) wind structure is therefore essential in regional weather analysis. The present study is carried out in the framework of the EUMETSAT research fellowship project WIND-4D, which focuses on a comprehensive characterization of the spatial scales and measurement errors of the different operational space-borne wind products currently used and/or planned to be used in regional models. In addition, a thorough investigation and improvement of the 4-D (including time) consistency between different horizontal and/or vertical satellite wind products will be carried out. Such products include the Ocean and Sea Ice Satellite Application Facility (OSI SAF) scatterometer-derived sea-surface wind fields, the Nowcasting and Very Short-Range Forecasting (NWC) SAF Atmospheric Motion Vectors (AMVs), and Aeolus and/or Infrared Atmospheric Sounding Interferometer (IASI) wind profiles. Densely sampled aircraft wind profiles (Mode-S) will be used to verify and characterize the satellite products. Moreover, data assimilation experiments with the consistent datasets in the HARMONIE-AROME regional model will be carried out in two different regions, i.e., the Netherlands and the Iberian Peninsula regional configurations.
Regarding the characterization of the spatial scales and measurement errors, the widely used triple collocation (TC) analysis is further developed and adapted for the purpose of this project.
After testing the TC analysis on surface winds using scatterometer measurements over the ocean, buoy observations and NWP output, we extend the analysis to vertical wind profiles, more specifically to Aeolus, Mode-S and NWP output. Aeolus winds are collocated with Mode-S observations and ECMWF model output during a period of 6 months over the Mode-S domain in Western Europe. The spectral integration method and the spatial variances method are used to estimate the representativeness errors of the collocated data sets. The TC analysis is then exploited to characterize the errors of the different sources at different scales. The analysis is performed at different altitudes for both the Mie and the Rayleigh channels.
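For reference, the core of the TC estimator can be written compactly: with three collocated, calibrated datasets measuring the same truth with mutually independent errors, each system's error variance follows from pairwise covariances. The sketch below is a toy version with synthetic data and ignores the representativeness errors treated explicitly in the study.

```python
# Toy triple collocation: for three collocated, bias-corrected datasets
# with mutually independent errors, each error variance follows from
# pairwise covariances. Representativeness errors, treated explicitly in
# the study, are ignored here; the data are synthetic.
import numpy as np

def triple_collocation(x1, x2, x3):
    """Error variances of three collocated, calibrated datasets."""
    x1, x2, x3 = (x - x.mean() for x in (x1, x2, x3))  # remove biases
    var1 = np.mean((x1 - x2) * (x1 - x3))
    var2 = np.mean((x2 - x1) * (x2 - x3))
    var3 = np.mean((x3 - x1) * (x3 - x2))
    return var1, var2, var3

rng = np.random.default_rng(1)
truth = rng.normal(0, 5, 100_000)                  # "true" HLOS wind
sat = truth + rng.normal(0, 4.0, truth.size)       # e.g. Aeolus-like noise
air = truth + rng.normal(0, 1.5, truth.size)       # e.g. Mode-S-like noise
nwp = truth + rng.normal(0, 2.0, truth.size)       # e.g. model-like noise
print([f"{np.sqrt(v):.2f} m/s" for v in triple_collocation(sat, air, nwp)])
```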
Finally, experimental 4D wind data from IASI are analyzed to evaluate their possible use as an additional source of wind observations for regional data assimilation.
Improvements in recent years to aerosol observation have enabled some weather forecasting centres to offer projections of aerosol fields up to five days in advance. These data, primarily aerosol optical depth (AOD) measurements, are used at ECMWF in the Copernicus Atmospheric Monitoring Service (CAMS) for atmospheric composition forecasts.
Recent research efforts in aerosol assimilation have shown that lidar backscatter data have the potential to augment AOD information. Lidar backscatter allows profiling of aerosols, leading to improvements in identifying the vertical structure of aerosol fields and thus better forecasts of plumes. The ALADIN instrument onboard ESA's Aeolus mission was primarily designed for measuring wind, not for aerosol observations. Although it is not optimised for aerosol science, assimilation of particle backscatter is possible, and with some screening, information on the aerosol vertical profile can be gleaned through post-processing. This extraction of aerosol information additionally serves as precursor work for the EarthCARE mission, which, like Aeolus, will host a UV-wavelength lidar operating at 355nm, called ATLID. Unlike ALADIN, the design of ATLID is optimised to provide vertical profiles of aerosols and thin clouds. ATLID will also be equipped with a depolarisation channel, further strengthening its ability to return information on the aerosol vertical structure.
This talk will present the latest work from ECMWF's contribution to the Aeolus Aerosol Assimilation in the DISC (A3D) contract. This project is a continuation of previous work carried out at ECMWF, the A3S project, which successfully demonstrated the feasibility of assimilating the lidar backscatter signal at 355nm using demonstration datasets. We will discuss findings on the quality of Aeolus L2A particle backscatter data in the framework of atmospheric aerosol monitoring (such as that provided by CAMS) and its assimilation into composition forecasting experiments. The verification of these experiments with independently measured lidar profiles and other aerosol observations, such as ground-based AERONET AOD, will be presented.
VirES for Aeolus (https://aeolus.services) provides a highly interactive data manipulation and retrieval web interface for the official Aeolus data products. It includes multi-dimensional visualization, interactive plotting, and analysis. VirES stands for a modern concept of extended access to Earth Observation (EO) data. It supports novel ways of data discovery, visualization, filtering, selection, analysis, snapshotting, and downloading.
The service has been operated for ESA by EOX since the satellite launch in 2018 providing easy insight and analysis of the data. During this time the service has been evolving based on user feedback and keeping up with the science done around the mission.
Although the VirES service provides great insight into the data and has been welcomed by the scientific community around the Aeolus mission, it has become apparent that an additional, more flexible environment for sophisticated data interaction would be beneficial. This environment would allow higher-level data manipulation and help the community to work and collaborate on the implementation of algorithms to further exploit the data from the Aeolus mission.
The VirES service will be extended with a Virtual Research Environment (VRE) (https://vre.aeolus.services) at the beginning of 2022 in order to provide these new data manipulation capabilities to users. To prepare the release, requirements and expectations from potential users were collected during an initial design phase, which helped to adapt the VRE to individual user demands.
In order to allow users to achieve the full potential of the VRE, it is important to provide an extensive set of examples and documentation (https://notebooks.aeolus.services). To help in this activity, as well as to provide support during the design phase, a scientific partner team has been established with LMU and DLR.
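As a flavour of what such notebooks contain, the hedged sketch below shows programmatic data access from the VRE. We assume the viresclient Python package and its AeolusRequest interface here; the collection and field names follow our recollection of the service's example notebooks and should be verified against the documentation linked above.

```python
# Hedged sketch of programmatic access from the VRE. We assume the
# `viresclient` Python package and its AeolusRequest interface here;
# collection and field names follow our recollection of the service's
# example notebooks and should be verified against the documentation.
from viresclient import AeolusRequest

request = AeolusRequest()
request.set_collection("ALD_U_N_2B")               # L2B wind product
request.set_fields(rayleigh_wind_fields=[
    "rayleigh_wind_result_start_time",
    "rayleigh_wind_result_wind_velocity",          # HLOS wind result
])
data = request.get_between(start_time="2021-09-03T00:00:00Z",
                           end_time="2021-09-03T06:00:00Z")
df = data.as_dataframe()                           # pandas DataFrame
print(df.head())
```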
The objective of the VirES for Aeolus service, now including the VRE, is to simplify working with Aeolus data, so that people unfamiliar with the mission can work with the data more easily and quickly, while also providing powerful tools to experienced users.
This poster will present the current status of the VirES/VRE project, its design and outlook for 2022 when it will enter its operational phase.
The JATAC campaign in September 2021 on and above the Cape Verde Islands has resulted in a large dataset of in-situ and remote measurements. In addition to the calibration/validation of ESA's Aeolus ALADIN, the campaign also featured secondary scientific objectives related to climate change. The atmosphere above the Atlantic Ocean off the coast of West Africa is ideal for studying the Saharan Aerosol Layer (SAL), the long-range transport of dust, and the regional influence of SAL aerosols on the climate.
We equipped a light aircraft (Advantic WT-10) with instrumentation for in-situ aerosol characterization. Ten flights were conducted over the Atlantic Ocean, up to more than 3000 m above sea level, during two intense dust transport events. The airborne measurements were supported by the ground-based long-term deployment of the PollyXT, EVE and Halo lidars and an AERONET sun photometer. The lidars were used to plan the flights in great detail, in addition to the dedicated WRF-Chem and CAMS numerical weather and dust simulations used for forecasting.
The particle light absorption coefficient was determined at three different wavelengths with Continuous Light Absorption Photometers (CLAP). They were calibrated with the dual wavelength photo-thermal interferometric measurement of the aerosol light-absorption coefficient in the laboratory. The particle size distributions above 0.3 µm diameter were measured with two Grimm 11-D Optical Particle Size Spectrometers (OPSS). These measurements were conducted separately for the fine aerosol fraction and the enriched coarse fraction using an isokinetic inlet and a pseudo-virtual impactor, respectively.
The aerosol light scattering and backscattering coefficients were measured with an Ecotech Aurora 4000 nephelometer. The instrument used a separate isokinetic inlet and was calibrated with CO2 prior to the campaign; the calibration was validated after it. We measured the total and diffuse solar irradiance with a Delta-T SPN1 pyranometer. CO2 concentration, temperature, aircraft GPS position and altitude, and air and ground speed were also measured.
The first event, at the beginning of the campaign, featured a very homogeneous Saharan dust layer in space (horizontally and vertically) and time. The second event, towards the end of the campaign, featured strong horizontal gradients in aerosol composition and concentration, and layering in the vertical direction. These layers were often less than 100 m thick, separated by layers of dust-free air.
The complex mixtures of aerosols in the outflow of Saharan dust over the tropical Atlantic Ocean will be characterized. We will show the in-situ atmospheric heating/cooling rate and provide insight into the regional and local effects of this heating of the dust layers. These measurements will support research on the evolution, dynamics, and predictability of tropical weather systems and provide input to, and verification of, climate models.
The Joint Aeolus Tropical Atlantic Campaign (JATAC) was finally performed in summer/autumn 2021 on the Cabo Verde Islands. In addition to an impressive airborne fleet based on the island of Sal, intense ground-based and airborne in-situ measurements took place on and above Mindelo on the island of São Vicente.
After a dedicated orbit change in June 2021, the measurements of ESA's Aeolus satellite have been performed directly over Mindelo every Friday evening, providing one prime objective for the research activities. Furthermore, the campaign is dedicated to science studies for, e.g., the EarthCARE and WIVERN missions.
At the Ocean Science Centre Mindelo (OSCM), a full ACTRIS remote sensing supersite has been set up with instrumentation from different institutions since June 2021. The instrumentation includes the multiwavelength Raman polarization lidar PollyXT, an AERONET sun photometer, a scanning Doppler wind lidar, a microwave radiometer and a cloud radar belonging to the ESA fiducial reference network (FRM4Radar). Next to these aerosol, cloud, and wind remote sensing facilities, ESA's novel reference lidar system EVE, a combined linear/circular polarization lidar with Raman capabilities, was deployed; it can mimic the observations of the space-borne lidar ALADIN onboard Aeolus. In addition to this ground-based equipment, a lightweight airplane was based at the São Vicente airport during the intensive campaign in September 2021, performing in-situ measurements of the aerosol layers around the island up to an altitude of about 3 km.
During this intensive period in September 2021, very different aerosol conditions were observed above and around Mindelo. Usually, the marine boundary layer, up to an altitude of about 1 km, was topped by a layer of Saharan dust reaching up to 6 km altitude. The amount and height of the Saharan dust varied during the 3-week campaign, providing a wide variety of aerosol conditions. Finally, volcanic aerosol from the La Palma volcano was observed on São Vicente Island in the local boundary layer and partly above.
In this presentation, we present first results on the validation of the Aeolus products as well as closure studies of the aerosol properties around the island of São Vicente.
The Aeolus aerosol products have been intensively validated for the direct overpasses with the Aeolus reference system EVE and the PollyXT lidar, which also allows polarization effects of oriented particles and the wavelength behaviour of the observed particles to be understood. The intercomparison between the ground-based lidars yielded excellent agreement, giving confidence that they can act as ground truth for Aeolus. The first comparisons to the Aeolus aerosol products confirmed that the backscatter coefficient can be retrieved well in the Saharan dust layer. However, due to the low signal return, the lowermost aerosol layers below 2-3 km could partly not be resolved with the current operational algorithm suite.
Additionally, wind observations from the ground-based scanning Doppler lidar will be used to validate the Aeolus wind products above the island. Of special interest is whether Aeolus is able to detect so-called Mie winds in the dense Saharan dust layers.
The airborne in-situ measurements revealed that, at the beginning of the campaign, the Saharan dust layer was very homogeneous in space (horizontally and vertically) and time, while towards the end of the campaign strong horizontal and vertical gradients in aerosol composition and concentration were found.
As a next step, closure studies between the airborne in-situ and the ground-based measurements will be performed, which will provide detailed insight into the microphysical aerosol properties. These studies will help to understand the representativeness of the ground-based supersite in the context of the regional aerosol distribution. The results will thus give valuable information for validation activities for Aeolus, but also for other missions like EarthCARE, in a region of the world where measurements are sparse.
In this context, another intensive ASKOS campaign is planned for spring/summer 2022 on São Vicente Island, comprising a larger instrument suite and covering the prime Saharan dust outbreak season.
The ASKOS team:
Holger Baars(1), Eleni Marinou(2), Peristera Paschou(2), Griša Močnik(3), Nikos Siomos(2), Ronny Engelmann(1), Annett Skupin(1), Johannes Bühl(1), Razvan Pirloaga(4), Cordula Zenk(5),(7), Samira Moussa Idrissa(6), Daniel Tetteh Quaye(6), Desire Degbe Fiogbe Attannon(6), Eder Silva(7), Elizandro Rodrigues(7), Pericles Silva(7), Sofia Gómez Maqueo Anaya(1), Henriette Gebauer(1), Martin Radenz(1), Moritz Haarig(1), Athina Floutsi(1), Albert Ansmann(1), Bogdan Antonescu(4), Dragos Ene(4), Lukas Pfitzenmaier(8), Ewan O’ Connor(9), Patric Seifert(1), Ioanna Mavropoulou(2), Thanasis Georgiou(2), Christos Spirou(2), Eleni Drakaki(2), Anna Kampouri(2), Ioanna Tsikoudi(2), Antonis Gkikas(2), Emmanouil Proestakis(2), Luke Jones(10), Luka Drinovec(3), Uroš Jagodič(11), Blaž Žibert(11), Matevž Lenarčič(12), Anca Nemuc(4), Birgit Heese(1), Dietrich Althausen(1), Angela Benedetti(10), Ulla Wandinger(1), Doina Nicolae(4), Pavlos Kollias(2), Vassilis Amiridis(2), Rob Koopman(13), Jonas Von Bismarck(13), Thorsten Fehr(14).
The ASKOS institutions:
1 Leibniz Institute for Tropospheric Research (TROPOS), Leipzig, Germany
2 National Observatory of Athens (NOA), Athens, Greece
3 University of Nova Gorica, Ajdovščina, Slovenia
4 National Institute of Research & Development for Optoelectronics, INOE, Magurele, Romania
5 GEOMAR Helmholtz Centre for Ocean Research Kiel, Kiel, Germany
6 Atlantic Technical University (UTA), Cape Verde
7 Ocean Science Centre Mindelo (OSCM), Mindelo, Cape Verde
8 University of Cologne, Cologne, Germany
9 Finnish Meteorological Institute (FMI), Finland
10 European Centre for Medium-Range Weather Forecasts (ECMWF), Reading, UK
11 Haze Instruments d.o.o., Ljubljana, Slovenia
12 Aerovizija d.o.o., Vojsko, Slovenia
13 European Space Agency (ESA-ESRIN), Frascati, Italy
14 European Space Agency (ESA), Noordwijk, The Netherlands
The Atmospheric LAser Doppler INstrument (ALADIN) onboard Aeolus is the world's first space-based Doppler wind lidar to acquire global wind profiles. ALADIN operates at 355 nm and its design is optimized for wind observations; however, cloud and aerosol information can also be retrieved from the attenuated backscatter signals. Using a variation of the High Spectral Resolution Lidar (HSRL) technique, two main detection channels are used: a 'Mie' channel and a 'Rayleigh' channel. ATLID (Atmospheric Lidar) is the lidar to be embarked on the Earth Cloud Aerosol and Radiation Explorer (EarthCARE) mission. ATLID is an HSRL system optimized exclusively for cloud and aerosol observations.
Even though ALADIN has a lower spatial resolution, a lower signal-to-noise ratio (SNR), and no depolarization channel in comparison to the ATLID instrument, we can still adapt the ATLID L2 retrieval algorithms developed for the EarthCARE mission, the ATLID feature mask (A-FM) and ATLID profile retrieval (A-PRO) algorithms, to the ALADIN data. The algorithms are being implemented in the operational Aeolus L2A processor (called AEL-FM and AEL-PRO). AEL-FM and AEL-PRO focus on the challenge of making accurate retrievals of cloud and aerosol extinction and backscatter profiles, specifically addressing the low-SNR nature of the lidar signals and the need for intelligent binning/averaging of the data. AEL-FM and AEL-PRO use the attenuated Mie and Rayleigh backscatter signals derived only from the Mie spectrometer measurements. Therefore, we also developed an algorithm to calibrate the Mie and Rayleigh signals and perform the cross-talk correction.
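Conceptually, such a cross-talk correction treats the calibrated channel signals as linear mixtures of the true particulate and molecular backscatter and inverts the coupling matrix per range bin. The sketch below illustrates this with invented coefficients; it is not the operational AEL processor code.

```python
# Conceptual cross-talk correction, not the operational AEL code: the
# calibrated Mie- and Rayleigh-channel signals are modelled as linear
# mixtures of the true particulate and molecular backscatter, and the
# 2x2 coupling matrix is inverted per range bin. Coefficients invented.
import numpy as np

C = np.array([[0.90, 0.15],    # Mie channel:      particulate, molecular
              [0.08, 0.95]])   # Rayleigh channel: particulate, molecular

def unmix(s_mie, s_ray, coupling=C):
    """Return (beta_particle, beta_molecular) from the channel signals."""
    return np.linalg.solve(coupling, np.array([s_mie, s_ray]))

beta_p, beta_m = unmix(s_mie=1.2, s_ray=0.9)
print(f"beta_p = {beta_p:.3f}, beta_m = {beta_m:.3f} (arbitrary units)")
```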
We have tested AEL-FM and AEL-PRO using Aeolus L1b data for a large number of orbits. So far the AEL-PRO extinction profiles have been compared to CALIPSO retrievals for biomass burning and dust aerosol cases. In this presentation, we will focus on the AEL-FM and AEL-PRO products and the comparison to CALIPSO and/or AERONET data for various aerosol, cloud and Polar Stratospheric Cloud cases.
The Australian “Black Summer” megafires resulted in an unprecedented and persistent perturbation of stratospheric aerosol and gaseous composition, radiative balance and dynamical circulation (Khaykin et al., 2020). One of the most striking repercussions of this event was the generation of a synoptic-scale anticyclone that formed around a massive cloud of smoke in the stratosphere and persisted for 3 months. This phenomenon, termed the Smoke-Charged Vortex (SCV), acted to confine the fire plume, maintaining absorptive smoke aerosols at high concentration, which led to a rapid solar-driven ascent of combustion products up to 35 km altitude.
The SCV anticyclone was identified by the ECMWF Integrated Forecasting System (IFS) through assimilation of satellite temperature profiling, in particular GNSS radio occultations (RO). Since the SCV occurrence was largely limited to the southern extratropics, where the meteorological radiosounding network is particularly sparse, there exist very few observations of wind velocity inside the anticyclone. The ESA Aeolus space-borne Doppler lidar is a unique sensor for providing direct wind measurements of this atmospheric phenomenon at full scale.
Here we present the Aeolus observational perspective on the SCV during its early stage, using L2B Rayleigh and Mie wind profiles compared to ECMWF ERA5 and IFS (re)analyses. We also use the Aeolus L2A cloud/aerosol product to identify the associated smoke cloud in comparison with collocated CALIPSO satellite lidar observations. By analyzing the wind and temperature variances derived from Aeolus and GNSS-RO, respectively, we provide evidence for and discuss the generation of gravity waves by the SCV anticyclone and their vertical propagation.
References
Khaykin, S., Legras, B., Bucci, S., Sellitto, P., Isaksen, L., Tence, F., Bekki, S., Bourassa, A., Rieger, L., Zawada, D., Jumelet, J. and Godin-Beekmann, S. (2020) 'The 2019/20 Australian wildfires generated a persistent smoke-charged vortex rising up to 35 km altitude', Communications Earth & Environment, 1, 22.
With Aeolus now in its fourth year of successful operation, valuable wind measurement data is still being provided by its instrument ALADIN (Atmospheric LAser Doppler Instrument) to the Global Observing System (GOS) with a significant positive impact on numerical weather prediction (NWP). This important contribution throughout the mission was made possible by continuous improvements to the data processors in updated baselines that accompanied the entire mission, leading, for example, to the implementation of the essential bias correction scheme. These upgrades were based on a continuous validation of the Aeolus measurements using NWP model data or reference instrument measurements to determine their systematic and random errors.
In order to monitor the various processor updates and their influence on the data processed with these new baselines, the radar wind profiler network of the German weather service makes an important contribution, specifically for the region of Germany.
The network, consisting of four UHF radar wind profilers operating at 482 MHz, provides wind observations in clear air as well as in particle-laden regions up to 16 km altitude with high accuracy on a 24/7 basis. Covering six Aeolus orbits per week, the four sites gather enough data to create long-term statistics of Aeolus observation biases and random errors, also revealing possible instrument degradation.
While this performance monitoring is based on operational near real-time (NRT) data from Aeolus, the radar wind profiler measurements are also used to analyze reprocessed data sets and their improvements compared to NRT data. This provides important insights for future reprocessing to maximize the quality of Aeolus measurements based on further processor improvements.
Since these reprocessed data represent a homogeneous data set, additional investigations of the dependence of the bias on, e.g., height, range-bin thickness and wind speed were performed. Because all data are processed with the same baseline, influences of different processor versions on the data quality can be excluded.
The presented work gives an overview of the long-term validation of Aeolus wind measurements above Germany based on radar wind profiler observations. The analysis of systematic and random errors of the entire mission as well as comparisons between the operational and reprocessed data sets are shown.
Background
Aquatic land cover represents the land cover types that are significantly influenced by the presence of water over an extensive period of the year. Monitoring Global Aquatic Land Cover (GALC) types plays an essential role in preserving aquatic ecosystems and maintaining the ecosystem services they provide for humans. Currently, a number of GALC datasets have been produced thanks to the availability of free and open Earth Observation (EO) data and cloud-computing platforms. However, map users are confronted with prominent inconsistencies and uncertainties when applying existing GALC datasets in different fields of research (e.g. climate modelling, biodiversity conservation) due to the lack of a uniform and applicable aquatic land cover characterization framework. In addition, as aquatic ecosystems are complex and dynamic in nature, the sustainable management of aquatic resources requires spatially explicit information on both the vegetation types and water presence. However, previous GALC mapping has focused on water bodies, and an up-to-date and thematically detailed GALC product characterizing water and vegetation collectively is still lacking.
Objectives
In this study, our main objectives are:
1) Developing a comprehensive aquatic land cover characterization framework that not only ensures the consistency in GALC mapping but also serves the needs of multiple users (e.g. climate users, sustainable water resource management users) interested in different aspects of aquatic lands.
2) Assessing the applicability of the proposed framework by developing a prototype GALC database based on existing datasets, and identifying the gaps of current datasets in GALC mapping.
3) Improving the global mapping of various aquatic land cover types by exploiting multi-source EO data.
Methodology
To better understand the user needs, we reviewed 33 existing GALC datasets (Xu et al. 2020). The major user groups and user requirements were identified from the citing papers of these datasets and international conventions (e.g., Ramsar Convention), policies (e.g., Sustainable Development Goals), and agreements (e.g., Paris Agreement) in relation to aquatic ecosystems. Based on the identified user needs and the United Nations Land Cover Classification System (LCCS, Di Gregorio 2005), a new GALC characterization framework was formulated.
Then, eight out of the reviewed 33 GALC datasets were harmonized and integrated to construct a prototype GALC database for the year 2015 at a 100m spatial resolution conforming with the proposed GALC characterization framework (Xu et al. 2021). By performing an independent validation on the prototype database, the limitations of current datasets towards GALC mapping were systematically analyzed. To demonstrate the applicability of the prototype GALC database, potential use cases were discussed using maps provided by the database.
Finally, making use of the reference dataset provided by the Copernicus Global Land Service Land Cover map at 100m (CGLS-LC100) project as well as multi-source EO data including optical (e.g., Sentinel-2), Synthetic Aperture Radar (SAR, e.g., Sentinel-1 and ALOS/PALSAR), and various ancillary datasets (e.g., Global Ecosystem Dynamics Investigation (GEDI) forest height, climate, topographic, and soil), an improved mapping of global aquatic land cover types was conducted on the Google Earth Engine (GEE) platform.
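To make the classification step concrete, the sketch below shows the general shape of a random forest workflow in the GEE Python API. The asset IDs, band names and class property are placeholders, not the actual CGLS-LC100-based training set or feature stack described above.

```python
# Hedged sketch of the GEE classification step in the Python API. Asset
# IDs, band names and the class property are placeholders, not the actual
# CGLS-LC100-based training set or feature stack described above.
import ee
ee.Initialize()

features = (ee.Image("users/example/galc_feature_stack")       # placeholder
            .select(["B2", "B3", "B4", "B8", "VV", "VH"]))
samples = ee.FeatureCollection("users/example/galc_training")  # placeholder
training = features.sampleRegions(collection=samples,
                                  properties=["class"], scale=100)
classifier = (ee.Classifier.smileRandomForest(numberOfTrees=200)
              .train(features=training, classProperty="class",
                     inputProperties=features.bandNames()))
galc_map = features.classify(classifier)   # classified 100 m GALC map
```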
Results and discussions
Our literature review showed that users of GALC datasets require a multitude of water-related information such as the water persistence and vegetation type, while none of the current datasets could provide such comprehensive information. Based on the user needs and the ISO-certified LCCS, the proposed GALC characterization framework comprised three levels, of which Level-1 identified aquatic land cover as a whole, representing the discrimination of aquatic and non-aquatic lands. At Level-2, five classifiers were adopted: the persistence of water - the duration of water covering the surface; the presence of vegetation - the existence or absence of vegetation; the artificiality of cover - whether or not a land cover is managed by humans; the accessibility to the sea - the distance to the ocean; and the water salinity - the concentration of Total Dissolved Solids (TDS). At Level-3, vegetated and non-vegetated types were further specified into more detailed classes by the life form classifier. This level-by-level and classifier-by-classifier design is flexible enough to allow users to adapt the framework for their specific applications.
The created prototype GALC database for 2015 included six maps at three levels at a 100 m resolution (Figure 1). Our independent and quantitative accuracy assessment showed that the Level-1 map tended to overestimate the general extent of global aquatic land cover. The Level-2 maps were good at characterizing permanently flooded areas and natural aquatic types, while accuracies were poor in mapping temporarily flooded and waterlogged areas as well as artificial aquatic types. The Level-3 maps could not sufficiently characterize the detailed life form types (e.g., trees, shrubs) for aquatic land cover. However, the prototype GALC database was flexible enough to derive user-oriented maps for hydrological or climate modelling and global land change monitoring.
Based on the feature combination derived from Sentinel-1, Sentinel-2, ALOS/PALSAR mosaic, and ancillary datasets, our best classification model achieved an overall accuracy of 83.2% in mapping global aquatic land cover. The spaceborne satellite optical and SAR data played a key role in characterizing various aquatic land cover types, of which optical features provided by Sentinel-2 imagery were of higher importance than other data. Sentinel-1 SAR data and the ALOS/PALSAR mosaic exhibited remarkable potential in improving the identification of short vegetation (e.g., herbaceous cover) and trees in aquatic areas. Ancillary datasets such as the GEDI forest canopy height dataset and soil data improved the mapping of trees and bare/sparsely vegetated aquatic lands, respectively.
The LCCS-based GALC mapping framework proposed in this study can help to standardize the way aquatic land cover is described, and is thus promising for bridging the gap between user needs and the various GALC datasets. The interaction among water, vegetation, and wet soils makes aquatic land cover types more difficult to characterize with the approaches applied to general Global Land Cover (GLC) mapping, most of which used single-sensor satellite data (ESA, 2017). The recently released 10 m resolution WorldCover 2020 GLC product (Zanaga et al., 2021) was created based on both Sentinel-1 and Sentinel-2 data, yet some aquatic areas were reported to be mapped with low accuracies. Our research represents an important step forward in the high-resolution and more accurate global mapping of comprehensive aquatic land cover types. With evolving Earth observation opportunities such as the launch of the BIOMASS mission, which will carry a fully polarimetric P-band SAR, limitations in the current GALC characterization can be addressed in the future.
Keywords: Aquatic land cover, Global mapping, Characterization framework, 10m-resolution, Multi-source EO data.
References
Di Gregorio, A., 2005. Land cover classification system: classification concepts and user manual: LCCS. Food & Agriculture Organization.
ESA (2017). Land Cover CCI Product User Guide Version 2. Available online: https://maps.elie.ucl.ac.be/CCI/viewer/download/ESACCI-LC-Ph2-PUGv2_2.0.pdf.
Xu, P., Herold, M., Tsendbazar, N.-E., & Clevers, J.G.P.W., 2020. Towards a comprehensive and consistent global aquatic land cover characterization framework addressing multiple user needs. Remote Sensing of Environment, 250, 112034.
Xu, P., Tsendbazar, N.-E., Herold, M., & Clevers, J.G.P.W., 2021. Assessing a Prototype Database for Comprehensive Global Aquatic Land Cover Mapping. Remote Sensing, 13.
Zanaga, D., Van De Kerchove, R., De Keersmaecker, W., Souverijns, N., Brockmann, C., Quast, R., Wevers, J., Grosu, A., Paccini, A., Vergnaud, S., Cartus, O., Santoro, M., Fritz, S., Georgieva, I., Lesiv, M., Carter, S., Herold, M., Li, L., Tsendbazar, N.-E., Ramoino, F., Arino, O., 2021. ESA WorldCover 10 m 2020 v100. Zenodo. https://doi.org/10.5281/ZENODO.5571936.
"Buy land, God doesn't create any more." This aphorism by Mark Twain can be considered as the guiding principle for humankind, evident from the steadily increasing global land consumption. Agricultural land use grew by a global average of 3.8 MHa per year during 1961–2007, most of which was distributed among the developing countries where the rate continued to increase even during 1990–2007. The term “land grab” was coined in this context, and is related to the increased number of large-scale and commercial land deals by international actors. Two months after the hurricanes Irma and Maria hit Barbuda in 2017, the construction of a new international airport led to accusations of degrading the Codrington Lagoon National Park and contravening the conventions of the Ramsar Program. Scientists have analyzed the aftermath with respect to historical legacies, disaster capitalism, manifestation of climate injustices and green gentrification. However, no attempt has been made to quantify and allocate land use and land cover change (LULCC) of Barbuda before and after the 2017 Hurricane disasters. Remote sensing data and volunteered geographic information were analyzed to detect the potential changes in natural LULC related to human activities. We processed Sentinel-1, Sentinel-2, NOAA VIIRS data, MODIS Terra, and PlanetScope data, and obtained data from the OSM archive via the Ohsome API and Twitter via twarc2 API. We observed that human-induced LULCC is occurring on different sites on the island, with decreased activities in Codrington, but increased and ongoing activities leading to a LULCC in Coco Point and Palmetto Point. In total, 2.97 km2 of new areas that are covered by “bare soil and artificial surfaces” fell into the natural reserve of the Ramsar site of the Codrington Lagoon. With an accuracy of 97.1 %, we estimated a total increase of vegetated areas by 6.56 km2 and a simultaneous increase in roads and buildings with a total length of 249.67 km and a total area of 1.43 km2; this includes the area under construction of the central international airport. The satellite classification measures an area of ~1.09 km2, which is ten times the combined sum of all the buildings mapped with the OSM. The vegetation condition itself depict a steady decrease since 2017. While some places show a decrease in human activity, such as Codrington and the Lighthouse Bay Resort, other places experienced increased human activities. They became new nighttime light radiance hotspots on the island. Since these hotspots were the sites of the Barbudan Ocean Club, the dispute along the human-induced LULCC in the aftermath of the 2017 Hurricanes will continue.
Wetlands are globally threatened by degradation and disappearance under the combined effects of increasing anthropogenic disturbances and climatic extremes. These pressures may drive abrupt shifts in wetland ecosystem dynamics, which necessitate robust long-term monitoring techniques for their study. Here, we used a piecewise regression model to characterize long-term Ecosystem Change Types (ECT) in dynamic wetland surface water and vegetation proxies (e.g., the Modified Normalized Difference Water Index (MNDWI) and the Normalized Difference Vegetation Index (NDVI)) calculated from twenty years (2000-2019) of MODIS and Landsat time series imagery over the Inner Niger Delta in Mali. In addition, we investigated the added benefits of using a dense Landsat time series for our segmented trend analysis by comparing the class-specific accuracies of the detected ECTs with those produced at the MODIS scale. We developed a reference dataset by validating temporal trajectories at selected probability sample locations based on the TimeSync logic. Our results show statistically significant (p < 0.05) long-term trends in wetland surface dynamics, along with higher overall, user's and producer's accuracies for the Landsat ECT map (OA = 0.89 ±0.01), surpassing that of the MOD09A1-derived product (OA = 0.37 ±0.03). This study demonstrates a robust approach for long-term wetland monitoring that highlights the benefits of using time-series imagery with spatial resolutions at the Landsat scale for accurate quantification of linear and non-linear ecosystem responses in vast, highly diverse floodplain systems. Investigation into the transferability of our framework to other wetland types is the subject of ongoing work. Delivering such an improved assessment that better resolves the spatial and temporal characteristics of wetland ecosystems has the potential to support the information needs of global conservation and restoration efforts.
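As a toy illustration of the segmented trend model underlying the ECTs, the sketch below fits two linear segments to a synthetic index series by scanning candidate breakpoints; the operational method and its significance testing are more involved.

```python
# Toy segmented (piecewise) trend fit: scan candidate breakpoints and keep
# the two-segment linear fit with the lowest residual sum of squares. The
# operational ECT method and its significance testing are more involved.
import numpy as np

def best_breakpoint(t, y, min_seg=3):
    """Return (breakpoint index, SSE) of the best two-segment linear fit."""
    best_k, best_sse = None, np.inf
    for k in range(min_seg, len(t) - min_seg):
        sse = 0.0
        for seg in (slice(None, k), slice(k, None)):
            coef = np.polyfit(t[seg], y[seg], 1)
            sse += np.sum((y[seg] - np.polyval(coef, t[seg])) ** 2)
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k, best_sse

t = np.arange(2000, 2020, dtype=float)
y = np.where(t < 2011, 0.02 * (t - 2000), 0.22 - 0.03 * (t - 2011))
y += np.random.default_rng(2).normal(0, 0.01, t.size)  # NDVI-like proxy
k, sse = best_breakpoint(t, y)
print(f"break detected in {int(t[k])}, SSE = {sse:.4f}")
```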
Floodplains account for nearly one-third of the Amazon basin. The young, early successional white-water alluvial forests (várzea) are amongst the most productive ecosystems on our planet. Low-várzea in particular is among the most productive forest types in the world; it is characterized by densely growing mono-specific stands and high flooding pressure. Species abundance decreases with decreasing flood duration and depth, whereas species diversity increases. While the várzea forests along the main river channel have been widely researched since the late 1980s, little is known about the extent and dynamics of inundation within várzea forests along the Juruá, a major tributary of the Amazon main stem.
This study mapped spatio-temporal floodplain dynamics for a subset of the Juruá river floodplain. The objective was to determine the extent and duration of inundation along the Juruá River, from which metrics of biodiversity and productivity can be derived. Data from Copernicus Sentinel-2 and ALOS-2/PALSAR-2 were explored. Furthermore, the study assessed the applicability of microwave remote sensing, especially PolSAR, to tropical floodplain land cover and inundation mapping.
The study was divided into three main steps. In the first step, land cover classification was performed based on a Sentinel-2 segmentation. One random forest model was trained using Sentinel-2 data and another using multi-temporal PALSAR-2 PolSAR products, namely the Yamaguchi four-component decomposition and the Shannon entropy. The land cover classification results were inter-compared. In the second step, the forest objects were re-segmented using the double-bounce component of the Yamaguchi decomposition of the PolSAR data to allow a better distinction between floodplain and upland forest. Subsequently, the floodplain extent was mapped using only a PALSAR-2 PolSAR time series, relying on multi-temporal metrics of the L-band backscatter and polarimetric products (Yamaguchi four-component decomposition and Shannon entropy). The third step estimated the inundation duration of the objects within the floodplain area using an L-band HH time series in combination with in situ water level information.
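As a rough illustration of the supervised classification step, the sketch below trains one random forest per sensor on synthetic per-object feature tables. The feature names, class count and train/test split are placeholder assumptions, so the printed accuracies are not meaningful for real data.

```python
# Illustrative per-sensor random forest comparison on synthetic object features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_objects = 500
s2_feats = rng.normal(size=(n_objects, 10))     # e.g. mean Sentinel-2 band reflectances
polsar_feats = rng.normal(size=(n_objects, 8))  # e.g. Yamaguchi components, Shannon entropy
labels = rng.integers(0, 5, n_objects)          # e.g. forest, water, herbaceous, ... (synthetic)

for name, X in {"Sentinel-2": s2_feats, "PALSAR-2 PolSAR": polsar_feats}.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
    print(name, "OA:", accuracy_score(y_te, rf.predict(X_te)))
```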
Forest covered 3411.17 km² of the study area, of which 1103.97 km² were flooded at high water level. Small herbaceous vegetation, bare soil and open water covered 46.19 km², 18.95 km² and 63.28 km², respectively. For the land cover, the random forest algorithm achieved an estimated overall accuracy of 97.4%. The floodplain extent mapped by the PolSAR random forest model was only slightly more accurate than an L-HH threshold-based classifier (90.3% and 89%, respectively). Inundation duration varied from 27 to 338 days per year, with most objects inundated for around 53 or 111 days. Inundation depth ranged from 7.6 cm to 1342.9 cm. Following literature reports of a maximum inundation depth of 3 m for high-várzea, a total area of 700.68 km² is covered by high-várzea and the remaining 221.37 km² by low-várzea. High-várzea was inundated for up to 163 days, about one month longer than observed in a similar study along the Solimões river in the central Amazon by Ferreira-Ferreira et al. (2015).
To further investigate the distinction between high-várzea and low-várzea, spaceborne LiDAR-derived canopy height and vegetation structure information could be used. Low-várzea is characterized by shorter tree species (30 m – 35 m), whereas in high-várzea individual trees of up to 45 m canopy height are reported. The results will help identify biodiversity hotspots and support the monitoring of floodplain forests in the Amazon.
References:
Ferreira-Ferreira, J., Silva, T.S.F., Streher, A.S., Affonso, A.G., Almeida Furtado, L.F. de, Forsberg, B.R., Valsecchi, J., Queiroz, H.L., & Moraes Novo, E.M.L. de (2015). Combining ALOS/PALSAR derived vegetation structure and inundation patterns to characterize major vegetation types in the Mamirauá Sustainable Development Reserve, Central Amazon floodplain, Brazil. Wetlands Ecology and Management, 23, 41–59.
Wetlands are essential ecosystems that provide a variety of services to humans and the environment. In recent years, wetlands have been impacted by climatic and human drivers, requiring a deeper understanding of the resilience of these essential ecosystems to change. Water delineation is essential to understand how water availability changes, and wetland monitoring has improved thanks to recent developments in remote sensing. This is especially the case in Sweden, one of the European countries with the largest wetland surface water extent. However, quantifying wetland water extent and its changes remains a challenge. Standard detection of water surfaces by optical sensors can only recognize open water, missing water below vegetation. Here, we used a multi-sensor approach utilizing different polarizations of Synthetic Aperture Radar (SAR) and optical time series from the ESA Sentinel-1 and Sentinel-2 satellites during two seasons to identify these waters and their changes in 9 Swedish wetlands of the Ramsar Convention. After pre-processing the SAR images and filtering cloudy Sentinel-2 images on the Google Earth Engine (GEE) cloud computing platform, we created composite images from three layers: the radar backscattering coefficient, the radar polarization difference, and the Normalized Difference Vegetation Index from the optical imagery. We then used the machine learning K-means clustering method to detect the increased backscatter due to the double-bounce of the radar signal and thus recognize water below vegetation. We also investigated the increase in interferometric coherence as an indicator of submerged vegetation. As a result, we obtained water inundation frequency maps for the Ramsar wetlands and compared our results with field data and hydroclimatic data. Our approach identified on average around 20 percent of areas with water below vegetation that optical-only techniques missed, allowing us to better delineate water extent in Swedish wetlands. We recommend integrating polarimetric features of radar data, optical data and interferometry to fully account for wetland surface water extent and its changes, thereby improving surface water quantification.
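A minimal sketch of the clustering step, assuming per-pixel composites of backscatter, polarization difference and NDVI. The toy cluster values and the heuristic for picking the flooded-vegetation cluster are illustrative, not the study's calibrated thresholds.

```python
# K-means on a [VV backscatter, polarization difference, NDVI] composite
# to separate open water, flooded vegetation (double-bounce) and dry vegetation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# toy composite: columns = [sigma0_VV_dB, pol_diff_dB, NDVI]
pixels = np.vstack([
    rng.normal([-18, 4, 0.1], 0.5, (300, 3)),  # open water: low backscatter, low NDVI
    rng.normal([-6, 9, 0.6], 0.5, (300, 3)),   # flooded vegetation: double-bounce boost
    rng.normal([-10, 5, 0.7], 0.5, (300, 3)),  # dry vegetation
])
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
# heuristic: the cluster with high VV backscatter *and* high NDVI is the
# "water below vegetation" candidate class
centers = km.cluster_centers_
flooded_veg = np.argmax(centers[:, 0] + 10 * centers[:, 2])
print("flooded-vegetation cluster:", flooded_veg, centers[flooded_veg])
```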
The project MONEOWET focuses on multispectral and hyperspectral Earth Observation (EO) data to investigate water quality in relation to agricultural activities within the Térraba Sièrpe Wetland in Costa Rica. The study is part of an initiative investigating the applicability of remote sensing data in tropical systems. The main topic of the project is the use of EO data to assess the impacts and dynamics of agricultural activities on the sensitive RAMSAR wetland ecosystem Térraba Sièrpe at the mouth of the Térraba and Sièrpe rivers. One goal of the project is to develop a first EO database and define analytical methods for water quality studies in the area and beyond. The results will provide deeper insight into the processes of the entire wetland ecosystem and may help detect damage to the fragile environment caused by surrounding agricultural activities. The long-term goal is sustainable water and land use management that can serve as an example for many other tropical wetlands in Latin America.
Scientists from Germany and Costa Rica are working together to collect data with established (e.g. Sentinel 2, Landsat 8) and new Earth Observation sensors (e.g. DESIS on the ISS) to assess water quality parameters and link these parameters to agricultural land use in the surrounding area. The common goal of the project is to evaluate the applicability of Landsat 8, Sentinel-2 and DESIS multi- and hyperspectral satellite imagery for water quality studies in tropical environments.
Field campaigns were carried out during the wet season (November 2018 and November 2019) and the dry season (March 2019 and March 2021). In-situ measurements were taken at sampling sites in the three main meanders of the Sièrpe River and the main meander of the Térraba River within the wetland. At each sampling site, the spectral signature of the river was recorded using an Ocean Optics Sensor System (OOSS). The multispectral (Sentinel-2, Landsat 8) and hyperspectral (DESIS) EO data were atmospherically corrected to bottom-of-atmosphere (BOA) reflectance using Sen2cor (ESA) and PACO (Python-based Atmospheric Correction, DLR), respectively. The WASI-2D inversion method, a semi-analytical model that retrieves the optically active water quality variables chlorophyll, total suspended matter (TSM) and colored dissolved organic matter (CDOM), was parameterized with site-specific inherent optical properties (SIOPs) of the area and applied to time series of L2A Sentinel-2, Landsat 8 and DESIS images. Some of the Sentinel-2 and Landsat overpasses coincided with available field data; however, DESIS images could not be obtained during field campaigns, so only a qualitative evaluation is presented. Although cloud cover in the tropics is a major challenge, the influence of thin clouds could be corrected, and the concentrations of TSM and CDOM could be derived quantitatively. Chlorophyll could not be derived reliably in most areas, in particular not from Landsat 8, most likely because its concentration was relatively low and water absorption was dominated by CDOM. The high temporal dynamics of the river system, which is strongly influenced by tides, makes comparison of satellite data collected at different times very difficult, as is comparison with field data. Nevertheless, Sentinel-2-derived maps of water constituents and corresponding Landsat 8 and DESIS images show good agreement in the average concentrations of TSM and CDOM and plausible spatial patterns, and field measurements confirm that they are in a plausible range. The results indicate that under favorable observational and environmental conditions, the applied atmospheric correction and retrieval algorithm are suitable for mapping TSM and CDOM from DESIS, Sentinel-2 and Landsat 8 data in tropical environments, while chlorophyll remains challenging. Their quantitative determination by satellite is therefore an important contribution of this project to the ecological assessment of the waters and the surrounding environment of the study area.
When autumn changes to winter in northern latitudes, wetland methane emissions are suppressed due to low temperatures and freezing of the soil. During the transition from the non-frozen autumn state to the fully frozen winter state, however, methane is still emitted to the atmosphere, and these emissions are potentially significant in relation to the total annual budget. A longer freezing period might indicate higher emissions outside the growing season. The length of the freezing period and the corresponding methane emissions may differ between permafrost and non-permafrost regions, as well as between vegetation zones. We estimate the methane fluxes at northern latitudes in Eurasia and northern North America with the Carbon Tracker Europe – CH4 (CTE-CH4) atmospheric inversion model and combine the results with satellite soil freeze data (SMOS) to find out whether there are significant late autumn emissions and whether they continue throughout the period when the soil freezes to its winter state. We investigate the emissions in permafrost, discontinuous permafrost and non-permafrost regions, and in regions divided by vegetation type and climate subgroup. CTE-CH4 optimizes both anthropogenic and biospheric fluxes, and the current in situ observation network at northern latitudes enables spatially explicit flux estimates. Fluxes are solved at weekly time resolution, enabling the follow-up of soil freeze development.
Satellite remote sensing provides data for observing spatial and temporal conditions and changes of the environment, including vegetation, soil and atmosphere. In-situ measurements are an integral part of satellite remote sensing applications, and combining these two kinds of data opens new possibilities. Despite the undoubted advantages of space-borne data, such as large-area coverage and regular data provision, these data require verification. In-situ measurements provide data for the validation and calibration of satellite data and of models driven by satellite data. On the other hand, in-situ measurements are often sparse and only locally representative. Thus, in-situ measurements are required to link the sensor's signal to the actual ground situation, while satellite data enable the use of field data in a wider context.
Wetlands and grasslands are among the areas of interest of studies performed around the world, as they are some of the most important ecosystems on Earth. As reservoirs of biomass and CO2, wetlands interact with climate change. Moreover, wetlands and grasslands improve water quality, recharge groundwater, provide habitat for many animals and plants, and maintain biodiversity. Regular and large-scale monitoring of wetlands and grasslands is therefore highly required.
Many parameters are measured over wetlands and grasslands: CO2 fluxes, leaf area index (LAI), surface temperature, air temperature, soil moisture and chlorophyll. Furthermore, novel techniques such as chlorophyll fluorescence and spectral reflectance measurements are increasingly used.
The critical factor for an accurate characterization of a test site is the sampling strategy. The main objective of this research is to present various methods of in-situ measurement of spectral reflectance, chlorophyll fluorescence, LAI, APAR and fAPAR, CO2 fluxes, land surface temperature, air temperature, soil moisture, chlorophyll, biomass, vegetation height and soil temperature with respect to satellite remote sensing measurements. Different methods, including linear and cross transects as well as our own square IGIK method, were analysed. Moreover, this study aims to point out the limitations and feasibility of single measurements (spectral response, LAI, etc.). The motivation for this study was the harmonization and systematization of in-situ measurement techniques.
Biebrza National Park and its buffer zone was the study area of the research. This is the largest national park in Poland, with a total area of 59,223 ha; the buffer zone covers 66,824 ha. The Biebrza National Park contains water, marsh, peat and rush communities, as well as forest communities (alder, birch, riparian forests). It covers a large part of the Biebrza Valley, a great depression over 100 km long.
Measurements were carried out at 26 test sites comprising grasslands (12 sites), sedges (12 sites) and reeds (2 sites). These three kinds of vegetation were distinguished due to their different anatomy, growth, time of cuts and size; further analyses (e.g. modelling) were performed separately for each of them. The smaller number of reed test sites was caused by the limited accessibility of such areas. Five test sites were located outside the Park and its buffer zone, but no further than 5 km away. Test sites were chosen according to their distance from the Biebrza River, differences in soil moisture and intensity of vegetation. The points were distributed so as to capture the variability of the vegetation and its spatial differentiation. Data were collected from 2016 to 2020 during the vegetation season (April–October), with at least four campaigns per year. In-situ measurements were synchronized with Sentinel-2 or Sentinel-1 acquisitions (± three days, provided there was no rain) and were carried out between 9 a.m. and 5 p.m.
The field measurements were carried out according to three schemes: the linear transect, the cross transect and the square IGIK method. The linear transect consisted of 7–9 measurement points spaced ca. 50–80 m apart along a straight line. The cross transect consisted of 11 measurement points spaced ca. 10 m apart. The recordings of the square IGIK method were taken at the north, south, east and west corners at 80-m intervals to capture all variation. Vegetation sampling was designed in a square shape and samples were taken at different locations under representative conditions.
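For illustration, the snippet below generates the three sampling layouts in local coordinates. The spacings follow the text, while the origin, orientation and the exact distribution of points along the cross arms are assumptions.

```python
# Generating point layouts for the three sampling schemes (illustrative geometry).
import numpy as np

def linear_transect(n_points=8, spacing=65.0):
    """Points along a straight line, ca. 50-80 m apart (here 65 m)."""
    return np.column_stack([np.arange(n_points) * spacing, np.zeros(n_points)])

def cross_transect(spacing=10.0):
    """11 points: a centre point plus arms in the four cardinal directions
    (the split of points between arms is an assumption)."""
    pts = [(0.0, 0.0)]
    for d in (1, 2, 3):              # north/south arms: 3 points each
        pts += [(0.0, d * spacing), (0.0, -d * spacing)]
    for d in (1, 2):                 # east/west arms: 2 points each
        pts += [(d * spacing, 0.0), (-d * spacing, 0.0)]
    return np.array(pts)

def square_igik(side=80.0):
    """Recordings at the corners of an 80-m square."""
    return np.array([[0, 0], [side, 0], [side, side], [0, side]], float)

print(linear_transect().shape, cross_transect().shape, square_igik().shape)
```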
The results obtained indicate that each sampling scheme has advantages and disadvantages. The values of the measured variables varied between schemes by 5–15%. The greatest differences between the measured wetlands were found for soil moisture (14.5%) and LAI (13.9%), while the most similar results were obtained for spectral reflectance (5.4%) and chlorophyll content (6.6%). Methods should therefore be selected according to the biophysical parameter under study.
To sum up, measurements were performed using four approaches: the linear transect, the cross transect, the square IGIK method and single measurements. Parameters with high variability (LAI, soil moisture) should be collected from many samples across the test site, whereas parameters characterized by low variability (spectral reflectance, chlorophyll content) can be measured at fewer points within the test site.
The research work was conducted within the project financed by the National Centre for Research and Development under Contract No. 2016/23/B/ST10/03155, titled "Modeling of carbon balance at wetlands applying the newest ESA satellite missions Sentinel-1/2/3".
The wetlands in the Prairie Pothole Region (PPR) are of critical importance as habitat and breeding grounds for the North American waterfowl population. Like many wetlands, they are threatened by climate change and intensifying agriculture. Monitoring these wetlands therefore provides important information for landscape management. Pothole wetlands range in size from a few square metres to several square kilometres. Larger wetlands covered by open water surfaces can be monitored using optical or radar satellite imagery. Smaller wetlands (< ca. 1 ha) are more challenging to delineate due to the moderate spatial resolution of most satellite sensors (typically in the range of a few tens of metres) and due to vegetation frequently emerging from the water surface of shallow water bodies. However, these small wetlands have been shown to be of high importance as habitats as well as linkages between larger wetlands, thus contributing to hydrological and biological connectivity. Radar imagery has been used for detecting water underneath vegetation based on double-bounce scattering leading to high radar returns; however, this effect is highly dependent on factors such as wavelength, polarisation, incidence angle, and vegetation density and height relative to the water surface. Hence, information gathered in situ is often required to constrain retrieval models.
In this study, Sentinel-1 dual-polarised synthetic aperture radar (SAR) time series acquired between 2015 and 2021 are used in combination with water level measurements from a number of permanent and temporary wetlands in North Dakota. The study period covers hydrometeorological conditions ranging from drought to flooding. A Bayesian framework is applied to integrate high-resolution topographic data to constrain water delineation in areas with low contrast. Dual-polarised SAR backscatter from open and vegetated wetlands is compared with in-situ water level measurements.
The results for open water bodies show that small and large wetlands differ in seasonality as well as in their response to wet and dry years. While large water bodies are mostly stable throughout the year, many small water bodies fall dry during the summer months, when evaporation exceeds moisture supply. During wet periods, prairie hydrological processes, such as merging between neighbouring wetlands, can be observed. The effects of drought years, such as the exceptionally dry year 2021, are visible across wetland size classes; however, larger wetlands (> ca. 8 ha) tend to be more stable than smaller ones. First results of the comparison between backscatter and water level generally show an increase of co-polarised (VV) backscatter in temporary wetlands with falling water levels, whereas the cross-polarised (VH) signal tends to be more stable. This is in line with our expectations, as double-bounce scattering mainly affects the co-polarised radar signal. The results demonstrate the potential of dual-polarised Sentinel-1 image time series for high-resolution monitoring of prairie wetlands. Limitations of this study are related to wind, which inhibits correct open-water extent retrieval, and to the rather long acquisition interval of 12 days over the PPR, which results from the observation strategy of Sentinel-1.
Wetlands and inundated areas cover only a few percent of the Earth's surface. However, they play an important role in climate variability; in particular, an important fraction of atmospheric methane is emitted in these areas [1]. Understanding CH4 emission from wet, saturated and inundated areas is important for explaining the intra-year variability of atmospheric methane concentration and its variations over the past decades. Therefore, there is a need to produce data records that can reliably capture variability linked to climate variations [2].
The goal of this work is to model methane emissions from wetlands and inundated areas in a simple, data-driven way. The calculation is dynamic (monthly) and global (we target 0.25° resolution). To obtain temporally and spatially realistic global CH4 emissions, the continuous global input parameters (soil carbon content, soil temperature, water extent or water table, etc.) are derived as much as possible from measured or satellite-derived datasets, not from climate model outputs. Preliminary work focuses on the choice of the parameters and datasets to be used in the scheme, selecting the most relevant and up-to-date ones. For water extent, a complete database is used that contains all types of wetlands and inundated areas: wetlands (incl. peatlands), open-water extents, and rice paddies. This database mainly relies on GIEMS-2, developed by Prigent et al. (2020) [3], which is being extended until 2020.
To calculate CH4 emission fluxes, a simple scheme is used, similar to that of Gedney et al. (2004) [4]:
F_CH4 = k_CH4 · f_w · C_s · Q_10(T_soil)^((T_soil − T_0)/10)
where F_CH4 is the methane emission flux from wetlands, k_CH4 a global constant, f_w the wetland fraction, C_s the soil organic carbon, Q_10 a temperature sensitivity factor, T_soil the soil temperature in K, and T_0 a constant equal to 273.15 K. This form of scheme is similar to the methane production equations found in climate models such as CTESSEL from ECMWF, ORCHIDEE-WET [5], or JULES [6]. Transport through the water (ebullition, diffusion and plant-mediated transport) could be added to complete this simple scheme.
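A direct transcription of the scheme above into a small function, treating Q_10 as a constant for simplicity; the parameter values in the example call are placeholders, not calibrated ones.

```python
# Simple wetland CH4 flux following the Gedney et al. (2004)-style scheme.
import numpy as np

def methane_flux(f_w, c_s, t_soil, k_ch4=1.0, q10=3.0, t0=273.15):
    """F_CH4 = k_CH4 * f_w * C_s * Q10^((T_soil - T_0)/10).
    f_w: wetland fraction (0-1); c_s: soil organic carbon; t_soil: K.
    k_ch4 and q10 are placeholder values, not calibrated constants."""
    return k_ch4 * f_w * c_s * q10 ** ((np.asarray(t_soil) - t0) / 10.0)

# e.g. a 30 % inundated 0.25-degree cell with soil at 283.15 K
print(methane_flux(f_w=0.3, c_s=12.0, t_soil=283.15))
```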
The global distribution and inter-annual variability of the resulting emissions will be presented and discussed in the context of other existing methane estimates, such as in situ flux databases (e.g., FLUXNET-CH4 [7], BAWLD-CH4 [8]) or current greenhouse gas inventories [9].
[1] Saunois et al.: The Global Methane Budget 2000–2017, Earth Syst. Sci. Data, 12, 1561–1623, https://doi.org/10.5194/essd-12-1561-2020, 2020
[2] Kirschke, S., Bousquet, P., Ciais, P., Saunois, M., Canadell, J. G., Dlugokencky, E. J., et al., Three decades of global methane sources and sinks. Nature Geoscience, 6(10), 813–823. https://doi.org/10.1038/ngeo1955, 2013.
[3] Prigent, C., Jimenez, C., & Bousquet, P., Satellite-derived global surface water extent and dynamics over the last 25 years (GIEMS-2). Journal of Geophysical Research: Atmospheres, 125, e2019JD030711. https://doi.org/10.1029/2019JD030711, 2020.
[4] Gedney, N., P. M. Cox, and C. Huntingford, Climate feedback from wetland methane emissions, Geophys. Res. Lett., 31, L20503, doi:10.1029/2004GL020919, 2004.
[5] Ringeval, B., de Noblet‐Ducoudré, N., Ciais, P., Bousquet, P., Prigent, C., Papa, F., Rossow, W. B., An attempt to quantify the impact of changes in wetland extent on methane emissions on the seasonal and interannual time scales, Global Biogeochemical Cycles, vol. 24, doi:10.1029/2008GB003354, 2010.
[6] Clark, D. B., Mercado, L. M., Sitch, S., Jones, C. D., Gedney, N., Best, M. J., Pryor, M., Rooney, G. G., Essery, R. L. H., Blyth, E., Boucher, O., Harding, R. J., Huntingford, C., and Cox, P. M.: The Joint UK Land Environment Simulator (JULES), model description – Part 2: Carbon fluxes and vegetation dynamics, Geosci. Model Dev., 4, 701–722, https://doi.org/10.5194/gmd-4-701-2011, 2011.
[7] Delwiche et al., FLUXNET-CH4: A global, multi-ecosystem dataset and analysis of methane seasonality from freshwater wetlands. Earth System Science Data, https://doi.org/10.5194/essd-13-3607-2021, 2021.
[8] Kuhn, M. A., Varner, R. K., Bastviken, D. J., Crill, P., MacIntyre, S., Turetsky, M. R., Walter, K., Anthony, McGuire, A. D., Olefeldt, D., BAWLD-CH4: Methane fluxes from Boreal and Arctic Ecosystems. Arctic Data Center. https://doi.org/10.18739/A2DN3ZX1R, 2021.
[9] Crippa, M., Guizzardi, D., Muntean, M., Schaaf, E., Dentener, F., van Aardenne, J. A., Monni, S., Doering, U., Olivier, J. G. J., Pagliari, V., and Janssens-Maenhout, G.: Gridded emissions of air pollutants for the period 1970–2012 within EDGAR v4.3.2, Earth Syst. Sci. Data, 10, 1987-2013, https://doi.org/10.5194/essd-10-1987-2018, 2018.
Water hyacinth (Pontederia crassipes) is one of the most invasive aquatic weeds in the world; it significantly affects aquatic life and causes loss of biodiversity. Most of the affected countries are tropical and sub-tropical, such as India. The environmental concerns caused by the invasion of exotic species across the globe have led to intense research on this and similar alien species of plants and weeds [1]. In India, the government initiative National Wetland Conservation Programme (NWCP) placed the problem of water hyacinth (WH) among the most serious threats addressed by the National Lake Conservation Plan. Water bodies such as Katraj Lake in Pune, Pichhola Lake in Udaipur, Ulsooru Lake in Bengaluru and Patancheru Lake in Hyderabad are facing serious WH invasion. In urban India, water hyacinth removal and remediation result in huge expenditure of public money, and hence different methods have been explored from time to time [2, 3].
Researchers have taken different approaches to estimating WH presence and its growth pattern. The most common method has been ground survey. Although such an approach is accurate to some extent because it is periodic over a long time, it requires substantial manpower and government resources, has limited coverage, and cannot provide continuous monitoring and analysis [4, 5]. Alternative solutions include remote sensing methods [3], e.g., the use of satellite imagery (multi-spectral or SAR), which is not always sufficient to detect and monitor WH growth on small water bodies due to its lower resolution. In contrast, multi-spectral drone imagery provides very high-resolution data, beneficial for detecting WH presence and its growth and thus enabling better control measures. However, drones are more vulnerable to weather conditions than satellites: if climatic conditions are unfavourable, a drone cannot manoeuvre appropriately or gather reliable data or imagery. There are also regulatory constraints affecting drone operation; a local government unit may restrict the use of drones during an ongoing military conflict, and drones are blocked from entering restricted zones such as military facilities conducting active training. Given these concerns, satellite imagery remains attractive for environmental applications. This paper proposes the use of transfer learning from very high-resolution multi-spectral drone data to lower-resolution satellite data. One possible solution to overcome the challenges of drone usage is to collect satellite data close in time to drone data and apply transfer learning between the drone and satellite domains. A standard transfer learning approach from one sensor to another is applied to generate large optical drone-like data from satellite SAR data. When multispectral drone data are not available, the generated drone-like data can serve as alternative data that aid in water hyacinth growth monitoring.
In this work, we selected Patancheru Lake in Hyderabad, India as our study area; it has a notable WH presence, largely due to pollution from waste water of a nearby industrial hub. Patancheru Lake (lat: 17°31’19.42”N; long: 78°15’50.39”E) is a peri-urban water body located about 31 km from the city center of Hyderabad on national highway (NH) 65. The lake receives anthropogenic pollution from the peri-urban settlements located in its catchment. Very high-resolution multispectral drone and spot-light single-channel ICEYE radar data were collected every month starting from January 2021. The multi-spectral data were collected by flying an unmanned aerial vehicle (UAV), a quad-copter drone (Model V, CBAI Technologies, Hyderabad, India), equipped with a MicaSense RedEdge multi-spectral camera. The spectral data were collected at an altitude of 80 meters, at a speed of 6.5 m/s, with a ground sampling distance of 5.56 cm and a capture rate of one capture per second for all bands, stored as 12-bit RAW files. Spectral bands include Blue (475 nm), Green (560 nm), Red (668 nm), Red edge (717 nm) and near-Infra-Red (842 nm).
Optical drone multispectral remote sensing data offer a very high spatial resolution of up to 5 cm, with detailed patterns of water hyacinth growth. ICEYE synthetic aperture radar (SAR) satellite imagery offers a high resolution of up to 25 cm. However, due to the imaging mechanism of SAR and the speckle noise, it is difficult for untrained people to recognize water hyacinth growth patterns visually in SAR images. Inspired by the image-to-image translation performance of Generative Adversarial Networks (GANs) [6], a transfer learning approach from ICEYE to drone data is applied to generate large optical drone-like images. At the moment, we are testing the image-to-image translation between multispectral drone and ICEYE satellite data; this will also be tested between drone data and freely available medium-resolution satellite SAR data from Sentinel-1. The main steps of SAR-to-optical image translation are as follows. First, the 5 cm resolution multispectral drone data are upscaled to the 50 cm resolution of ICEYE. We also transform the ground truth labels at 5 cm to obtain an upscaled drone dataset. Then, the large drone and satellite SAR images are split into small patches. In the third step, Cycle GAN [7] is used to translate the SAR patches to optical image patches; the translation converts single-channel ICEYE SAR images to multispectral drone-like images. The main reason for selecting Cycle GAN among several deep learning techniques is that it preserves structural information. Following this approach, one can use the upscaled drone dataset to train a water hyacinth detection model that can be applied directly to ICEYE images. Finally, the optical image patches are stitched together to generate the large optical image. The experiments are in progress, and the results will be presented at the conference and reported in the full paper.
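A minimal sketch of the patch split-and-stitch step around the GAN translation, with a stand-in function in place of the trained Cycle GAN generator; the patch size and band count are assumptions, not the project's actual settings.

```python
# Splitting a large SAR scene into patches, translating each patch and
# stitching the results back into one multi-band image.
import numpy as np

def split_patches(img, size=256):
    """Non-overlapping patches in row-major order (edges assumed to divide evenly)."""
    h, w = img.shape[:2]
    return [img[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

def stitch_patches(patches, h, w, size=256, channels=5):
    """Reassemble translated patches into a (h, w, channels) mosaic."""
    out = np.zeros((h, w, channels), dtype=patches[0].dtype)
    k = 0
    for i in range(0, h - size + 1, size):
        for j in range(0, w - size + 1, size):
            out[i:i + size, j:j + size] = patches[k]
            k += 1
    return out

sar = np.random.rand(1024, 1024).astype(np.float32)          # toy single-channel SAR scene
fake_generator = lambda p: np.repeat(p[..., None], 5, axis=-1)  # stand-in for the CycleGAN G
translated = [fake_generator(p) for p in split_patches(sar)]
optical_like = stitch_patches(translated, *sar.shape)
print(optical_like.shape)  # (1024, 1024, 5): five drone-like bands
```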
References
[1] C. S. Elton, The ecology of invasions by animals and plants. Springer Nature, 2020.
[2] J. H. Bock, “Productivity of the water hyacinth eichhornia crassipes (mart.) solms,” Ecology, vol. 50, no. 3, pp. 460–464, 1969.
[3] A. Datta, S. Maharaj, G. N. Prabhu, D. Bhowmik, A. Marino, V. Akbari, S. Rupavatharam, J. A. R. Sujeetha, G. G. Anantrao, V. K. Poduvattil et al., “Monitoring the spread of water hyacinth (pontederia crassipes): challenges and future developments,” Frontiers in Ecology and Evolution, vol. 9, 2021.
[4] A. Villamagna and B. Murphy, “Ecological and socio-economic impacts of invasive water hyacinth (eichhornia crassipes): a review,” Freshwater Biology, vol. 55, no. 2, pp. 282–298, 2010.
[5] K. Kipng’eno, “Monitoring the spread of water hyacinth using satellite imagery: a case study of Lake Victoria,” Ph.D. dissertation, University of Nairobi, 2019.
[6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” in Proc. Adv. Neural Inf. Process. Syst., vol. 3, 2014, pp. 2672–2680.
[7] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proc. IEEE Int. Conf. Comput. Vis., Oct. 2017, pp. 2242–2251.
Products relevant to wetland monitoring from the ESA Scout-2 HydroGNSS mission:
Scout missions are a new element in ESA’s FutureEO Programme, demonstrating science from small satellites. The aim is to tap into the New Space approach, targeting three years from kick-off to launch within a budget of €30m, including launch and commissioning of the space and ground segments. The Scout missions are non-commercial and scientific in nature; data will be made available freely using a data service delivery approach. HydroGNSS has been selected as the second ESA Scout Earth Observation mission, primed by Surrey Satellite Technology Ltd with support from a team of scientific institutions. Its implementation kicked off in Q4 2021, and launch is planned for H2 2024.
The microsatellite uses established and new GNSS-Reflectometry (GNSS-R) techniques to retrieve four land-based hydrological climate variables: soil moisture, freeze/thaw, inundation and biomass. The initial project is for a single satellite in a near-polar sun-synchronous orbit at 550 km altitude that will approach global coverage monthly, but an option to add a second satellite has been proposed that would halve the time to cover the globe, and eventually a future constellation could be affordably deployed to achieve daily revisits.
GNSS-R is a relatively novel technique that uses the navigation signals transmitted by the Global Navigation Satellite System (GNSS) for remote sensing purposes: after these signals bounce off the Earth’s surface, they are collected by dedicated GNSS-R receivers and analysed to extract the geophysical information of the reflected ‘echo’. Earlier GNSS-R spaceborne missions such as UK TDS-1 and NASA CyGNSS have provided quality data that prove the sensitivity of these L-band (~19 cm wavelength) reflected signals to surface inundation [e.g., 1], even when the flooded areas lie under thick vegetation canopies [e.g., 2]. The forward scattering geometry particular to GNSS-R is especially sensitive to the highly reflective and smooth surfaces of inundated terrain, and it complements techniques at higher frequencies and/or other geometries, such as the back-scattering geometry characteristic of other radar-based sensors (SARs, scatterometers, radar altimeters). The reflected GNSS signals have also proven sensitive to above-ground biomass (AGB), as the attenuation induced by vegetation modulates the measured reflectivity [e.g., 3].
Furthermore, HydroGNSS will operate, for the first time, a receiver channel that outputs very high sampling rate complex (phase and amplitude) data corresponding to the reflected electromagnetic phasor. This is called the ‘coherent channel’.
Two of the level-2 baseline products to be provided by HydroGNSS are relevant to monitoring wetlands: firstly, the surface inundation product will be capable of identifying flooded areas even under thick vegetation canopies, with the increased resolution provided by the coherent channel; secondly, the forest AGB could be a parameter of interest for monitoring low-latitude and forested wetlands. Beyond the baseline products, the coherent channel opens the possibility of investigating other demonstration products, such as precise altimetry across some flooded areas, using the range information embedded in the electromagnetic field of the reflected signals. As will be described in this presentation, the HydroGNSS mission will enable the study of the potential application of these combined parameters for wetland monitoring.
[1] Nghiem et al., 2016, doi:10.1002/2016EA000194
[2] Rodriguez-Alvarez et al., 2019, doi:10.3390/rs11091053
[3] Santi et al., 2020, doi:10.1109/JSTARS.2020.2982993
Wetlands provide a range of benefits to humankind at different scales, from carbon sequestration and biodiversity conservation to water and food provision. Wetlands cover roughly 7% of the African continent, and due to their fertile soils and higher water availability, they are increasingly being developed for agricultural use to counteract dependency on global food markets and reduce hunger and poverty. Yet, agricultural wetland development is among the main drivers of wetland degradation in Africa. For the sustainable use of wetlands, decision-makers and managing institutions need quantified information on wetland ecosystems to provide the necessary knowledge basis for their management, which is particularly incomplete for the African continent. Remote sensing offers significant potential for wetland mapping, inventorying and monitoring. Previous work employing Earth observation for wetland management in the context of agricultural development often revolved around specific land uses and usually required ancillary data on the crops, which can be difficult to obtain, in particular where small-scale farmers’ fields are fragmented, used inconsistently and where intercropping is practiced. In contrast, the Wetland Use Intensity (WUI) indicator is not specific to a particular crop and requires little ancillary data. The indicator is based on the Mean Absolute Spectral Dynamics (MASD), a cumulative measure of reflectance change across a time series of satellite images. The WUI depicts the intensity of wetland management practices, such as harvesting, intercropping, fertilizer application, flooding or burning, that are reflected in the land cover.
We therefore implemented an automated approach for WUI calculation and developed a method of quantitatively comparing WUI values to a wetland ecosystem integrity scoring system. We leveraged cloud-computing technology through the Google Earth Engine (GEE) platform by using a Sentinel-2 surface reflectance image collection and by adapting the S2cloudless algorithm to the GEE JavaScript API. We established a regular time series over a pseudo-year 2020 with bi-monthly median mosaics from July 2019 to June 2021 as the basis of MASD calculation. In order to compare it to a meaningful ground reference, we selected two datasets of 250x250 m field plots across Rwanda assessed according to the WET-Health approach in 2013 within the GlobE project* and in 2018 within the DeMo-Wetlands** project. WET-Health is an approach for rapid wetland condition assessment, which accounts for wetland complexity by using a scoring system for wetland hydrology, geomorphology and vegetation. The datasets were tested for spectral and geometric comparability to the observation period. A surface water dynamics layer was derived from individual flood layers based on thresholding of Sentinel-1 imagery, and the resulting flooding regime assigned to each plot. We then evaluated the correlation between WUI values and WET-Health scores, taking into account the flooding regime.
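A hedged sketch of the MASD computation underlying the WUI, assuming a reflectance stack ordered as (time, band, y, x); in the actual workflow this runs on bi-monthly Sentinel-2 mosaics in GEE rather than on NumPy arrays.

```python
# Mean Absolute Spectral Dynamics: mean absolute reflectance change between
# consecutive composites, averaged over bands, as a per-pixel intensity proxy.
import numpy as np

def masd(stack):
    """stack: reflectance array shaped (time, band, height, width)."""
    diffs = np.abs(np.diff(stack, axis=0))  # change between consecutive dates
    return diffs.mean(axis=(0, 1))          # average over time steps and bands

stack = np.random.rand(6, 4, 100, 100).astype(np.float32)  # 6 bi-monthly mosaics, 4 bands
wui_proxy = masd(stack)
print(wui_proxy.shape, float(wui_proxy.mean()))
```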
The results suggest that the adapted WUI indicator is informative and applicable for wetland management. The possibility to measure use intensity as a proxy for ecosystem condition is useful to stakeholders in wetland management, both from the agriculture and from the conservation sides.
* Funded by the German Federal Ministry of Education and Research, FKZ 031A250 A-H
** Funded by the German Federal Ministry for Economic Affairs and Energy, FKZ 50EE1537
Wetlands are biodiversity hotspots that offer several ecosystem services necessary for human well-being. While playing a key role in global climate regulation, wetlands are sensitive to anthropogenic disturbances and climate change and, therefore, are currently endangered. This raises the need for accurate and up-to-date information on the spatial and temporal variability of wetlands and the climate and anthropogenic pressures that these ecosystems are facing.
Located in the central portion of South America, the Pantanal biome, distributed over three countries (Brazil, Bolivia and Paraguay), is the largest tropical wetland in the world. With more than 84% of its territory preserved, the Brazilian portion of the Pantanal biome is also the wetland with the largest area of natural vegetation in the world. It is a seasonally flooded wetland composed of several interconnected ecosystems shaped by natural and anthropogenic factors. Since 2019, this region has been suffering a prolonged drought that was exacerbated in 2020. This drought was caused by the reduced transport of warm and humid summer air from Amazonia into the Pantanal, which led to the lowest rainfall of the 1982–2020 period during the summers of 2019 and 2020. Severe and prolonged drought events are becoming more frequent in the Pantanal. Such a scenario favored the occurrence of natural disasters and led to the 2020 Pantanal fire crisis, during which remote sensing-based burned area estimates showed that up to one-third of the Pantanal was burned. However, the extent of burned area during this crisis varied widely depending on the burned area product. While global burned area products have been successfully generated at coarser spatial resolution (250–500 m) using time series of MODIS observations (e.g., the MCD64A1 collection 6.0 and Fire_cci version 5.1 products), the use of medium spatial resolution images, especially Landsat-derived images (30 m), to automatically map burned area (e.g., the MapBiomas Fire collection 1.0 and GABAM products) remains challenging due to the smaller number of observations available (up to two per month, i.e., a 16-day acquisition frequency). Therefore, MapBiomas Fire and GABAM burned area products usually underestimate burned area when compared to MODIS-based products. This underestimation raises the possibility of using the medium spatial resolution images of the Copernicus Sentinel-2 missions to develop a more accurate burned area product, combining higher spatial resolution than MODIS-based products with a higher number of observations than Landsat-based products (up to six per month when using Sentinel-2A and 2B).
In this context, the objective of this paper was to assess the use of Sentinel-2 images to map burned areas that occurred in the Brazilian portion of the Pantanal biome in 2020. For this purpose, we applied the Linear Spectral Mixing Model (LSMM) to Sentinel-2 MultiSpectral Instrument (MSI) images to generate vegetation, soil and shade fraction images. Because the characteristics of the shade fraction image enhance burned areas, it can be used as a burn index. We obtained the Sentinel-2 monthly composites for 2020 from the Google Earth Engine (GEE) platform, building, for each month, composites from the endmembers with the highest fraction values, in which the greatest shade values highlight areas occupied by water bodies and burned areas. We were able to automatically map the burned areas in the study area, especially during the dry season of the Pantanal biome (April to August). Our results show an estimate of 53,510 km2 burned in the Brazilian portion of the Pantanal during 2020, which severely affected the flora and fauna, causing biodiversity loss in this biome. The burned area estimate derived from Sentinel-2 images for 2020 in the Brazilian Pantanal biome was higher than those derived from MODIS-based and Landsat-based burned area products. While MCD64A1 estimated 35,837 km2 burned in the Pantanal in 2020, MapBiomas Fire and GABAM estimated, respectively, 23,372 km2 and 14,307 km2. The MCD64A1 estimate was 33% lower than our result, while the MapBiomas and GABAM estimates were, respectively, 56% and 73% lower than ours. The Fire_cci product is not yet available for 2020; however, an estimate close to that of MCD64A1 (35,837 km2) is expected, since the annual average burned area estimated by Fire_cci in the Pantanal between 2002 and 2019 (8,642 km2) was less than 1% higher than that estimated by MCD64A1. We conclude that the proposed approach based on Sentinel-2 images presents advantages over the burned area products currently available for the Pantanal and can therefore refine burned area estimation at the regional scale. It can also be used as a reference for calibrating global burned area products. This calibration is of utmost importance because MODIS and Landsat images are available for a longer time series (since 2000 and 1985, respectively) than Sentinel-2 (since 2015); therefore, more accurate long-term spatial and temporal patterns of burned area can be obtained. Our results have the potential to improve estimates of trace gases and aerosols associated with biomass burning, since global biomass burning inventories are widely known to have regional-scale biases.
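To illustrate the LSMM step, the sketch below unmixes a single pixel against three endmember spectra with non-negative least squares; the endmember reflectances and the band set are invented for the example, not the study's actual spectra.

```python
# Linear spectral unmixing of one pixel into vegetation/soil/shade fractions.
import numpy as np
from scipy.optimize import nnls

# endmember reflectances for a few Sentinel-2 MSI bands (illustrative numbers)
E = np.array([
    [0.03, 0.05, 0.30, 0.25],  # vegetation
    [0.12, 0.18, 0.25, 0.35],  # soil
    [0.01, 0.01, 0.02, 0.02],  # shade
]).T                            # shape (bands, endmembers)

pixel = np.array([0.05, 0.08, 0.18, 0.18])  # observed reflectance
fractions, _ = nnls(E, pixel)               # non-negative least squares
fractions /= fractions.sum()                # normalise fractions to sum to 1
print(dict(zip(["vegetation", "soil", "shade"], fractions.round(3))))
```

A high shade fraction in such an unmixing flags water bodies and burn scars, which is what allows the shade image to serve as a burn index in the monthly composites.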
Recent mangrove preservation efforts have set ambitious targets to conserve 30% of the world’s mangroves within the next decade. However, these efforts often lack the monitoring capacity to identify the success or failure of protected areas (PAs) in real time, creating a gap between initial targets and the capacity to enforce them at the local scale. Here, we present the first global real-time monitoring platform for protected mangrove regions. We map past and current threats to mangroves within PAs at 30-m resolution, enabling local to national decision-makers both to identify hotspot areas for mobilization and to drive policy change to prevent loss. In documenting loss and threats across all mangrove protected areas globally, we create a new standard of transparency and accountability as we move towards tracking progress on national-to-global conservation goals. We provide real-time knowledge on the state of conservation efforts through publicly accessible and understandable tools, bridging the gap between past studies of mangrove loss drivers and actionable decision-making capacity on the ground. Broadly, we aim to prevent a system of “paper parks” in mangrove PAs, in which a region is protected by law but the enforcement and monitoring tools to ensure its success are lacking.
Through our remote sensing-based analysis of mangrove loss drivers within global protected areas, we find that conservation efforts have been largely successful in preventing human-driven loss over the last two decades. While approximately 60% of global mangrove losses from 2000–2016 resulted from anthropogenic threats such as conversion to aquaculture and agriculture, settlement, and clear-cutting, only 25% of losses within protected regions resulted from these drivers. Worldwide, three times as many PAs experienced natural loss as human-driven loss. Protected areas across Southeast Asia comprise the vast majority of these anthropogenic losses, with conversions to commodities comprising over 90% of anthropogenic PA losses throughout the region. While future conservation efforts must focus on finding and mitigating these exact hotspots of loss, we suggest that plans must primarily consider rehabilitation aimed towards mitigating damage from climatic stressors such as erosion and extreme weather events. Our global mangrove PA monitoring platform enables decision-makers to quickly identify these hotspots of human-driven loss, as well as quantify the PAs most vulnerable to future damage from these climatic threats.
Here, we present a model for transitioning quantitative analysis for SDG 6 (Clean Water and Sanitation) and SDG 15 (Life on Land) towards active decision-making tools to improve coastal conservation outcomes. Our PA monitoring platform enables users to efficiently gain a general understanding of the overall success of their PAs throughout the course of PA implementation. These tools enable scientific results on each SDG 6 and 15 indicator to be transferred into on-the-ground plans for targeting certain hotspot regions over others. Future efforts may also seek to integrate human wellbeing-oriented SDGs such as SDG 1 (No Poverty), SDG 8 (Decent Work and Economic Growth), and SDG 11 (Sustainable Cities and Communities) into the PA effectiveness measures, providing a more holistic tool for policymakers to balance both human and natural needs in conservation planning efforts. Ultimately, we seek to pioneer new strategies for transitioning remote sensing insights into scalable platforms to ensure high levels of communication and transparency across all scales in conservation management.
Land cover change detection is challenging, as it can be caused by a large variety of processes, such as urbanisation, forest regrowth or land abandonment. It may also be confounded with spurious change, such as interannual variability due to droughts or fires (Gómez et al. 2016). There is also a mismatch between land cover and the reflectance that is captured by optical satellite sensors. Algorithms for land cover change detection often overestimate change because local disturbances that do not constitute a permanent land cover change are also captured by the algorithms.
Even more challenging is detecting gradual land cover change. This is possible using land cover fraction or probability maps, which estimate the proportion or likelihood of each land cover class per pixel and can therefore track both abrupt and gradual changes over time. The challenge comes from the uncertainty of these estimates, which are usually obtained from a regressor, as they can differ substantially between years and seasons. This often results in an overestimation of land cover change.
A potential solution to this problem is to use a change detection algorithm that uses time series to produce long-term trend information, such as BFAST Lite (Masiliūnas et al. 2021a). However, these algorithms traditionally use a vegetation index as an input, which limits them to detecting change between vegetated and non-vegetated land cover classes. To tackle this limitation, we propose a combination of a change detection algorithm with a land cover fraction time series used as input. Ideally this time series is dense, to make use of the algorithm's capability of modelling seasonal changes and to tolerate some noise from fraction uncertainty in the time series. The output model is then capable of capturing both gradual change, through tracking trends of each land cover class, and abrupt change, when there is a sudden increase or decrease in a given land cover class fraction.
In this study, we implement such a workflow by using the full archive of Landsat 8 Surface Reflectance as an input to a Random Forest regression model, which predicts land cover fractions for every land cover class (Masiliūnas et al. 2021b) for every time step (every 16 days). The resulting land cover fraction time series is then used as an input into the BFAST Lite algorithm. If there is a significant jump in the time series of land cover fractions, the algorithm detects it and separates the time series into multiple segments. If there is no significant jump in a segment, the fitted model smooths out the observations to minimise the effect of noise and interannual variability. The result is a consistent, dense time series of each land cover fraction. Finally, the fractions are normalised so that they all sum to 100%.
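A simplified sketch of the post-processing described above: per-pixel land cover fraction series are smoothed within break-free segments and re-normalised to sum to 100%. Break detection itself (here done by BFAST Lite) is assumed to have been run already, so the break indices are given.

```python
# Segment-wise smoothing and normalisation of a land cover fraction series.
import numpy as np

def smooth_and_normalise(fractions, breaks):
    """fractions: (time, classes) array in percent; breaks: sorted time indices."""
    out = fractions.astype(float).copy()
    edges = [0, *breaks, len(fractions)]
    for a, b in zip(edges[:-1], edges[1:]):
        t = np.arange(a, b)
        for c in range(out.shape[1]):            # linear fit per segment and class
            slope, icpt = np.polyfit(t, out[a:b, c], 1)
            out[a:b, c] = slope * t + icpt
    out = np.clip(out, 0, None)
    return 100.0 * out / out.sum(axis=1, keepdims=True)  # re-normalise to 100 %

series = np.random.rand(46, 4) * 100             # ~2 years of 16-day fractions, 4 classes
print(smooth_and_normalise(series, breaks=[23]).sum(axis=1)[:3])  # -> ~100 each
```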
The model output is validated using a land cover change dataset consisting of over 60,000 100x100 m areas with annotated land cover fractions that was collected for the creation of the Copernicus Global Land Services Land Cover 100 m product (Tsendbazar et al. 2020). The proposed approach is compared to the traditional way of using the change detection algorithms using a vegetation index as an input, as well as to using a regressor output directly, without a change detection algorithm.
The proposed approach leads to the creation of a set of global land cover fraction maps that would be updated every 16 days and would be internally consistent, with less overestimation of land cover change, and with smooth transitions at times of little change and sharp transitions in times of abrupt change. Such maps would be very valuable for climate change modelling, forest disturbance and degradation tracking, tracking the effect of disasters such as typhoons and forest fires over time, and would help with national land cover management efforts globally.
References:
Gómez, C., White, J. C., & Wulder, M. A. (2016). Optical remotely sensed time series data for land cover classification: A review. ISPRS Journal of Photogrammetry and Remote Sensing, 116, 55–72. https://doi.org/10.1016/j.isprsjprs.2016.03.008
Masiliūnas, D., Tsendbazar, N.-E., Herold, M., & Verbesselt, J. (2021a). BFAST Lite: A Lightweight Break Detection Method for Time Series Analysis. Remote Sensing, 13(16), 3308. https://doi.org/10.3390/rs13163308
Masiliūnas, D., Tsendbazar, N.-E., Herold, M., Lesiv, M., Buchhorn, M., & Verbesselt, J. (2021b). Global land characterisation using land cover fractions at 100 m resolution. Remote Sensing of Environment, 259, 112409. https://doi.org/10.1016/j.rse.2021.112409
Tsendbazar, N.-E., Tarko, A., Li, L., Herold, M., Lesiv, M., Fritz, S., & Maus, V. (2020). Copernicus Global Land Service: Land Cover 100m: version 3 Globe 2015-2019: Validation Report. Zenodo. https://doi.org/10.5281/zenodo.3938974
The Species Distribution Model (SDM) approach typically uses land use land cover (LULC) variables together with other predictor variables to project and map species distributions at the landscape level. While LULC may not change significantly from year to year, cumulative change over time may substantially impact species interaction, migration and distribution at a landscape scale. Hence, this study sought to explore the feasibility of remotely sensed data for mapping the change in the spatial distribution of the tomato leafminer, Tuta absoluta (Meyrick) (Lepidoptera: Gelechiidae), in Kenya's major tomato production counties. Firstly, the study classified a time series of LULC from 2005 to 2020 at 5-year intervals in Kenya's major tomato production counties with Google Earth Engine (GEE), using a Random Forest algorithm with an overall accuracy greater than 0.9 and a Kappa of 0.92. The LULC pattern of the studied area was dominated by grass cover, covering about 50% of the total area, followed by cropland (38%). Secondly, in the Maxent machine learning algorithm, the classified LULC map was combined with 19 non-correlated bioclimatic variables and T. absoluta occurrence data to map the suitability area of T. absoluta and classify it as Very low (0–0.2), Low (0.2–0.4), Moderate (0.4–0.6), High (0.6–0.8) or Very high (0.8–1). Finally, the generated maps were subjected to simple statistical analysis to determine the trend in T. absoluta infestation classes. The results suggest both increases and decreases across infestation classes: specifically, more than a 3% gain in area in some classes from 2015 to 2020, with a loss of more than 4% over the same period. The findings will improve the utilisation and application of remote sensing data in ecology to create accurate decision support maps that assist agricultural practitioners in targeting appropriate pest infestation areas for the deployment of control strategies.
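As an illustration of the final classification and trend step, the sketch below bins a Maxent-style suitability raster into the five classes above and tallies class areas for two dates; the rasters and pixel size are synthetic assumptions.

```python
# Binning 0-1 suitability into five classes and comparing class areas over time.
import numpy as np

BINS = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
LABELS = ["Very low", "Low", "Moderate", "High", "Very high"]

def class_areas(suitability, pixel_area_ha=0.09):  # e.g. 30 m pixels = 0.09 ha
    idx = np.digitize(suitability, BINS[1:-1], right=True)
    return {lab: float((idx == k).sum()) * pixel_area_ha
            for k, lab in enumerate(LABELS)}

suit_2015 = np.random.rand(500, 500)               # synthetic suitability rasters
suit_2020 = np.clip(suit_2015 + np.random.normal(0, 0.05, suit_2015.shape), 0, 1)
a15, a20 = class_areas(suit_2015), class_areas(suit_2020)
print({k: round(a20[k] - a15[k], 1) for k in LABELS})  # area change per class (ha)
```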
SAR land cover mapping experience in the HR Landcover CCI+ ECV project
A. Sorriso, D. Marzi, P. Gamba
Because of their availability in any weather conditions and their ability to capture the geometric and water-related properties of the Earth's surface, Synthetic Aperture Radar (SAR) time series are increasingly used for land cover mapping and environmental monitoring. Specifically, the huge dataset provided by the Sentinel-1 constellation is particularly useful for high-resolution mapping anywhere in the world. In recent years, a wide variety of SAR applications have benefited from the large stacks of Sentinel-1 products, and related processing and analysis methods have multiplied in the field of remote sensing. The aim of this work is to describe the final version of the processing chain designed, tested and implemented in an operational system for high-resolution global land cover mapping in the framework of the HR Land Cover CCI+ ECV project. The processing chain was used for two specific tasks:
a) the extraction of a so-called static map for the year 2019;
b) the extraction, where the availability of past SAR data allowed it, of additional historical land cover maps every five years from 2015 backwards.
For the first task, a time series of Sentinel-1 images was considered as input, while for the second task, data from the ASAR sensor on board the ENVISAT satellite or from the ERS-1 and ERS-2 satellites were considered, unfortunately with considerably more limited time series and geographical coverage.
The processing chain for the static map, exploiting the high resolution of Sentinel-1 data, followed the structure highlighted in the figure below. The processing chain for the historical map instead was based on a Random Forest (RF) classifier used to extract all the considered land cover classes.
Test results were obtained in three different areas, two in the Amazonian Forest and one in Siberia, and were validated with respect to ground truth points manually extracted by the project team.
The complete chain includes five steps:
1. SAR pre-processing, derived by the standard SNAP chain, to radiometrically and geometrically correct the SAR sequence, and to co-register the data sets that were not perfectly aligned.
2. Multitemporal despeckling, applied according to the approach described in [1], separately to four temporal segments of the yearly sequence used as input to the chain. The rationale for this choice is to reduce the overall volume of data and make the procedure less computationally complex, while retaining the possibility to exploit the temporal trajectory of land cover samples, which is particularly important for vegetation-related classes. As the output of this step, the original sequence was reduced to four super-images (temporal means), extracted as intermediate products of the despeckling method.
3. SAR feature extraction, which aims at adding spatial features to the already extracted temporal features. In this case, as mentioned in [2], simple statistical features corresponding to the neighborhood of each pixel have been considered.
4. Classification, which is in turn subdivided into three parts:
• an unsupervised water extraction routine as presented in [3], applying a K-means procedure to separate areas with low minimum and average backscatter intensities and high temporal variance along the year from other potential areas of interest for the water class (see the sketch after this list);
• an unsupervised urban extent extraction approach based on the extraction of a single super-image for the whole year to which the algorithm described in [4] is applied;
• a supervised classification implemented by means of an RF classifier, trained with samples manually extracted by the team on the basis of previously existing and coarser land cover maps.
5. A merging module, aimed at composing the final land cover map by spatially combining the three maps extracted in the previous step.
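For illustration, here is a minimal sketch of the unsupervised water extraction idea from step 4: K-means clustering on per-pixel temporal statistics of a Sentinel-1 backscatter stack, with the water cluster identified by its lower mean backscatter. The feature choices and two-cluster setup are illustrative assumptions, not the exact configuration of [3].

```python
import numpy as np
from sklearn.cluster import KMeans

def water_mask_from_stack(sar_db_stack: np.ndarray) -> np.ndarray:
    """Unsupervised water extraction from a (time, rows, cols) Sentinel-1
    backscatter stack in dB, via K-means on temporal statistics."""
    t, r, c = sar_db_stack.shape
    feats = np.stack([
        sar_db_stack.min(axis=0),   # water: low minimum backscatter
        sar_db_stack.mean(axis=0),  # water: low average backscatter
        sar_db_stack.var(axis=0),   # water: high temporal variance
    ], axis=-1).reshape(-1, 3)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    labels = labels.reshape(r, c)
    # The water cluster is taken as the one with the lower mean backscatter
    mean_map = sar_db_stack.mean(axis=0)
    cluster_means = [mean_map[labels == k].mean() for k in (0, 1)]
    return labels == int(np.argmin(cluster_means))
```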
The experimental tests were performed on three areas, each the size of a Sentinel-2 tile, in different regions of the Earth, in order to check the performance of the approach across very different land cover environments. The overall accuracy values obtained for the static maps are 62%, 68% and 54% for Siberia and the two Amazonian tiles, respectively.
With respect to the historical map, the classification step was performed with the RF classifier only; indeed, the unsupervised water extraction and urban extent techniques do not perform well when the number of images is low, which is increasingly the case moving backwards from 2015. For instance, for the Siberia test site the historical maps were computed using SAR data only for the following years: 2015, 2010, 2005 and 1995. Moreover, in some of these years the coverage of the tile was not complete.
In conclusion, the described processing chain shows consistent performance for land cover classification on a global scale, although the classification accuracies remain modest. Still, the unsupervised urban and water detectors are instrumental in achieving better classification performance, since outliers and misclassification errors in these classes are strongly reduced with respect to the supervised classification chain alone.
These numbers confirm that SAR time series may help classify specific classes that are not detected as easily using multispectral data. For most of the other classes, instead, fusion of SAR and multispectral data is the key to achieving acceptable classification results.
The pipeline described in this work was chosen for the classification step due to its significant generalization ability, ease of implementation and reduced need for training samples with respect to more accurate but more complex and computationally demanding deep learning-based classifiers [5].
References
[1] W. Zhao, C.-A. Deledalle, L. Denis, H. Maître, J.-M. Nicolas, and F. Tupin, “Ratio-based multitemporal SAR images denoising: RABASAR”, IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 6, pp. 3552–3565, Jun. 2019.
[2] A. Sorriso, D. Marzi, and P. Gamba, “A General Land Cover Classification Framework for Sentinel-1 SAR Data”, Proc. of the IEEE Forum on Research and Technologies for Society and Industry Innovation for a Smart World (IEEE RTSI 2021), Naples, September 6-9, 2021.
[3] D. Marzi and P. Gamba, "Inland Water Body Mapping Using Multi-temporal Sentinel-1 SAR Data," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, doi: 10.1109/JSTARS.2021.3127748.
[4] G. Lisini, A. Salentinig, P. Du, P. Gamba, “SAR-based urban extents extraction: from ENVISAT to Sentinel-1”, IEEE J. of Selected Topics in Applied Earth Observation and Remote Sensing, vol. 11, no. 8, pp. 2683-2691, Aug. 2018.
[5] N. Yokoya, et al., “Open data for global multimodal land use classification: Outcome of the 2017 IEEE GRSS data fusion contest”, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 11, pp. 1-15, 2018.
Savannas, characterized by the co-dominance of trees, shrubs, and grasses, cover approximately 20% of Earth's land surface. They are globally important ecosystems for biodiversity and the livelihoods of millions of people. The Greater Maasai Mara Ecosystem (GMME) in Kenya is an iconic savanna ecosystem of high importance as natural and cultural heritage, notably as it hosts the largest remaining seasonal migration of African ungulates and the semi-nomadic pastoralist Maasai culture. Comprehensive mapping of vegetation distribution and dynamics in GMME is important for understanding ecosystem changes across time and space, since recent reports suggest dramatic declines in wildlife populations alongside troubling reports of grassland conversion to cropland and habitat fragmentation due to increasing small-holder fencing. Here, we present the first comprehensive vegetation map of GMME at high (10-m) spatial resolution. The map consists of nine key vegetation cover types (VCTs), which were derived in a two-step process integrating data from high-resolution WorldView-3 images (1.2-m) and Sentinel-2 images (10-m) using a deep-learning workflow. We evaluate the role of anthropogenic, topographic, and climatic factors in affecting the fractional cover of the identified VCTs in 2017 and their MODIS-derived browning/greening rates in the preceding 17 years at 250-m resolution. Results show that most VCTs exhibited a preceding greening trend in the protected land. In contrast, the semi- and unprotected land showed a general preceding greening trend in the woody-dominated cover types, but browning trends in grass-dominated cover types. These results suggest that woody vegetation densification may be occurring across much of the GMME, alongside vegetation declines within the non-woody covers in the semi- and unprotected lands. Greening and potential woody densification in GMME are positively correlated with mean annual precipitation and negatively correlated with anthropogenic pressure. Increasing woody densification across the entire GMME in the future would replace high-quality grass cover and pose a risk to the maintenance of the region's rich savanna megafauna, thus pointing to a need for further investigation using alternative data sources. The increasing availability of high-resolution remote sensing and efficient approaches for vegetation mapping will play a crucial role in monitoring conservation effectiveness as well as ecosystem dynamics under pressures such as climate change.
Owing to continued interest in and need for land cover monitoring, global land cover (GLC) mapping efforts have seen accelerated progress over the last three decades, since the first satellite-based GLC map was produced in 1994. Recent advances in satellite data acquisition and processing capabilities have led to the release of GLC maps at a higher resolution (10m) based on Sentinel data. These include the FROM-GLC10 map for 2017 based on Sentinel-2 imagery by Tsinghua University in China (Gong et al. 2019), the ESRI 2020 Land Cover map based on Sentinel-2 imagery (ESRI 2021), and the ESA WorldCover 2020 map based on Sentinel-1 and -2 imagery produced by the European Space Agency (Zanaga et al. 2021). The Dynamic World product based on Sentinel-2 data is also expected to be released by Google in the upcoming months.
However, the co-existence of multiple maps may confuse map users when choosing a suitable GLC map for their application. A comparative analysis of contemporary 10m resolution GLC maps will therefore be useful to inform users about the differences between existing products and their strengths and weaknesses. Map validation at 10m resolution has its own challenges due to possible geolocation mismatch between the map product and the validation dataset. Validation datasets are often created by visual interpretation of very high-resolution imagery that is itself not free of geolocation error. Such errors can affect the accuracy assessment; geolocation errors should therefore be taken into consideration when validating GLC maps, particularly at high resolution.
This study presents comparative accuracy assessments of existing 10m resolution GLC maps. After addressing the differences in the land cover class descriptions between the products, the 10m resolution maps are assessed using the Copernicus Global Land Service - Land Cover (CGLS-LC) validation data (Tsendbazar et al. 2021). This is a multi-purpose dataset suitable for validating maps of 10-100m resolution. It consists of about 21,000 locations (primary sample units, PSUs), with each sample location containing 100 secondary sampling units (SSUs). The PSUs correspond to 100x100m areas, while the SSUs correspond to 10x10m areas and can be used to validate GLC maps at 10m resolution. To assess the potential effect of geolocation errors, the 10m SSUs are investigated together with their neighbouring SSUs in the validation data. Depending on the heterogeneity of the neighbouring SSUs in terms of land cover classes, different scenarios are used to calculate the accuracy of a 10m resolution GLC map. The same approach is used to validate each of the existing 10m resolution GLC maps to allow comparison. The overall and class accuracies are calculated at both global and continental levels. The validation methodology and the results obtained for the existing 10m resolution GLC maps are presented in this presentation.
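As an illustration of how a geolocation-tolerant scenario can be scored, the sketch below contrasts strict per-SSU accuracy with an accuracy that also accepts a match in any neighbouring SSU; this tolerance rule is a simplified assumption, not the exact set of scenarios used in the study.

```python
import numpy as np

def accuracy_with_neighbourhood(map_labels, ssu_labels, neighbour_labels):
    """Strict vs. geolocation-tolerant overall accuracy.
    map_labels: predicted class at each validated SSU (1-D array)
    ssu_labels: interpreted class of the SSU itself (1-D array)
    neighbour_labels: per-sample set of classes seen in neighbouring SSUs.
    A 'tolerant' match counts a prediction as correct if it equals the SSU
    label or any neighbouring SSU label."""
    strict = np.mean(map_labels == ssu_labels)
    tolerant = np.mean([
        m == s or m in nbrs
        for m, s, nbrs in zip(map_labels, ssu_labels, neighbour_labels)
    ])
    return strict, tolerant

# Example with three validated SSUs
m = np.array([1, 2, 3]); s = np.array([1, 3, 3])
nbrs = [{1, 2}, {2, 3}, {3}]
print(accuracy_with_neighbourhood(m, s, nbrs))  # strict 2/3, tolerant 3/3
```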
With the current developments in generating GLC maps at 10m resolution, understanding the differences and strengths of existing maps is important for both map users and producers. Furthermore, alongside the developments in map generation, the challenges in validating high-resolution maps should also be addressed to support transparent and internationally accepted map quality assessments.
Index Terms—global land cover maps, map comparison, validation, and 10m resolution
References:
ESRI (2021). Esri 10-Meter Land Cover. Retrieved July 5, 2021, from https://livingatlas.arcgis.com/landcover/
Gong, P., Liu, H., Zhang, M., Li, C., Wang, J., Huang, H., Clinton, N., Ji, L., Li, W., Bai, Y., Chen, B., Xu, B., Zhu, Z., Yuan, C., Ping Suen, H., Guo, J., Xu, N., Li, W., Zhao, Y., Yang, J., Yu, C., Wang, X., Fu, H., Yu, L., Dronova, I., Hui, F., Cheng, X., Shi, X., Xiao, F., Liu, Q., & Song, L. (2019). Stable classification with limited sample: transferring a 30-m resolution sample set collected in 2015 to mapping 10-m resolution global land cover in 2017. Science Bulletin, 64, 370-373
Tsendbazar, N., Herold, M., Li, L., Tarko, A., de Bruin, S., Masiliunas, D., Lesiv, M., Fritz, S., Buchhorn, M., Smets, B., Van De Kerchove, R., & Duerauer, M. (2021). Towards operational validation of annual global land cover maps. Remote Sensing of Environment, 266, 112686
Zanaga, D., Van De Kerchove, R., De Keersmaecker, W., Souverijns, N., Brockmann, C., Quast, R., Wevers, J., Grosu, A., Paccini, A., Vergnaud, S., Cartus, O., Santoro, M., Fritz, S., Georgieva, I., Lesiv, M., Carter, S., Herold, M., Li, L., Tsendbazar, N.-E., & Arino, O. (2021). ESA WorldCover 10 m 2020 v100. Zenodo
Monitoring changes in Earth’s surface is important for understanding various processes in the Earth’s ecosystems and for implementing appropriate measures targeting challenges such as climate change and sustainable development. Accordingly, with the advancements in satellite-based land monitoring, much research has been done on land change monitoring from local to global scales, often targeting particular land cover types, e.g., forest and water (Hansen et al. 2013; Pekel et al. 2016). Operational land cover monitoring efforts have also produced global land cover maps with regular updates, allowing changes in land cover to be monitored at a generic level. For example, the Copernicus Global Land Service (CGLS) Dynamic Land Cover project produced yearly global land cover maps from 2015 to 2019 at 100m resolution. However, since map uncertainty-related inconsistencies may be mistaken for change when comparing multitemporal land cover maps, monitoring land cover change at a generic level can be challenging due to spurious changes, as well as the variation in land cover transitions throughout the world.
This study aims to improve land cover change monitoring at a global scale in recent years. To do so, we targeted the following: (I) developing an advanced time-series-based algorithm (BFAST-Lite) suitable for large-scale change monitoring, (II) improving land cover change detection by combining BFAST-Lite and machine learning algorithms, and (III) estimating the area of changes in global land cover in recent years.
We developed a new unsupervised time series change detection algorithm that is derived from the original BFAST (Breaks for Additive Season and Trend) algorithm (Masiliūnas et al. 2021). The focus of this new algorithm was on speed and flexibility to make it suitable for upscaling for global land cover change detection. The algorithm was tested on an eleven-year-long time series of MODIS imagery, using a global reference dataset with over 30,000 point locations of land cover change to validate the results. The global reference dataset was collected as part of the CGLS Dynamic Land Cover project.
Next, we combined BFAST-Lite with the random forest algorithm to improve land cover change detection at a global scale by combining unsupervised and supervised approaches for change detection (Xu et al. 2021). We further compared the performance of three satellite sensors, PROBA-V, Landsat 8 OLI, and Sentinel-2 MSI, for global-scale change monitoring using the global reference dataset for land cover change.
In addition, we aimed to statistically estimate the area of land cover change in recent years at a global scale. To do so, we used the CGLS-LC100 yearly maps 2015-2019 and the CGLS global validation dataset (Figure 1) (Tsendbazar et al. 2021) to estimate the area of land cover change at a global scale, accounting for the bias of the mapped products with the help of reference datasets.
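For this area estimation step, a commonly used way to account for map bias with a reference sample is the stratified estimator; the sketch below is a minimal illustration of that idea under assumed inputs, not the exact procedure of this study.

```python
def stratified_area_estimate(map_areas, ref_counts):
    """Bias-adjusted area of a target class (e.g. 'change') from a stratified
    reference sample (sketch of the standard stratified estimator).
    map_areas[i]  : mapped area of stratum i (e.g. km2 of map class i)
    ref_counts[i] : (n_target, n_total) reference points of stratum i that
                    are / are not the target class."""
    total = sum(map_areas)
    proportion = 0.0
    for A_i, (n_target, n_total) in zip(map_areas, ref_counts):
        W_i = A_i / total          # stratum weight
        p_i = n_target / n_total   # sample proportion of the target class
        proportion += W_i * p_i    # p_hat = sum_i W_i * n_ik / n_i
    return proportion * total      # back to area units

# Example: 'change' stratum of 100 km2 and 'no change' stratum of 900 km2
print(stratified_area_estimate([100, 900], [(80, 100), (30, 300)]))
# 0.1*0.8 + 0.9*0.1 = 0.17 -> 170 km2 of estimated change
```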
The approaches and results of these studies are presented to highlight the progress of global land cover change monitoring, as well as the challenges that need further attention to achieve accurate monitoring of global land cover change.
Index Terms— land cover change, change monitoring, generic land cover, change area estimation
Hansen, M.C., Potapov, P.V., Moore, R., Hancher, M., Turubanova, S.A., Tyukavina, A., Thau, D., Stehman, S.V., Goetz, S.J., Loveland, T.R., Kommareddy, A., Egorov, A., Chini, L., Justice, C.O., & Townshend, J.R.G. (2013). High-Resolution Global Maps of 21st-Century Forest Cover Change. Science, 342, 850-853
Masiliūnas, D., Tsendbazar, N.-E., Herold, M., & Verbesselt, J. (2021). BFAST Lite: A Lightweight Break Detection Method for Time Series Analysis. Remote Sensing, 13(16), 3308
Pekel, J.-F., Cottam, A., Gorelick, N., & Belward, A.S. (2016). High-resolution mapping of global surface water and its long-term changes. Nature, 540, 418-422
Tsendbazar, N., Herold, M., Li, L., Tarko, A., de Bruin, S., Masiliunas, D., Lesiv, M., Fritz, S., Buchhorn, M., Smets, B., Van De Kerchove, R., & Duerauer, M. (2021). Towards operational validation of annual global land cover maps. Remote Sensing of Environment, 266, 112686
Xu, L., Herold, M., Tsendbazar, N.-E., Masiliūnas, D., Li, L., Lesiv, M., Fritz, S., & Verbesselt, J. (2021). Time series analysis for global land cover change monitoring: a comparison across sensors. Remote Sensing of Environment, (under review)
The ESA-CCI High Resolution (HR) Land Cover (LC) project developed LC maps at HR (10 m) every five years between 1990 and 2019, together with yearly LC change (LCC), over three regions for which climate-LC interactions are known to be significant: Amazonia, Siberia and the Sahel. These maps have been used in the ORCHIDEE land surface model to map the fifteen Plant Functional Types (PFTs) that describe the land cover variability within a model grid cell. For that purpose, the fifteen HRLC classes have been interpreted in terms of the ORCHIDEE PFTs, using auxiliary information such as the climate ecozones provided by the Köppen-Geiger classification (Kottek et al., 2006), C3/C4 grass and crop partitioning following Still et al., 2014 for grasslands, and the Land Use Harmonization database (LUH2v2h, Hurtt et al., 2020) for crops. Yearly PFT maps were then generated over the studied regions and compared to our previous PFT maps based on the Medium Resolution LC product (ESA Land Cover CCI Product User Guide Version 2, Tech. Rep., 2017, and Lamarche et al., 2017), provided at 300 m resolution and on a yearly basis between 1992 and 2020. The results show significant differences related to the partition of evergreen/deciduous tree and shrub species and to the fractions of grasses, crops and bare soil.
Besides, the HR information allowed us to revise the albedo parameterization in ORCHIDEE. Indeed, some deficiencies were identified related to the soil background values, which were not correctly optimized for pixels densely vegetated all year long. In such cases, the satellite observations used for the calibration are not influenced by the underlying soil and the optimization of the soil albedo fails. Moreover, the model was not able to reproduce the albedo changes linked to land cover changes such as deforestation events or grass/crop transitions. Therefore, a new calibration methodology has been developed to improve the calibration of the albedo parameters, better constrain the parameter space and allow the regionalization of the parameters (Bastrikov et al., in preparation). The improvements brought by these changes will be presented.
Thanks to this new parameterization, land cover changes and their impacts on the climate, as well as climate change impacts on the vegetation, have been studied through a set of forced and coupled model simulations. Using the zooming and wind-nudging capabilities of the IPSL atmospheric model (LMDZ), high resolution global simulations over our three studied regions have been performed. Three configurations of the LMDZ model were developed, one per region: a zoom factor of 5 was chosen to increase the model grid resolution by a factor of 5 in the centre of each studied region, and wind fields from the ERA5 atmospheric reanalysis were used to nudge the atmospheric dynamics towards the observed one (Cheruy et al., 2013). In this configuration, the model grid spacing inside the zoom is reduced to a few tens of kilometres, and short-term simulations are sufficient to study the surface-atmosphere feedback.
Various simulations were performed over each region to study the impacts of LCC on the atmosphere over the period 1990-2015. Different scenarios were studied: static LC maps for the years 1990 and 2019, and yearly updated ones. Different configurations of ORCHIDEE, coupled with LMDZ and in standalone mode (forced by atmospheric reanalysis), were also run to assess the atmospheric feedback. Their comparison highlighted the impacts of LCC on atmospheric temperatures and precipitation, and the role of the atmosphere in the magnitude of these impacts. For example, preliminary results show that land cover changes may have different impacts in coupled compared to forced mode. An albedo decrease linked to afforestation, for example, will result in larger sensible and latent heat fluxes and lower soil moisture in forced mode, whereas in coupled mode the increased latent heat fluxes may translate into more precipitation, larger soil moisture and LAI, leading to even lower albedo values and larger surface and air temperature changes compared to the forced simulations. Other interesting features are under analysis and will be presented at the symposium. The benefits and drawbacks of the HRLC product compared to the medium resolution one will finally be discussed.
References:
Bastrikov, V., San Martin, R., C. Ottlé and P. Peylin, Calibration of albedo parametrisations in ORCHIDEE based on various satellite products, in preparation for Geosci. Model Dev.
Cheruy, F., Dupont, J. C., Campoy, A., Ducharne, A., Hourdin, F., Haeffelin, M., & Chiriaco, M. (2013). Combined influence of atmospheric physics and soil hydrology on the realism of the LMDz model compared to SIRTA measurements. Clim. Dynam, 40, 2251-2269.
Hurtt, G. C., Chini, L., Sahajpal, R., Frolking, S., Bodirsky, B. L., Calvin, K., ... & Zhang, X. (2020). Harmonization of global land use change and management for the period 850–2100 (LUH2) for CMIP6. Geoscientific Model Development, 13(11), 5425-5464.
Kottek, M., Grieser, J., Beck, C., Rudolf, B., & Rubel, F. (2006). World map of the Köppen-Geiger climate classification updated. Meteorologische Zeitschrift, 15(3), 259-263.
ESA (2017). Land Cover CCI Product User Guide Version 2. Technical Report (Medium Resolution Land Cover product).
Lamarche, C., Santoro, M., Bontemps, S., d’Andrimont, R., Radoux, J., Giustarini, L., Brockmann, C., Wevers, J., Defourny, P. and Arino, O., 2017. Compilation and validation of SAR and optical data products for a complete and global map of inland/ocean water tailored to the climate modeling community. Remote Sensing, 9(1), p.36.
Still, C. J., Pau, S., & Edwards, E. J. (2014). Land surface skin temperature captures thermal environments of C3 and C4 grasses. Global ecology and biogeography, 23(3), 286-296.
List of the HRLC working group members: L. Bruzzone (UniTN), F. Bovolo (FBK), M. Zanetti (FBK), C. Domingo (CREAF), L. Pesquer (CREAF), K. Meshkini (FBK), C. Lamarche (UCLouvain), P. Defourny (UCLouvain), P. Gamba (UniPV), L. Agrimano (Planetek), A. Amodio (Planetek), M. A. Brovelli (PoliMI), G. Bratic (PoliMI), M. Corsi (eGeos), G. Moser (UniGE), C. Ottlé (LSCE), P. Peylin (LSCE), R. San Martin (LSCE), V. Bastrikov (LSCE), P. Pistillo (eGeos), I. Podsiadlo (UniTN), G. Perantoni (UniTN), M. Riffler (GeoVille), F. Ronci (eGeos), D. Kolitzus (GeoVille), Th. Castin (UCLouvain), L. Maggiolo (UniGE), D. Solarna (UniGE).
Namibia is a semi-arid country with highly variable and unpredictable rainfall. Extreme weather patterns such as floods or extensive droughts have increased in the past years, with a strong impact on surface and ground water availability, rangeland and agricultural productivity, food security, and further land degradation such as bush encroachment or soil erosion. These conditions especially impact livelihoods in the northern communal areas, as most people live in a subsistence economy which is closely connected to hydrological conditions. The past 10 years were characterized by a multi-year drought lasting from 2013 to 2016 and an extreme drought event during the rainy season of 2018/2019, which was the driest in 90 years. In contrast, January 2021 saw rainfall totals double to triple the norm. The poster presents a comparative analysis of five selected agricultural drought indices (VCI - Vegetation Condition Index; VHI - Vegetation Health Index; TCI - Temperature Condition Index; TVDI - Temperature Vegetation Dryness Index; and DSI - Drought Severity Index) to identify, visualize, monitor and better understand the nature, characteristics and spatio-temporal patterns of drought in northern Namibia. The indices are based on freely available MODIS (Moderate Resolution Imaging Spectroradiometer) satellite imagery and value-added data products, allowing calculation, time series analysis and cross-comparison based on their sensitivity towards vegetation greenness, land surface temperature and evapotranspiration. The indices are complemented by and compared to climate reanalysis data (Copernicus ERA-5) for visualisation and analysis of rainfall patterns. The presented time series analysis covers a span of 20 years (2001 to 2021), visualizing drought indices for the past 10 years following a seasonal approach. The study provides a better understanding of spatial drought patterns through the identification of drought-prone areas in northern and central Namibia and bordering countries. Results show that droughts happen every year, with vegetation-based indices showing similar spatial patterns but different levels of classified drought intensity. A longitudinal increase of index values from East to West and a latitudinal increase from North to South, following the rainfall gradient, can be observed; these correlate strongly with precipitation reanalysis data. Combined indices based on evapotranspiration and land surface temperature show higher temporal and spatial fluctuations of drought intensity. It is concluded that a comparative analysis of multiple indices provides a better interpretation of drought than systems focusing on single parameters, and that combined drought indices are probably more suitable for arid and semi-arid areas than indices purely based on vegetation health. Future research should additionally incorporate biophysical properties such as soil characteristics, soil moisture and hydrology, flanked by socio-economic investigations, to establish an integrated drought index for northern Namibia.
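For reference, the standard formulations of three of the compared indices (VCI, TCI, and their VHI combination, after Kogan) reduce to a few lines; the sketch below assumes NDVI and LST arrays with their multi-year minima and maxima already computed.

```python
import numpy as np

def vci(ndvi, ndvi_min, ndvi_max):
    """Vegetation Condition Index (%): NDVI scaled by its multi-year range."""
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

def tci(lst, lst_min, lst_max):
    """Temperature Condition Index (%): inverted, as high LST indicates stress."""
    return 100.0 * (lst_max - lst) / (lst_max - lst_min)

def vhi(vci_arr, tci_arr, alpha=0.5):
    """Vegetation Health Index: the usual weighted VCI/TCI combination."""
    return alpha * vci_arr + (1.0 - alpha) * tci_arr

# Example on synthetic arrays (stand-ins for MODIS-derived rasters)
ndvi = np.array([0.3, 0.5]); lst = np.array([310.0, 300.0])
print(vhi(vci(ndvi, 0.2, 0.8), tci(lst, 290.0, 320.0)))
```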
Keywords: Remote Sensing, MODIS, Drought Indices, Time Series Analysis, Climate Reanalysis, Namibia
Machine Learning (ML) and Deep Learning (DL) are nowadays widely used in Earth Observation, especially for land cover classification. Different studies have obtained slightly higher overall accuracies with DL than with ML. However, the full potential of machine learning, including the variety of algorithms and their calibration parameters, has not been fully explored. Therefore, this research presents a comprehensive, in-depth comparison of the accuracies of ML and DL algorithms for land cover classification. For this, radar data from Sentinel-1 images with HH, HV, VV and VH polarizations were used as input variables. The ML algorithms competed with each other through Monte Carlo Cross-Validation (MCCV) calibration, so that the best algorithm found in the calibration (i.e., the one with the highest overall accuracy) was put in competition with the DL algorithm. The discussion of this learning competition focuses on the overall accuracies found, as well as the execution times obtained for both areas of artificial intelligence over the extent of the study area. The study area is located in northern Catalonia, Spain, and classes such as crops, wetlands, urban areas, dense forest and scrubland were mapped in order to achieve spectral and class variety. In addition, ground truth data from COPERNICUS and high-resolution images were used for validation of the obtained maps. This article comes with a Python package, not yet publicly available, that was developed to implement several tools such as machine learning algorithm calibration through MCCV, Leave-One-Out Cross-Validation, Cross-Validation, etc., DL classification, time series change detection, atmospheric correction, deforestation detection, and land degradation mapping, among other algorithms embedded in the package. Finally, some remarks are given about the pros and cons of using ML and DL in Earth Observation for land cover classification, as well as the benefits of using radar imagery instead of optical imagery to map land cover.
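As a minimal sketch of the MCCV calibration idea, the snippet below scores a few candidate ML classifiers over repeated random train/test splits (scikit-learn's ShuffleSplit) and picks the winner; the candidate set, split counts and synthetic data are placeholders, not the study's actual configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import ShuffleSplit, cross_val_score

# X: per-pixel polarimetric features (e.g. HH, HV, VV, VH backscatter),
# y: land cover labels; random data here as a stand-in.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = rng.integers(0, 5, size=500)

# Monte Carlo Cross-Validation = repeated random train/test splits
mccv = ShuffleSplit(n_splits=50, test_size=0.3, random_state=0)
candidates = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "kNN": KNeighborsClassifier(),
    "SVM": SVC(),
}
scores = {name: cross_val_score(clf, X, y, cv=mccv).mean()
          for name, clf in candidates.items()}
best = max(scores, key=scores.get)  # the winner is then compared to the DL model
```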
Operational yield forecasting services are often based on regression models between official yields and agro-environmental variables computed at the time of the forecast (Fritz et al., 2019). The relationship usually relies on historical series of statistical yields and one or more regressors, selected among meteorological data, crop simulation model outputs or satellite-derived indicators.
The fit between estimators and crop yields is highly variable across agronomic seasons, and the reliability with which these variables inform on yields depends, among many other factors, on their aggregation in space and time, for example on the quality and representativeness of the agricultural land cover masks used.
In particular, the contribution of satellite-based indicators stems from their sensitivity to the combined agro-climatic, genetic, environmental and management effects on crop biomass activity. Nevertheless, remote sensing indicators show limits in their application due to limited land cover map availability (pixel selection - Liu et al., 2019) or to the bias introduced when mixed pixels are considered (low-resolution bias - Boschetti et al., 2004).
A recent study (Weissteiner et al., 2019) proposed a semi-automatic approach to identify crop group-specific pure pixels (i.e., winter and spring crops, and summer crops) at the European scale, based on the implementation of a regional Gaussian Mixture Model (GMM) on MODIS-NDVI time series. Such input could improve the predictability of crop monitoring and yield forecasting applications, as it introduces a new and more detailed information layer on agricultural land cover.
This work focuses on the contribution of MODIS time series to regional yield estimation in Europe. We compared the relationship between yield and remote sensing indicators when either generic arable land masks or crop group-specific information is applied to aggregate satellite data at the regional level. We regressed regional crop yields against smoothed daily NDVI temporal profiles, with the aim of addressing the following research questions:
(1) Is there any added value in the exploitation of yearly crop group-specific land cover for the estimation of crop yields?
(2) Is the benefit observed all over Europe?
(3) Is the benefit equal for both the identified crop categories?
Our study area includes all the European Union (EU) member states except Finland and Malta (due to their low share of arable land). For each EU country we selected the five most representative regions (Nomenclature des Unités territoriales statistiques - NUTS) in terms of arable land area according to the Corine Land Cover (CLC) agricultural classes, leading to a total of 97 NUTS-2 regions representing 72% of the EU arable land. For each considered region, crop yield statistics at NUTS-2 level were collected from official databases for the 2003-2019 period and used as reference data for computing the regressions. Yield time series refer only to the prevalent crops inside each region: the main agricultural crops were first divided into two crop groups, namely Winter and Spring Crops (WSpCs) and Summer Crops (SCs); then the most representative crop of each group was chosen according to the average extent of the cultivated area in each selected region. Yield statistics were checked for the presence of trends by means of a Mann-Kendall test (Mann, 1945; Kendall, 1975).
A collection of representative NDVI temporal profiles was derived for every selected region and year using the MODIS MOD09GQ.006 daily product at 250-m spatial resolution. Single-pixel NDVI time series were modelled by interpolating cloud-free observations with a 5th-degree polynomial fit, while regional reference profiles were retrieved by averaging single-pixel time series according to the information of five different land cover masks (Weissteiner et al., 2019; see the sketch after the mask list below):
i. ArLand: generic arable land mask, derived from CLC. It provides static and generic land cover information, without accounting for annual variations or crop groups.
ii. Hist_SC: historical crop group-specific mask for the SC group, indicating a high historical probability of SC cultivation for a given pixel. It provides static and crop group-specific land cover information.
iii. Hist_WSpC: the equivalent of the Hist_SC mask, but for the WSpC group. It provides static and crop group-specific land cover information.
iv. SC_year: yearly crop group-specific mask for the SC group, representing the pure pixels of the SC group detected in a specific year. It provides dynamic and crop group-specific land cover information.
v. WSpC_year: the equivalent of the SC_year mask, but for the WSpC group. It provides dynamic and crop group-specific land cover information.
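To make the profile-building step concrete, here is a minimal sketch, assuming per-pixel day-of-year/NDVI arrays and a boolean mask for one of the five layers above; the operational fitting and aggregation may differ in detail.

```python
import numpy as np

def smooth_pixel_ndvi(doys, ndvi, out_doys, degree=5):
    """Fit a 5th-degree polynomial to the cloud-free NDVI observations of one
    pixel and evaluate it on a daily grid (requires more observations than
    the polynomial degree)."""
    coeffs = np.polyfit(doys, ndvi, deg=degree)
    return np.polyval(coeffs, out_doys)

def regional_profile(pixel_profiles, mask):
    """Average smoothed per-pixel profiles over one land cover mask.
    pixel_profiles: (n_pixels, n_days) array of smoothed NDVI profiles
    mask: boolean (n_pixels,) selector for one of the five masks
          (ArLand, Hist_SC, Hist_WSpC, SC_year, WSpC_year)."""
    return pixel_profiles[mask].mean(axis=0)

# Example: one pixel's observations smoothed onto DOY 60-270
doys = np.array([60, 90, 120, 150, 180, 210, 240, 270])
ndvi = np.array([0.25, 0.35, 0.55, 0.70, 0.65, 0.50, 0.35, 0.30])
daily = smooth_pixel_ndvi(doys, ndvi, np.arange(60, 271))
```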
A comparative correlation analysis between yield data and the NDVI regional temporal profiles extracted with the different land cover masks was performed. A linear regression model was assumed. The reference time step for the regressions was 10 days, from Day Of the Year (DOY) 60 to DOY 270. The R2 coefficient of determination was calculated to assess the strength of each relationship, together with the respective p-value to estimate its significance. The Root Mean Squared Error (RMSE) and Mean Absolute Scaled Error (MASE) were computed to assess the model prediction errors.
Results were discussed in view of their applicability to regional-scale monitoring systems, with particular attention to the effects of crop group-specific land cover on the accuracy of yield estimation. A general improvement in correlation was found when using yearly crop group-specific indicators with respect to generic and static ones. Improvements were found in both the accuracy and timeliness of predictions and were more evident at the regional than at the EU scale. The added value of crop group-specific land cover layers covered the two most cultivated European crops (i.e., grain maize and soft wheat). In particular, the SC group showed better performance in terms of prediction accuracy (higher R2 values), while for the WSpC group the advantages were more pronounced in terms of prediction timeliness (high R2 values earlier in time).
Bibliography:
Fritz, S., See, L., Bayas, J. C. L., Waldner, F., Jacques, D., Becker-Reshef, I., ... & Rembold, F. (2019). A comparison of global agricultural monitoring systems and current gaps. Agricultural systems, 168, 258-272.
Boschetti, L., Flasse, S. P., & Brivio, P. A. (2004). Analysis of the conflict between omission and commission in low spatial resolution dichotomic thematic products: The Pareto Boundary. Remote sensing of environment, 91(3-4), 280-292.
Kendall, M. G. (1975). Rank Correlation Methods, 4th ed. Charles Griffin, London.
Liu, J., Shang, J., Qian, B., Huffman, T., Zhang, Y., Dong, T., ... & Martin, T. (2019). Crop Yield Estimation Using Time-Series MODIS Data and the Effects of Cropland Masks in Ontario, Canada. Remote Sensing, 11(20), 2419.
Mann, H. B. (1945). Nonparametric tests against trend. Econometrica: Journal of the Econometric Society, 245-259.
Weissteiner, C. J., López-Lozano, R., Manfron, G., Duveiller, G., Hooker, J., van der Velde, M., & Baruth, B. (2019). A Crop Group-Specific Pure Pixel Time Series for Europe. Remote Sensing, 11(22), 2668.
Human-induced land degradation has become a chief driver of poor ecological functioning and reduced productivity. The process of land degradation needs to be understood at various scales in order to protect ecosystem services and the communities directly dependent on them. This is especially true for sub-Saharan Africa, where socio-economic and political factors exacerbate ecological degradation. This study aims to identify land change dynamics in the Copperbelt province of Zambia and unveil proximate causes and underlying drivers. Copperbelt is a densely forested province (falling in the central miombo ecoregion) with diverse ongoing and imminent land change processes such as shifting cultivation, charcoal production, logging, industrialization, mining, and the extension of both urban and rural settlement. As is typical of sub-Saharan Africa, many of these processes are superimposed by human-driven fire dynamics such as end-of-dry-season fires. In our study, monthly time series of MODIS (MODerate resolution Imaging Spectroradiometer) derived enhanced vegetation index (EVI) values were used to derive three relevant parameters: the harmonic series, annual peaking magnitude and annual mean growing season. We used a semi-automated scheme to map land changes, where trend calculation was a statistical output carried out with the help of additive decomposition and linear regression. A spatial filter was used to select only those pixels with at least two significant trend patterns, to enhance the robustness of the approach and only consider regions with tangible change dynamics for further analysis. Trend maps were further integrated in a knowledge-driven approach where additional data sources (socio-economic, land cover, tree cover, bi-temporal Landsat vegetation indices, high-resolution Bing and Google imagery) were incorporated to provide spatial context and map the prevalent land change dynamics. Our observations were as follows: (a) trends of annually aggregated series were statistically more significant than those of monthly series; (b) weak trends were more dominant than strong trends, with weakly positive being the most prominent; (c) there was a clear spatial differentiation: 15% of the study area, dominant in the East, showed positive trends, and 3%, dominant in the West, showed negative trends; (d) natural regeneration in mosaic landscapes was chiefly responsible for positive trends; (e) restorative plantations contributed to the recovery of degraded cultivated areas; (f) mixed trends over forest reserves reflected timber and fuelwood harvest; and (g) degradation over intact woodland and cultivation areas contributed to negative trends. In addition, lowered productivity within semi-permanent agriculture and a shift of new encroachment into woodlands from the East to the West of Copperbelt was observed. Although prominent in isolated spots, pivot agriculture was not a main driver of land changes at large. Concluding, greening trends prevailed over the study site; however, the risk of intact woodlands being affected by various disturbances remains high.
Land cover maps are being produced at an increasing rate. In particular, high-resolution land cover products are taking over from medium- and low-resolution products. The increase in production is driven on one side by the need for land cover information, and on the other by the favorable state of associated technologies (i.e., multiple high-resolution satellite missions, short revisit times, increased computational capabilities, etc.). Even though land cover production has increased, there are still open issues that need to be addressed in order to facilitate land cover production. One of them is the collection of reference data for training and validation, which is especially challenging in the case of global high-resolution land cover. In most situations, such data are collected by photo-interpretation of Very High-Resolution (VHR) satellite imagery; rarely is the source of the reference data an in-situ collection.
The objective of our work is to demonstrate how information from existing land cover datasets can be used as training data to produce new land cover datasets, and how accurate the outcomes are. The idea behind the work is that every land cover map aims to represent the material on the Earth's surface as accurately as possible. When existing datasets are compared among themselves, the area in which they all show consistent information is the area with the highest probability of being correct. Correctly classified pixels have a high probability of appearing in the same location in different datasets, given that correct classification is a target of the classification procedure and rarely the result of a random guess. Conversely, the errors in a land cover map may be a function of imagery type, preprocessing, training data, classification algorithm, etc. Since different land cover maps are produced with different procedures and input data, it is reasonable to assume that errors in the different datasets are random, i.e., not correlated. Therefore, if we intersect multiple land cover maps, the areas in which they share information can be used to extract training samples for deriving a new land cover map.
Our workflow comprises data preparation, dataset intersection, stratified random extraction of training samples, classification, and validation (Figure 1 in the Illustrations file). The area of interest covers 38,292 km2, distributed among 50 squares of about 766 km2 each, located in Central and Eastern Africa. It was selected based on the availability of the validation samples needed for the final phase - the accuracy assessment.
In this area, we used the following existing high-resolution land cover (HRLC) datasets:
• S2 prototype land cover 20m map of Africa 2016 (CCI Africa Prototype), at 20m resolution for the year 2016, with classes: Tree cover areas, Shrubs cover areas, Grassland, Cropland, Vegetation aquatic or regularly flooded, Lichens Mosses / Sparse vegetation, Bare areas, Built-up areas, Snow and/or Ice, and Open Water
• Forest / Non-Forest (FNF), at 30m resolution for the year 2017, with classes: Forest, Water, and Non-forest
• Finer Resolution Observation and Monitoring of Global Land Cover (FROM-GLC), at 10m resolution for the year 2017, with classes: Cropland, Forest, Grassland, Shrubland, Wetland, Water, Tundra, Built-up, Bareland, and Permanent ice and snow
• Global Human Settlement Built-Up Grid – Sentinel-1 (GHS BU S1NODSM), at 20m resolution for the year 2016, with classes: Built-up, and Non-built-up
• GlobeLand30 (GL30), at 30m resolution for the year 2017, with classes: Cropland, Forest, Grassland, Shrubland, Wetland, Water, Tundra, Built-up, Bareland, and Permanent ice and snow
• Global Surface Water (GSW), at 30m resolution for the year 2019, with classes: Permanent water, Seasonal water, and Not-water
All the datasets have a high resolution (30m or better), similar to the resolution of the satellite imagery used in the classification. The datasets are produced by different producers, refer to different years, and have different resolutions, classes, etc. To extract information that is coherent across all maps, it was necessary to harmonize them in terms of coordinate reference system, legend, and resolution. The selected coordinate reference system was WGS84 (EPSG:4326), and all datasets were resampled to 10m resolution. The legends (pixel values and labels) of the existing classes were adapted to correspond to: 5 - Shrubland, 7 - Grassland, 8 - Cropland, 9 - Wetland, 12 - Bareland, 13 - Built-up, 15 - Water, 17 - Permanent ice and snow, 20 - Forest. Then, all data were intersected and only areas in which they show coherent information were extracted. We named the map obtained in this way the map of agreement; it accounts for 20% of the area of interest. Most of the classes present in the region were also available in the map of agreement, i.e., Forest, Grassland, Cropland, Water, Built-up, and Shrubland. However, only a few samples of the Bareland and Wetland classes were available (22 and 6 pixels, respectively), and samples of Permanent ice and snow were not present at all, since the only dataset containing this class in the area of interest is GlobeLand30.
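A minimal sketch of the intersection and stratified extraction steps, assuming the maps have already been harmonized to a common grid and legend (the array names and nodata convention are placeholders):

```python
import numpy as np

def map_of_agreement(harmonised_maps, nodata=0):
    """Intersect N harmonised land cover maps (same grid, same legend) and
    keep a class only where all maps agree; disagreeing pixels get `nodata`.
    harmonised_maps: list of 2-D integer arrays using the common legend
    (5 Shrubland, 7 Grassland, 8 Cropland, ..., 20 Forest)."""
    stack = np.stack(harmonised_maps)
    agree = np.all(stack == stack[0], axis=0)
    return np.where(agree, stack[0], nodata)

def sample_training_points(agreement, samples_per_class=8000, rng=None):
    """Stratified random extraction of training pixels from the agreement map;
    rare classes simply contribute all their available pixels."""
    rng = rng if rng is not None else np.random.default_rng(0)
    out = {}
    for cls in np.unique(agreement):
        if cls == 0:  # skip nodata / disagreement
            continue
        rows, cols = np.nonzero(agreement == cls)
        n = min(samples_per_class, rows.size)
        idx = rng.choice(rows.size, size=n, replace=False)
        out[int(cls)] = list(zip(rows[idx], cols[idx]))
    return out
```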
The training set was created by extracting around 8000 samples per class from the map of agreement, except for classes Bareland and Wetland for which the number of pixels available was lower than this threshold, so for these classes all the available samples were taken into account.
Data preparation and extraction of the training samples were done using GRASS GIS and Python on CINECA High-Performance Computing (HPC).
The extracted samples were used for the classification of Sentinel-2 and Planet's NICFI (Norway's International Climate and Forest Initiative) Basemap imagery for 2017 with a Random Forest classification algorithm. For this purpose, Google Earth Engine (GEE) was used, because both image collections are already available in the Earth Engine Data Catalog; to access Planet's NICFI data it was necessary to sign up and accept the terms of use. In the case of Sentinel-2, two tests were made - one using only the Red, Green, Blue and NIR bands (hereafter called the Sentinel-2 4B test), and one using all bands at 10 m and 20 m resolution (hereafter called the Sentinel-2 allB test). In the case of Planet's NICFI data, only the Red, Green, Blue and NIR bands were available and used (hereafter called the Planet 4B test).
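For illustration, a GEE Python sketch of the Random Forest classification step for the Sentinel-2 4B test; the composite recipe, the asset path of the training samples and the 'landcover' property name are assumptions, not the exact setup used here.

```python
import ee
ee.Initialize()

# Hypothetical inputs: a 2017 Sentinel-2 composite restricted to the
# 4B bands, and a FeatureCollection of agreement-map training samples.
bands = ['B2', 'B3', 'B4', 'B8']  # Blue, Green, Red, NIR (the "4B" test)
image = (ee.ImageCollection('COPERNICUS/S2')
           .filterDate('2017-01-01', '2017-12-31')
           .median()
           .select(bands))
training = ee.FeatureCollection('users/someone/agreement_samples')  # placeholder asset

# Sample the composite at the training points and train the classifier
samples = image.sampleRegions(collection=training,
                              properties=['landcover'],
                              scale=10)
classifier = (ee.Classifier.smileRandomForest(100)
                .train(features=samples,
                       classProperty='landcover',
                       inputProperties=bands))
classified = image.classify(classifier)
```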
Finally, the validation dataset was created by photo-interpreting VHR imagery at 2,400 sample locations. The photo-interpretation was done using the Open Foris Collect Earth and Google Earth tools. 1,300 samples were extracted in the area where the map of agreement has valid values, and 1,100 in other areas within the area of interest. For some of the points, the photo-interpreter was not completely confident about the assigned label, and such samples were discarded during validation. The validation was thus carried out on 1,683 high-confidence samples, distributed as 1,050 in the map of agreement and 633 in other areas within the area of interest.
The three classification tests yielded an Overall Accuracy (OA) of 70% for the Planet 4B test, 67% for the Sentinel-2 4B test, and 74% for the Sentinel-2 allB test. The error matrix and the associated accuracy indexes - User's Accuracy (UA) and Producer's Accuracy (PA) - are included in the Illustrations file as Table 1. The classes with the highest accuracy are Water, Built-up, and Forest, while for Grassland, Shrubland, and Cropland the accuracy is moderate. For the Bareland and Permanent ice and snow classes the accuracy is 0, but it is based on a very small number of samples and is therefore not sufficiently representative.
The use case presented here demonstrates how existing data can be reused to obtain new land cover data. The accuracy of 74% is satisfactory given that the time invested in training data extraction is significantly reduced with respect to typical procedures (i.e., photo-interpretation) for the same number of samples. Furthermore, global land cover changes amount to only a few percent per year, so it is safe to use existing land cover several years older than the baseline year of the land cover to be produced. One limitation of the approach is that some classes that are effectively present on the ground are absent from the map of agreement, and therefore also from the training dataset. This is typical for small classes such as Permanent ice and snow, or others depending on the area. However, even if not all classes can be taken into account by this approach, it significantly reduces the effort needed for collecting training data.
In our next steps, we are going to investigate different sources of information for the classes that are currently missing in the training dataset of the area of interest (i.e., Bareland, Wetland, and Permanent ice and snow).
Accessibility to raw materials, cheap labour and lenient labour laws make rural areas attractive to many industries in West Africa. The set-up of small-scale solid mineral industries is popular in rural West Africa. These industries are labour intensive and require small to large areas of land. This is just one example of the industrialization taking place in rural areas. Nigeria is well known for its vast oil reserves, which in turn create many employment opportunities, especially for low-skilled workers, since many of the reserves are in rural areas. Ghana's south-western region has a wealth of gold, which has caused small-scale industries to spring up and led to an influx of people from more rural areas. In combination with proximity to mineral resources, this has led to rural industrialization, visible as an increase in the number of people in an area, which indicates an influx of migrants. When this happens, there is an upsurge in migration to rural areas and pressure on land and water resources from agricultural activities, which affects the livelihoods of migrants. This study seeks to identify migration behaviour towards rural industrial areas in Ghana and Nigeria using remote sensing proxies. The method will use several remote sensing products such as Landsat, Copernicus datasets, the Hansen Global Forest dataset, WorldPop and the JRC Global Human Settlement Layer dataset. The Random Forest classifier will be used to generate a land cover map of the selected areas with Copernicus and Landsat datasets. The expected result has the potential to demonstrate that Copernicus data, WorldPop and Hansen Forest Cover data can be useful proxies for population and migration studies. Moreover, the significant changes in land use and land cover monitored in the industrial areas over the past 20 years reveal certain trends of the industrialization era in West Africa. The research has the potential to produce effective and accurate methods for identifying the pull effects of industries in rural areas. This is essential for the implementation of policies for improved infrastructure, improved labour laws, good health and decent wages.
Global land cover mapping has aided monitoring of the complex Earth’s surface and provided vital information to understand the interactions between human activities and the natural environment. Most global land cover products are provided with discrete classes, indicating the most dominant land cover class in each pixel. Fraction mapping, which expresses the proportion of each land cover class in each pixel, is able to characterize heterogeneous areas covered by multiple land cover classes. However, land cover fraction maps have shown unrealistic, inconsistent year-to-year changes, which makes it difficult to detect robust trends. To obtain more accurate and reliable fraction maps, temporal information can be used to correct false changes and improve the consistency of time series.
In this study, such an approach is implemented by using a Markov chain model as a postprocessing step. Based on Landsat 8 imagery and Random Forest (RF) regression, initial fraction maps are created on a global scale for the years 2015 to 2018, following the approach of Masiliūnas et al. (2021). The RF regression model is trained on over 150,000 reference points provided by the Copernicus Global Land Service Land Cover project (CGLS-LC100) (Buchhorn et al., 2020). A Markov chain model is then applied to the fraction maps to smooth the time series. The transition probabilities of the Markov chain model are trained on over 30,000 reference points that contain multitemporal fraction data. Moreover, a recurrent RF regression model, which also incorporates temporal information, has been implemented as a stronger baseline method.
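One way such a postprocessing step can look, as a minimal sketch: propagate last year's smoothed fractions through the learned transition matrix and reconcile them with the current year's observed RF fractions. The multiply-and-renormalise update is an assumption; the study's exact formulation may differ.

```python
import numpy as np

def smooth_fractions(yearly_fractions, transition):
    """Temporal smoothing of one pixel's land cover fractions with a Markov
    chain prior (forward-filter style sketch).
    yearly_fractions: (n_years, n_classes) initial RF-regression fractions
    transition: (n_classes, n_classes) row-stochastic transition matrix."""
    smoothed = [yearly_fractions[0]]
    for obs in yearly_fractions[1:]:
        predicted = smoothed[-1] @ transition  # propagate last year's state
        posterior = predicted * obs            # reconcile with observation
        posterior /= posterior.sum()           # renormalise to sum to 1
        smoothed.append(posterior)
    return np.array(smoothed)

# Example: 3 classes, 4 years, with a sticky transition matrix
T = np.array([[0.9, 0.05, 0.05], [0.05, 0.9, 0.05], [0.05, 0.05, 0.9]])
obs = np.array([[0.7, 0.2, 0.1], [0.2, 0.6, 0.2], [0.7, 0.2, 0.1], [0.6, 0.3, 0.1]])
print(smooth_fractions(obs, T))  # the year-2 dip is damped by the prior
```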
An accuracy assessment has been executed with a subpixel confusion-uncertainty matrix to check the performance of the models’ output. All fraction maps are validated on over 28,000 reference points that contain multitemporal fraction data, and also account for possible change areas (Tsendbazar et al., 2021). All the land cover fraction reference data are provided by the CGLS-LC100 project. The fraction maps obtained by the Markov chain model are compared to the fraction maps produced by the recurrent RF-regression model.
Based on promising results in other studies, it is expected that a Markov chain model will significantly improve the accuracy and consistency of the land cover fraction maps, with less year-to-year unrealistic changes. These anticipated results could stimulate the use of fraction maps for detecting gradual land cover changes over time, which would be relevant for monitoring forests, biodiversity and land degradation.
Buchhorn, M., Lesiv, M., Tsendbazar, N.-E., Herold, M., Bertels, L., & Smets, B. (2020). Copernicus Global Land Cover Layers—Collection 2. Remote Sensing, 12(6), 1044. https://doi.org/10.3390/rs12061044
Masiliūnas, D., Tsendbazar, N.-E., Herold, M., Lesiv, M., Buchhorn, M., & Verbesselt, J. (2021). Global land characterisation using land cover fractions at 100 m resolution. Remote Sensing of Environment, 259, 112409. https://doi.org/10.1016/j.rse.2021.112409
Tsendbazar, N.-E., Tarko, A., Li, L., Herold, M., Lesiv, M., Fritz, S., & Maus, V. (2021). Copernicus Global Land Service: Land Cover 100m: version 3 Globe 2015-2019: Validation Report. Zenodo. https://doi.org/10.5281/zenodo.4723975
An Earth Observation approach for monitoring and mapping the spatial distribution of bird habitats around the Irish Sea
Walther C.A. Camaro Garcia1,2, Fiona Cawkwell1
1. Geography Department; University College Cork (UCC), Cork, Ireland
2. MaREI, the SFI Research Centre for Energy, Climate and Marine; Environmental Research Institute (ERI), University College Cork (UCC), Ringaskiddy, Co. Cork, Ireland
The Irish Sea climate is changing, in line with regional and global trends, presenting a threat to resident and migratory marine species whose conservation depends on the preservation and maintenance of coastal habitats.
As a response to those challenges, the ECHOES (Effect of climate change on bird habitats around the Irish Sea) project, funded by the European Regional Development Fund (Ireland Wales INTERREG Programme), seeks to address how climate change will impact coastal bird habitats of the Irish Sea, and what effect this could have on society, the economy, and shared ecosystems in both Ireland and Wales.
Satellite imagery is a key source of data for monitoring and mapping the spatial distribution of key habitats for the Greenland White-fronted Goose and the Eurasian Curlew.
Initially, a time series of cloud-free Sentinel-2 images covering the four seasons from Autumn 2019 to Summer 2020 was identified for the study sites on the west coast of Wales and the south-east coast of Ireland. Three radiometric indices were calculated to capture habitat distribution and dynamics. The Normalized Difference Vegetation Index (NDVI - Sentinel-2 bands 4 and 8) highlights the condition and seasonal variation of the vegetation. The Structure Insensitive Pigment Index (SIPI - Sentinel-2 bands 1, 4 and 8) is used to identify the high variability in vegetation structure for some particular habitats. Finally, the Normalized Difference Water Index (NDWI - Sentinel-2 bands 3 and 8) is used to identify the presence of water in the estuary areas linked to tidal variation. Using field points of the key habitats, a random forest (RF) classification was run on the image stack for each site and independently validated with additional field information.
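The three indices reduce to simple band arithmetic; a sketch, assuming reflectance arrays for the Sentinel-2 bands named above (the SIPI band pairing follows the abstract, using the common (NIR - blue)/(NIR - red) form):

```python
import numpy as np

def ndvi(b4_red: np.ndarray, b8_nir: np.ndarray) -> np.ndarray:
    """NDVI from Sentinel-2 bands 4 (red) and 8 (NIR)."""
    return (b8_nir - b4_red) / (b8_nir + b4_red)

def sipi(b1_coastal: np.ndarray, b4_red: np.ndarray, b8_nir: np.ndarray) -> np.ndarray:
    """SIPI from Sentinel-2 bands 1, 4 and 8, in its common
    (NIR - blue) / (NIR - red) formulation."""
    return (b8_nir - b1_coastal) / (b8_nir - b4_red)

def ndwi(b3_green: np.ndarray, b8_nir: np.ndarray) -> np.ndarray:
    """(McFeeters) NDWI from Sentinel-2 bands 3 (green) and 8 (NIR)."""
    return (b3_green - b8_nir) / (b3_green + b8_nir)
```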
In order to capture change in the habitats over the last 20 years, time series of Landsat and Sentinel-2 images were identified using the same criteria as above. The radiometric indices used in the first phase of this study were calculated for the time series and the per-pixel trajectories plotted. Land cover changes were classified into temporary change (for example, due to tidal state) and directional change, with the latter further divided into abrupt change (e.g., conversion of wetlands into agriculture) and trend change (e.g., tree growth over years) for each of the different indices. A variety of statistical approaches was explored to determine the dynamics of the different study areas over the past two decades.
Monitoring land-cover change dynamics is a crucial but challenging task for understanding and minimizing the anthropogenic impact on climate change and ecosystem biodiversity. Existing long time series of remote sensing data provide relevant information for observing land-cover change worldwide. Most current state-of-the-art methods for land-cover change detection rely on comparing pre- and post-change land cover maps. Due to noise in both the pre and post maps, those approaches are only suitable for providing change statistics over vast areas. Even though such statistics are essential to understand change tendencies, they cannot be used to obtain change locations at a precise spatial level, which is vital for local ecosystem management. Providing spatially accurate land-cover change maps therefore requires considering temporal constraints between acquisitions to avoid false alarms or missed detections. In particular, differences in atmospheric conditions or acquisition configurations substantially impact optical images and, consequently, change detections, introducing specific border effects.
This paper addresses the use of different methodologies to reduce land-cover change detection errors. First, to account for the internal spatial variability of some land-cover classes, such as logged or degraded forest, we use an autocontext approach (Derksen et al. 2020) based on multi-scale SLIC segmentation coupled with supervised random-forest classification. In addition, instead of standard wall-to-wall land-cover change map production, which introduces border artefacts into the change map, we use the posterior confidence of the random-forest classes. The confidence change map is obtained by crossing the random-forest confidence of one class in one period with that of the other classes in the other period, as sketched below. Such a confidence change map allows relevant thresholds to be applied, reducing change artefacts.
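A minimal sketch of one possible reading of this confidence-crossing step (proba_t1 and proba_t2 are assumed per-class RF posterior probability arrays of shape (classes, rows, cols); the threshold value is illustrative):

```python
# Sketch: combine per-class RF posteriors from two dates into a change
# confidence, then threshold it instead of crossing hard class maps.
import numpy as np

def change_confidence(proba_t1, proba_t2):
    # Confidence that a pixel changed class: probability of class i at t1
    # times probability of a different class j at t2, maximized over i != j.
    n = proba_t1.shape[0]
    conf = np.zeros(proba_t1.shape[1:])
    for i in range(n):
        for j in range(n):
            if i != j:
                conf = np.maximum(conf, proba_t1[i] * proba_t2[j])
    return conf

change_map = change_confidence(proba_t1, proba_t2) > 0.5  # tunable threshold
```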
Experiments are conducted on a region near the Cotriguaçu municipality in Mato Grosso state, Brazil, near one of the deforestation fronts of the Amazonian forest. This area proves especially well suited to demonstrating the interest of the approach, as it contains different states of forest degradation linked to a three-stage deforestation process for cattle farming that can extend over several months: the forest is first burnt, then part of the trees are removed, and finally the area is cleared.
The method is tested on cloud-free Sentinel-2 data from 2018 to 2020 to compare results between different intervals and capture all the main changes in the area over the period concerned. Results demonstrate the value of multi-scale SLIC segmentation for capturing heterogeneous classes such as strongly disturbed forest and logged forest. In addition, the change analysis provided by crossing the posterior confidences performs well in improving change detection accuracy and reducing border effects.
Ref: Derksen, D., Inglada, J., & Michel, J. (2020). Geometry aware evaluation of handcrafted superpixel-based features and convolutional neural networks for land cover mapping using satellite imagery. Remote Sensing, 12(3), 513.
In the framework of the ESA-funded research project SInCohMap (“Sentinel-1 Interferometric Coherence for Vegetation and Mapping”, sincohmap.org), undertaken from 2017 to 2020, a large number of tests and options were analyzed regarding the exploitation of the interferometric coherence derived from Sentinel-1 data in multitemporal land cover and vegetation mapping.
It was demonstrated that time series of coherence provide information complementary to backscatter intensity, hence contributing to improved classification both alone and in combination with intensity. Moreover, the coherence measured in the VV channel contributes more than the coherence measured in the VH channel, contrary to backscatter, so the use of both polarimetric channels in classification is recommended. As a third key conclusion of that project, it was found that the shortest temporal baseline (i.e., 6 days) outperforms all other possible temporal baselines and, in addition, that there is no significant improvement in the results when further temporal baselines are added to the 6-day one as input features.
All these conclusions were drawn from experiments carried out at three different test sites with diverse class sets and distributions: the South Tyrol alpine environment (Italy), the Doñana wetland and crops environment (Spain), and the West Wielkopolska forest/agricultural/urban environment (Poland). Moreover, a specific case study on crop-type mapping was performed (Mestre-Quereda et al. 2020). Experiments included many different classification algorithms and strategies, as detailed in Jacob et al. (2020), demonstrating the robustness of the project outcomes.
That project is currently being extended by exploring three new aspects related to the same topic:
A) The added value for classification of combining ascending and descending acquisitions, since they offer different observation geometries of the same scene as well as different acquisition times (e.g., 6 am vs 6 pm in Europe).
B) Added value of the combination of Sentinel-1 coherence with Sentinel-2 optical imagery.
C) Potential usage of 6-day Sentinel-1 coherence for forest mapping and characterization in temperate and boreal regions.
Based on the results obtained with these experiments, recommendations will be presented for obtaining maximum performance in land cover and vegetation mapping by multi-track Sentinel-1 (ascending and descending) and by combination of Sentinel-1 and Sentinel-2 data.
References
A. Jacob, et al. “Sentinel-1 InSAR Coherence for Land Cover Mapping: A Comparison of Multiple Feature-Based Classifiers,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 13, pp. 535-552, January 2020.
A. Mestre-Quereda, et al. “Time Series of Sentinel-1 Interferometric Coherence and Backscatter for Crop-Type Mapping,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 13, pp. 4070-4084, July 2020.
The Callisto platform provides a highly interoperable Big Data platform between DIAS infrastructures and Copernicus users, where the outcomes of machine learning solutions optimized on HPC and applied to satellite data are semantically indexed, linked to crowdsourced, geo-referenced and distributed data sources, and served to humans in Mixed Reality environments, allowing virtual presence and situational awareness in any desired area of interest, augmented by Big Data analytics from state-of-the-art and scalable Deep Learning solutions. Callisto integrates Copernicus data, already indexed in a standard way on DIAS platforms such as ONDA-DIAS, and utilises HPC infrastructures for enhanced scalability. Complementary distributed data sources include Galileo signals from mobile applications and from video recordings on Unmanned Aerial Vehicles (UAVs); Web and social media data and in situ data are also available on the Callisto platform. On top of all these data sources, Artificial Intelligence (AI) technologies are applied to extract meaningful knowledge such as concepts, change detection, activities, events, 3D models, videos and animations for the user community. AI methods are also executed at the edge, offering enhanced scalability and timely services.
In the frame of the Callisto project, there are four use cases. In one of them (PUC4), the European Union Satellite Centre (SatCen) is responsible for the development and implementation of a novel model framework for Land Border Change Detection. In this use case, an Area of Interest (AOI) will be defined with eight provisional segments as potential geographic zones for continuous monitoring (the AOI is not a limited 10x10 km image footprint, but rather the whole EU land border, where signals are continuously collected to explore relevant change). PUC4 will introduce a cueing approach in Imagery Intelligence (IMINT) Copernicus services, allowing Activity-Based Intelligence (ABI) to operate at scale, discovering patterns (i.e. events) in several satellite imagery datasets using Machine Learning algorithms able to recognise “relevant” and “non-relevant” land changes, based on a proper user-centric definition. If the signals are considered critical, follow-up analysis can take place using VHR images provided by satellites (through the DIAS infrastructure) or UAVs, whose spatial resolution allows more precise recognition of objects. Moreover, semantic technologies (semantic indexing, geolocalisation in text, semantic search, etc.) will be applied to those areas in order to extract meaningful information and provide added value.
In terms of temporal analysis, land changes can be observed over a multi-temporal gap (more than one year) or in the short term (less than one month) in order to identify relevant change detection patterns. The main results will be three different outputs: Product 1, rasterised relevant-change-detection probability layers at the EU external borders, based on Sentinel-2 data (the layers can be updated as new acquisitions are obtained at various locations, delivering a dynamic overview of the activity detected); Product 2, relevant land change detection alerts delivered to the user by various apps (e.g., email, WhatsApp message, etc.) as new “events” are detected (the user can adjust the sensitivity levels of the alerts according to the area); and Product 3, in which, based on validated alerts, the system generates and proposes a flight plan for a future UAV mission, seamlessly integrating satellite and drone surveillance for improved awareness.
Forest species maps have great potential in forest management: they can enhance forest inventory estimates and serve as auxiliary information to construct new thematic maps or to support the application of species-specific regression models. The use of remotely sensed data facilitates the construction and updating of these maps at different scales. Despite the widespread use of these maps, the uncertainty associated with them, and the consequences of that uncertainty, are often ignored.
The goal of this study was to estimate the effects of the uncertainty of forest species maps. A forest species map representing the six main forest species (Fagus sylvatica, Pinus halepensis, Pinus nigra, Pinus sylvestris, Quercus pyrenaica/faginea, and Quercus ilex) of La Rioja (Spain) was constructed using random forest models, spectral data from Landsat and auxiliary information. To estimate the uncertainty of the map, bootstrapping techniques were implemented. Each new forest species map (one per bootstrap iteration) was compared with the original map to determine the population units for which the predicted forest species changed and those that retained the original classification, as sketched below.
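A minimal sketch of this bootstrap stability procedure, assuming reference features X, labels y and a map-wide feature matrix X_map (all hypothetical names, not the study's code):

```python
# Sketch: bootstrap the reference data, refit the classifier, and measure
# per-pixel stability as the fraction of iterations that reproduce the
# original predicted species.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import resample

original = RandomForestClassifier(n_estimators=200).fit(X, y).predict(X_map)

n_iter = 2000                      # number of bootstrap iterations, as in the study
stable = np.zeros(len(X_map))
for _ in range(n_iter):
    Xb, yb = resample(X, y)        # bootstrap sample with replacement
    pred = RandomForestClassifier(n_estimators=200).fit(Xb, yb).predict(X_map)
    stable += (pred == original)

pixel_stability = stable / n_iter  # share of iterations with unchanged class
```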
The percentages of population units whose predicted forest species did not change over the 2,000 bootstrap iterations were calculated and designated as the percentage of stable pixels, or pixel stability. The standard errors (SE) of the area estimates were generally less than 10%, with the exception of Pinus halepensis, which reached an SE of 20%. Greater SE estimates were at least partially attributed to species occurring less frequently among the six main forest species analyzed and to their more open distributions. The percentages of stable pixels were strongly positively correlated with the SE estimates. For most species, more than 80% of the pixels were always classified as the same forest species, although for Pinus halepensis and Pinus nigra only 67% and 79% of the pixels, respectively, remained stable.
The results of this study demonstrate that the effects of uncertainty in the forest species map are not negligible, and that ignoring them could jeopardize the reliability of derived products.
Identifying recent surface dynamics in the Namib Desert (Namibia) using Sentinel-1, Sentinel-2 and Landsat time series
Tobias Ullmann(1), Eric Möller(1), Felix Henselowsky(2), Bruno Boemke(3), Janek Walk(3), Georg Stauch(3)
(1) Institute of Geography and Geology, University of Würzburg, Am Hubland, D-97074 Wuerzburg, Germany
(2) Institute of Geography, Johannes Gutenberg-University Mainz, Johann-Joachim-Becher-Weg 21, D-55099 Mainz, Germany
(3) Department of Geography, RWTH Aachen University, Templergraben 55, D-52056 Aachen, Germany
Covering about 40% of the world’s land area and home to more than 30% of the world’s population, drylands are among the most important environments on our planet. At the same time, deserts and desert margins react particularly strongly to climatic changes, while their sensitivity and response times are largely unknown. Despite the general scarcity of water in arid regions, rare but strong rainfall events act as important drivers of geomorphological activity, expressed in sediment erosion, transport and deposition. However, such events and the induced surface dynamics are difficult to capture in space and time due to their inherent heterogeneity and the complexity of the process-response systems involved. In this context, arid environments are suitable targets for Earth-observation-based research, as they are usually characterized by low anthropogenic disturbance and low cloud coverage. Satellite imagery therefore provides information on the Earth’s surface and its landforms, offering the unique opportunity to visualize, recognize and, potentially, quantify surface changes over time and across vast areas. The Sentinel missions mark a new epoch in this respect, as they allow processes in arid regions to be studied at high spatial and uniquely high temporal resolution. At the same time, these missions allow passive multispectral and Synthetic Aperture Radar (SAR) imagery to be employed synergistically, which opens new and promising perspectives for research, especially in the field of arid geomorphology.
This study presents first results on the characterization of recent surface dynamics in the Namib Desert via joint analyses of Sentinel-1/2 time series, the Landsat archive and in situ records. Investigations focus on the Kaukausib Catchment (southern Namib), located at the transition between tropical and extratropical climate influences. In this region, extraordinary rainfall events can lead to morphodynamics of high magnitude, coupled with tremendous changes in vegetation cover for short periods. The occurrence and spatial dimension of such events were revealed by a Google-Earth-Engine-based analysis of the entire Landsat archive. Preliminary results indicate an event recurrence of around 6 to 11 years over the last 35 years and further point to changes in periodicity over time, with a shift towards lower frequencies in the last decade. Focusing on the most recent event in 2018, time series of Sentinel-2 spectral indices and of Sentinel-1 SAR intensity and interferometric (InSAR) coherence were analyzed to locate and map morphodynamic activity within the catchment at high spatial and temporal resolution. Preliminary results reveal a clear response of several remote sensing features to morphodynamic activity; for example, a significant drop in InSAR coherence is found over active channels, which makes it possible to identify active drainages and, eventually, to trace the connectivity of morphological provinces/units within the catchment.
These initial results highlight the added value of remote sensing products for identifying short- and medium-term surface processes. The latest generation of Earth observation products holds high potential for improving the understanding of geomorphological frequency-magnitude relationships in arid regions under global climate change.
The use of remote sensing data from different observation domains is an undeniable asset for producing high-quality land cover products.
Indeed, satellites cover large areas of interest regularly and with consistent quality. As a consequence, research laboratories are now massively exploiting these data, which offer new possibilities, particularly through the exploitation of long time series.
Satellite data can be of different but often complementary natures, which broadens the possible fields of application (water management, snow cover, crop yield, urbanization, etc.).
In addition to these new data, there are recent technological developments (or older ones now usable thanks to the evolution of computing capacities, such as neural networks), as well as means of service provision and dissemination, that allow these applications to be carried out over longer periods (long time series computed more rapidly) and over larger areas at different scales, sometimes simultaneously (station-level, local, national, continental, global).
iota2, developed by CESBIO and CNES with the support of CS GROUP, is a response to the growing demand for an open-source tool allowing the production of land cover maps at national scale, generic enough to be adapted to the different objectives of users.
In addition, this project ensures the production of an annual land use map of metropolitan France (https://doi.org/10.3390/rs9010095) with a satisfactory level of quality, thus proving its operational capacity.
iota2 integrates several families of supervised algorithms for the production of land use maps. Pixel-based supervised algorithms (e.g., Random Forests or Support Vector Machines) can be parametrised by users through a simple configuration file. iota2 also offers the user the option of using a deep learning model.
In addition to the pixel-based approaches, contextual approaches are also proposed, with Autocontext [1] and OBIA (Object-Based Image Analysis). Autocontext, based on RF, takes into account the context of a pixel in a window around its position. The OBIA approach exploits an input segmentation to classify objects directly.
Beyond supervised classification, iota2 is also able to produce indicator maps (biophysical variables), either by supervised regression or by using user-provided processors, further diversifying its possible uses.
One major strength of iota2 is its ability to deal with huge amounts of data: for instance, the OSO product (https://theia.cnes.fr/atdistrib/rocket/#/collections/OSO/2327b748-a82c-5933-afb0-087bbfeff4cd) is generated from a stack of all available Sentinel-2 data over France, without any landscape discontinuity due to the Sentinel-2 grid. This is possible thanks to the use of OTB, a high-performance library dedicated to remote sensing algorithms, developed by CNES (the French national space agency) and CS GROUP. Another point of interest is its capability to produce a land cover map wherever Sentinel-2 data and a ground truth are available (e.g., https://agritrop.cirad.fr/597991/1/Rapport_Intercomparaison_iota2Moringa.pdf).
1. Derksen, D., Inglada, J., & Michel, J. (2020). Geometry aware evaluation of handcrafted superpixel-based features and convolutional neural networks for land cover mapping using satellite imagery. Remote Sensing, 12(3), 513. http://dx.doi.org/10.3390/rs12030513
The advent of openly available Landsat and Sentinel data has democratized the field of land cover classification, as evidenced by the rapidly growing corpus of accurate high-resolution land cover products. We present our contribution to this field: a complete framework that explores the boundaries of what is possible with open data and open-source software, aspiring to generate land cover predictions that are as useful as possible to as many users as possible. We do this by classifying land use/land cover with a large legend (43 classes) on a long time series (20 consecutive years). The framework consists of 1) an analysis-ready spatiotemporal data cube of Europe, spanning 20 years at 30m resolution; 2) over 8 million harmonized land cover training points derived from LUCAS and CORINE land cover data; and 3) a spatiotemporal ensemble machine learning workflow mapping 43 land cover/land use classes for every year between 2000 and 2020 at 30m resolution. The workflow includes hyperparameter optimization, spatial 5-fold cross-validation, and validation on an independent stratified sample-derived dataset. Our model outputs probabilities and uncertainty per class, all of which are openly available for customized use cases. We showcase how these probabilities can be translated into trends that show gradual long-term land cover/land use dynamics instead of relying only on hard class maps (see the sketch below). The entire workflow is implemented in the new open-source eumap Python package available at gitlab.com/geoharmonizer_inea/eumap, while all land cover probabilities, classifications and auxiliary data are available through the Open Data Science Europe Viewer at maps.opendatascience.eu.
Our strict accuracy assessment indicates that classifying 43 mixed land use/land cover classes remains a difficult task, as illustrated by the much higher performance obtained when the predicted classes are aggregated to a legend more optimized for remote sensing-based classification. In the talk, we will describe how we preprocessed the 200+ covariates, how we created our training dataset, and the design of our machine learning workflow. Lastly, we will discuss the shortcomings, possible solutions, and future ambitions of this evolving project.
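As an illustration of the probabilities-to-trends idea (a sketch with hypothetical array names, not the eumap API), a per-pixel linear trend can be fitted to the annual probability of a single class:

```python
# Sketch: fit a per-pixel linear trend to the annual probability of one
# class, yielding gradual dynamics rather than hard class transitions.
import numpy as np

years = np.arange(2000, 2021)                       # 21 annual maps
# prob: hypothetical (years, rows, cols) probability of e.g. the forest class
A = np.vstack([years - years.mean(), np.ones_like(years)]).T
flat = prob.reshape(len(years), -1)                 # (years, pixels)
slope = np.linalg.lstsq(A, flat, rcond=None)[0][0]  # probability change per year
trend_map = slope.reshape(prob.shape[1:])           # per-pixel trend surface
```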
Validation is an integral part of any Earth Observation (EO) based map production chain and is the primary information available to map users, providing details about the quality and applicability of a map to their specific application. Today, the Copernicus Sentinels, as well as other global monitoring platforms, are used to measure different variables and develop a wide range of indicators. From Essential Climate Variables (ECV) to Sustainable Development Goals (SDG) to Key Landscapes for Conservation (KLC), everyone now has access to free and open EO imagery as well as tools to create maps at different scales, focusing on their own areas and domains of interest. This map democracy comes at a price because, as EO practitioners know, not all maps are created equally well. Furthermore, novel techniques including the Machine Learning (ML) and Artificial Intelligence (AI) paradigms make it easy not only to produce high-quality maps but also to produce fakes. Such black-box processing techniques may not always allow map users to understand how the products they are using were produced, which is not necessarily a problem per se. However, it is imperative that the quality of the information made available through the map product is well documented and properly quantified.
This has led to the development of a validation framework (https://doi.org/10.1080/22797254.2021.1978001) based on the Copernicus High-Resolution Hot Spot Monitoring activity (C-HSM), which is delivering global datasets of Key Landscapes for Conservation (KLC) for specific sites characterized by pronounced anthropogenic pressures that require high mapping accuracy. Furthermore, evaluating, assessing and quantifying changes in land cover is one of the most important functions of EO-based map making and one of the main drivers behind sustainable development policies. Validation and map quality are fundamental to understanding and quantifying the level of change across different types of landscapes around the world due to anthropogenic pressures. Measuring the degradation/restoration of landscapes is based on variations in both time and space and therefore adds another level of complexity to the issue of trust in the maps we use.
The quality assurance and assessment framework was developed to support EO-based maps destined for policy and decision making. The reality is that not all maps can undergo such rigorous validation and accuracy assessment. This talk will explore and discuss the current state of the art in quantitative accuracy assessment in the context of digital map production, together with insights into the needs of both human and machine map producers and users, taking into account the goals for which the map will be applied.
In the context of climate change, land cover maps are important for many scientific and societal applications. Nowadays, an increasing number of satellite missions generate huge amounts of free and open data. For instance, the Copernicus Earth Observation programme with the Sentinel-2 mission provides satellite image time series (SITS) at high resolution (up to 10m) with a high revisit frequency (every 5 days). Sentinel-2 sensors acquire 13 spectral bands ranging from the visible to the shortwave infrared (SWIR) wavelengths. At the scale of a country like Metropolitan France, one year of acquisitions corresponds to around 15 TB of data. These SITS, which combine high temporal, spectral and spatial resolutions, provide relevant information about vegetation dynamics. By using machine learning algorithms, SITS allow the automatic production of land cover maps over large areas [1]. Although the state-of-the-art Random Forests (RF) classifier provides good classification accuracy, it does not take into account the spatio-spectro-temporal structure of the data: e.g., modifying the order of the temporal acquisitions would not change the prediction of the RF.
Gaussian processes (GP) are Bayesian kernel methods which allow prior knowledge of the data structure to be encoded using a kernel function [2]. While GP are widely used in geospatial data analysis, they have seldom been used in SITS analysis. Recent studies demonstrated the effectiveness of GP regression for vegetation parameter retrieval over limited areas [3], [4]. However, their original formulation scales poorly with the data size (GP involve operations that scale cubically with the number of training samples), which hinders their use over larger areas [5].
In the last decades, sparse and variational techniques have been successfully proposed in computer vision to alleviate these computational issues [6], [7]. By introducing a small set of inducing variables, sparse methods approximate the model and thus reduce the complexity. Furthermore, variational methods use a variational lower bound in order to optimize the locations of the inducing points. These methods, combined with stochastic gradient descent, allow GP to scale to very large data sets, both for regression and classification [8].
In this work, we evaluate the performance of variational sparse GP for SITS classification at country scale, as compared to RF. The investigated GP model, proposed by [8], is based on a multi-output regression where GP are linearly combined and transformed by the softmax observation model into class membership probabilities. Stochastic variational methods are used to learn all the model parameters. We use a sum of kernel functions to account for the spatial features in addition to the spectro-temporal features. The optimal weights between spatial and spectro-temporal features are found automatically during the learning process.
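As an illustration, the following sketch sets up such a sparse variational GP classifier with a sum kernel using GPflow; the library choice, variable names and sizes are assumptions, not details given in the abstract:

```python
# Sketch: sparse variational GP classification with a spatial + spectro-
# temporal sum kernel, trained by stochastic variational inference.
import numpy as np
import tensorflow as tf
import gpflow

# X_train: (N, D) with the first two columns the (x, y) coordinates and the
# remaining columns the spectro-temporal features; y_train: (N, 1) int labels.
n_classes, n_inducing, d_spatial = 23, 500, 2
d_total = X_train.shape[1]

# Sum kernel: one term on the spatial dims, one on the spectro-temporal dims;
# the learned kernel variances act as the weights between the two groups.
kernel = (gpflow.kernels.SquaredExponential(active_dims=list(range(d_spatial)))
          + gpflow.kernels.SquaredExponential(
                active_dims=list(range(d_spatial, d_total))))

Z = X_train[np.random.choice(len(X_train), n_inducing, replace=False)].copy()
model = gpflow.models.SVGP(
    kernel,
    gpflow.likelihoods.Softmax(n_classes),   # softmax observation model
    inducing_variable=Z,
    num_latent_gps=n_classes,                # one latent GP per class
)

# Stochastic optimization on minibatches of the training set
dataset = (tf.data.Dataset.from_tensor_slices((X_train, y_train))
           .repeat().shuffle(100_000).batch(1024))
loss = model.training_loss_closure(iter(dataset))
opt = tf.keras.optimizers.Adam(1e-3)
for _ in range(10_000):
    opt.minimize(loss, model.trainable_variables)
```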
To assess the performance of GP versus RF, experiments were conducted on 27 Sentinel-2 tiles in the south of France. All available acquisitions for 2018 were linearly resampled onto a common set of virtual dates at a 10-day interval [1]. For each pixel, 10 spectral bands at 10m ground sampling distance and 3 spectral indices (NDVI, NDWI, brightness) were used. The volume of data corresponds to around 5 TB. The reference data comprise 23 land cover classes divided into 8 ecoclimatic regions, as described in [1]. For each ecoclimatic region, we split the data into 3 different datasets: training, validation and testing. The number of samples per class was balanced (4,000 samples per class for the training set, 1,000 for validation and 10,000 for testing). We repeated the procedure 11 times with different datasets to make sure performance results were correctly evaluated. Classical classification metrics such as overall accuracy, kappa and F-score were used.
First, we compared the performance of training an independent model for each ecoclimatic region (stratification) against training a unique model over the full area. Then, we assessed the effect of taking spatial information into account in the classification model. For RF, longitude and latitude were used as additional features; the GP used a sum of kernel functions as described above.
In all configurations, the overall accuracy of the GP model was 2 points above the RF model (i.e. 0.78 vs 0.76). Stratification (training independent models) gave better performance than training a unique model (overall accuracy 1 point higher for both GP and RF). Finally, adding spatial information increased the overall accuracy by 1 point for the RF and around 2 points for the GP. The results showed that Gaussian processes take better account of spatial correlation. Results with larger datasets will be presented at the conference.
REFERENCES
[1] J. Inglada, A. Vincent, M. Arias, B. Tardy, D. Morin, and I. Rodes, “Operational High Resolution Land Cover Map Production at the Country Scale Using Satellite Image Time Series,” Remote Sensing, vol. 9, p. 95, Jan. 2017.
[2] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning. Cambridge, MA: MIT Press, 2006.
[3] J. Estévez, J. Vicent, J. P. Rivera-Caicedo, P. Morcillo-Pallarés, F. Vuolo, N. Sabater, G. Camps-Valls, J. Moreno, and J. Verrelst, “Gaussian processes retrieval of LAI from Sentinel-2 top-of-atmosphere radiance data,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 167, pp. 289–304, Sept. 2020.
[4] J. Verrelst et al., “Machine learning regression algorithms for biophysical parameter retrieval: Opportunities for Sentinel-2 and -3.”
[5] G. Camps-Valls, J. Verrelst, J. Munoz-Mari, V. Laparra, F. Mateo-Jimenez, and J. Gomez-Dans, “A Survey on Gaussian Processes for Earth Observation Data Analysis: A Comprehensive Investigation,” IEEE Geoscience and Remote Sensing Magazine, vol. 4, no. 2, pp. 58–78, 2016.
[6] M. Titsias, “Variational learning of inducing variables in sparse Gaussian processes,” in Proc. 12th Int. Conf. on Artificial Intelligence and Statistics (AISTATS), PMLR vol. 5, pp. 567–574, 2009.
[7] J. Hensman, N. Fusi, and N. D. Lawrence, “Gaussian processes for big data,” in UAI, AUAI Press, 2013.
[8] A. G. Wilson, Z. Hu, R. Salakhutdinov, and E. P. Xing, “Stochastic Variational Deep Kernel Learning,” arXiv:1611.00336 [cs, stat], Nov. 2016.
Rigorous exploitation of the ESA Climate Change Initiative / Copernicus Climate Change Service annual map time series provides the first spatially explicit picture of land use and land cover change for the whole planet, based on daily observations over the last three decades, and delivers an estimate of the major anthropogenic land use changes at continental and global scale. The gross land cover change rate, observed annually in a wall-to-wall manner at 300m resolution, calls into question current land change assessments compiled from FAO national statistics.
The ESA CCI LC annual map time series spans 29 years, from 1992 to 2020, at a spatial resolution of 300 m for the entire planet. This unique long-term land cover time series was only possible by combining the global daily surface reflectance of five different observation systems while maintaining good consistency over time. The latter was identified as the top requirement by the climate modelling community, and innovative methods were developed accordingly for surface reflectance pre-processing and time series exploitation.
Globally consistent 7-day multispectral composites at 300 m and 1 km from 1992 to 2020 were generated from the L1 radiances of the complete archives of five types of sensors: the NOAA AVHRR instrument series providing the 1-km Long-Term Data Record (LTDR) v4, HRPT and LAC (1992-1999); the Vegetation instruments 1 and 2 aboard SPOT 4 and 5 (1999-2013); ENVISAT MERIS Level 1B at 300 m full (FR) and 1-km reduced (RR) resolutions; the Project for On-Board Autonomy Vegetation (PROBA-V) (2014-2019); and the Sentinel-3 A and B Ocean and Land Colour Instrument (OLCI) (2020).
The reprocessing of the five full mission archives makes it possible to calibrate and correct the multispectral radiances to surface reflectance according to the same standards, to geometrically align the entire time stack at pixel level, and to upgrade or replace the existing cloud screening algorithms of the respective missions to meet the strict land cover change detection requirements with regard to residual atmospheric or cloud shadow artefacts. The spatio-temporal consistency of the annual land cover maps is, in turn, built into the time series exploitation method by decoupling, on the one hand, the precise land cover mapping driven by spectro-temporal signatures and, on the other, the detection of land use and land cover change from one year to the next. Such an approach is supported by a comprehensive typology definition based on ISO standards: the ESA CCI land cover typology was defined using the ISO 19144 Land Cover Classification System (LCCS) developed by the United Nations (UN) Food and Agriculture Organization (FAO) to describe the 22 different land categories unambiguously and to be compatible with the IPCC land classes. These standards also allow the land use and land cover classes to be converted into the Plant Functional Type distributions required by climate models.
Mapping and monitoring of land cover play an essential role in effective environmental management and protection, estimation of natural resources, urban planning and sustainable development. The increasing demand for accurate and repeatable information on land cover and land cover change is driving the rapid development of advanced machine learning algorithms dedicated to land cover mapping from Earth Observation data. Free and open access to Sentinel-2 data, characterized by high temporal and spatial resolution, increases the potential of remotely sensed data to monitor and map land surface dynamics with high frequency and accuracy. The most common classification approach is to classify all classes simultaneously (the so-called flat approach), which does not always give highly accurate results. Despite a considerable number of published approaches to land cover classification, clearly separating some land cover classes, for example grasslands, arable land or wetlands, remains a challenge. To address these challenges, we examined a hierarchical approach to land cover mapping. The aims of this study are: a) to compare the results of flat and hierarchical classification, b) to examine whether hierarchical classification of Sentinel-2 data can improve the accuracy of land cover mapping and the delineation of complex classes, c) to identify the advantages and disadvantages of both approaches, and d) to assess the stability of the classification models. The study is conducted in the Lodz Province in central Poland. The land cover classification is performed on a time series of Sentinel-2 imagery acquired in 2020, using the pixel-based machine learning Random Forest (RF) algorithm. A set of national reference databases, such as the topographic database and the Forest Data Bank (BDL), was used to prepare the training and verification sampling plots. The following nine land cover classes are mapped: sealed surfaces, woodland broadleaved, woodland coniferous, shrubland, permanent herbaceous (grassland and pastures), periodically herbaceous (arable land), mosses and wetland, non-vegetated surfaces, and water bodies. The classification is carried out following two approaches: 1) all land cover classes are classified together (flat classification), and 2) a hierarchical approach in which classes are divided into groups and classified separately. The hierarchical approach first classifies the most stable land cover classes and then the most problematic ones. To assess the stability of the classification models, both classifications are performed iteratively. The obtained results confirm that the hierarchical approach gives more accurate results than the standard flat method. The median overall accuracy (OA) of the hierarchical classification was higher by 3-9 percentage points compared to the flat approach: the OA of the hierarchical classification reaches 93-99%, versus 90% for the flat approach. Furthermore, visual comparison of the land cover maps derived with the two approaches confirms that the hierarchical map is closer to reality. To assess the accuracy of the final land cover maps, independent verification was conducted using random sampling; the data were compared against Sentinel-2 mosaics and the national aerial orthophoto. The results of this independent verification confirmed the higher accuracy of the hierarchical approach.
For example, the mosses class achieved 100% user’s accuracy (UA) in the hierarchical classification and 82% in the flat classification, i.e. 18 percentage points higher. The largest difference in producer’s accuracy (PA) was observed for the sealed surfaces class: in the hierarchical approach PA was 92%, whereas in the flat approach it reached 74%. These results confirm the potential of hierarchical classification of Sentinel-2 data for improving the accuracy of land cover mapping; a minimal sketch of the two-stage idea is given below.
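The sketch below illustrates the two-stage hierarchical idea with hypothetical inputs (X_train, X_all, group_labels, fine_labels); it is an illustration, not the study's code:

```python
# Sketch: classify broad, stable groups first, then run a second classifier
# only on the pixels assigned to the problematic group.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stage 1: broad groups, e.g. 0=water, 1=sealed, 2=woody, 3=herbaceous/wetland
stage1 = RandomForestClassifier(n_estimators=500).fit(X_train, group_labels)
groups = stage1.predict(X_all)

# Stage 2: separate the hard classes (grassland, arable, wetland) within the
# herbaceous group only
mask = groups == 3
stage2 = RandomForestClassifier(n_estimators=500).fit(
    X_train[group_labels == 3], fine_labels[group_labels == 3])
fine = np.full(len(X_all), -1)
fine[mask] = stage2.predict(X_all[mask])   # fine labels for the hard group
```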
The research leading to these results has received funding from the Norway Grants 2014-2021 via the National Center for Research and Development - project InCoNaDa “Enhancing the user uptake of Land Cover / Land Use information derived from the integration of Copernicus services and national databases”.
Urban managers need information for urban territorial planning and monitoring. Traditional methods are based on visual interpretation of aerial photographs or field surveys, tasks which are time-consuming. Urban changes have been studied for several decades with remote sensing images (Herold et al., 2002; Hussain et al., 2013). With the democratization of access to very high spatial resolution images, urban managers need to detect and monitor the construction state of buildings in order to update their databases. For instance, the municipality of Strasbourg (Eurométropole de Strasbourg, EMS) needs to monitor the state of buildings currently being upgraded or created (ca. 250 to 350 building permits per year). This information is summarized in a database called the ‘Inventory of Located Buildings’ (ILB), updated by experts twice per year, often by ground survey. In order to provide information to urban managers, the image dataset should have very high spatial resolution and high temporal resolution (every six months or each year), and should be associated with elevation data to detect the beginning and the end of urban changes.
The objective of this work is to analyze these changes over the period 2017-2020 based on tri-stereoscopic Pléiades images acquired each year during the summer period. The ILB database is completed by adding information on observed changes between two dates, categorizing these changes into three classes: (1) "destruction", (2) "construction" and (3) "ongoing construction".
A supervised classification algorithm (ImCLASS; Déprez et al., 2020) is used to classify the evolution of urban buildings. ImCLASS is based on a Random Forest classification of a selection of features calculated from multispectral images and from indices derived from tri-stereo Pléiades Digital Surface Models (DSMs). The DSM-OPT web service of the ForM@Ter and THEIA data centres is parameterized to optimize the results for urban environments. Digital Height Models (DHMs) are calculated using NASA's Ames Stereo Pipeline software. The DHMs are validated by comparison to heights derived from an airborne LiDAR survey acquired by EMS; results show a median relative difference of less than 2 m (1.70 m) whatever the building height. ImCLASS allows the impact of the DHM on the classification results to be analyzed. The results show that the number of falsely classified construction sites increases in much larger proportions than the number of correct construction sites; however, performance results show that adding the height attribute to the classification process increases the number of correctly detected construction sites.
We examined the current status and dynamics of vegetation in the heavily polluted Norilsk industrial region since 1985. Change detection was performed in Google Earth Engine with maximum summer NDVI from cloud- and snow-masked imagery of the Landsat 5, 7 and 8 satellites. Statistical tests, including simple linear regression and Mann-Kendall trend analysis, were carried out on this data series; the change analysis is thus based on NDVI trends (a sketch of the workflow is given below). To better account for changes in tree and shrub cover, a similar analysis of NDMI was carried out. Analysis of the spatial structure of the trend showed that the maximum stable growth of both indices is observed southeast of Norilsk, in the Rybnaya River valley, the area most affected by pollution in the past. Validation using modern very high resolution images confirms the appearance of grass and shrubs in the areas of the strong positive trend. A similar study based on MODIS/Terra-Aqua data for 2000-2020 confirmed the existence of a significant NDVI trend in the Rybnaya River valley. Analysis of vegetation changes based on very high resolution images showed that the greatest increase in NDVI is observed for vegetation in ravines and gullies. These vegetation classes were confirmed in the field in 2021 and are attributed to climate warming in recent decades.
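An illustrative Google Earth Engine sketch of the annual maximum-summer-NDVI trend (restricted to Landsat 8 Collection 2 for brevity, with assumed QA_PIXEL masking and a hypothetical region geometry; the study combined Landsat 5, 7 and 8 and also applied Mann-Kendall tests):

```python
# Sketch: per-pixel linear trend of annual max summer NDVI in Earth Engine.
import ee
ee.Initialize()

def mask_and_ndvi(img):
    qa = img.select('QA_PIXEL')
    # Bits 3, 4, 5 of QA_PIXEL flag cloud, cloud shadow and snow
    clear = qa.bitwiseAnd((1 << 3) | (1 << 4) | (1 << 5)).eq(0)
    return img.normalizedDifference(['SR_B5', 'SR_B4']).rename('NDVI') \
              .updateMask(clear)

col = ee.ImageCollection('LANDSAT/LC08/C02/T1_L2').filterBounds(region)

def annual_max(year):
    year = ee.Number(year)
    summer = (col.filter(ee.Filter.calendarRange(year, year, 'year'))
                 .filter(ee.Filter.calendarRange(6, 8, 'month'))
                 .map(mask_and_ndvi))
    return summer.max().addBands(
        ee.Image.constant(year).float().rename('t'))

series = ee.ImageCollection(ee.List.sequence(2013, 2020).map(annual_max))

# linearFit expects the x band first: slope of NDVI vs year per pixel
trend = series.select(['t', 'NDVI']).reduce(ee.Reducer.linearFit())
```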
A map of contemporary vegetation cover with 20 classes has been compiled on the basis of 2021 field data and Sentinel-2 MSI imagery from 2015-2020. Accuracy assessment confirmed moderate to good quality of the map depending on the particular class, with better recognition in less polluted areas. Comparison of the 2021 vegetation map with 1997 field descriptions confirmed the trend of grass and shrubby vegetation growing in low terrain positions. The compiled map and field data can serve as a baseline for further vegetation monitoring in this rapidly changing region.
The THEIA Land Data and Services Centre (www.theia-land.fr) is a consortium of 12 French public institutions involved in Earth observation and environmental sciences (CEA, CEREMA, CIRAD, CNES, IGN, INRAE, CNRS, IRD, Irstea, Météo France, AgroParisTech, and ONERA). THEIA was initiated with the objective of increasing the use of space data by the scientific community and public actors. Its Scientific Expertise Centres (SEC) cluster research groups around various thematic domains. The "Urban" SEC gathers experts in multi-sensor urban remote sensing. Researchers in this group have structured their work around the development of algorithms for urban remote sensing using optical and SAR sensors, in order to propose "urban products" at three spatial scales: (1) the urban footprint, (2) urban fabrics and (3) urban objects. The objective of this poster is to present recent (>2019) advances of the Urban SEC at these three scales. For the first two, the proposed methods are adapted to the geographic context of the cities concerned (Western cities, Southern cities first, and Northern cities). For each spatial scale, the objective is to propose validated scientific products, available now or in the near term through the THEIA Land Services and Data Infrastructure.
At the macro scale (urban footprint), an unsupervised automated approach is currently under development at Espace-DEV (Montpellier), funded by a CNES project (TOSCA DELICIOSA). This method is derived from the FOTO algorithm originally developed to differentiate vegetation textures in HR and VHR satellite images (Couteron et al. 2006; Lang et al. 2019). It has been optimized and packaged into the FOTOTEX open-source Python library. The method is very well suited to areas with little or no urban settlement data, or with quickly growing informal settlements. No training dataset is required, and the urban footprint can be identified from a single satellite image as long as it is not covered by clouds. For Western cities where training datasets are available, the Urba-Opt processing chain, based on an automatic and object-oriented approach, has been deployed on HPC infrastructure and has produced annually (since 2018) an urban settlement product, available through the A2S dissemination infrastructure and on the Urban SEC of the THEIA land data and services infrastructure. Ongoing research between the LIVE and Espace-DEV labs focuses on using the FOTOTEX result as training data in the Urba-Opt processing chain, to propose an updated urban settlement product for Southern cities.
At the scale of urban fabrics, products are the subject of ongoing research at the LIVE lab. In the context of an ongoing PhD thesis (ANR TIMES) and a TOSCA project (CNES 2019-2022), Sentinel-2 single-date images are used to assess two semantic segmentation networks (U-Net), combined through feature fusion between a network trained from scratch and a network pre-trained on ImageNet. Three spectral or textural indices have been added to both networks in order to improve the classification results. The results showed a performance gain for the fusion methods. Research is ongoing to test Sentinel-1 imagery and temporal series for training in a deep architecture.
The IGN-LaSTIG (Univ. Paris Est) has focused on the use of Sentinel-2 and VHR mono-temporal SPOT products to retrieve land cover information related to urban density. First, images undergo U-Net-based semantic segmentation at urban object level to retrieve ‘topographic’ classes (buildings, roads, vegetation, etc.). Generalized information about urban fabrics is then derived from these land cover maps with another CNN architecture. Both a building density measure and a simplified Urban Atlas-like land cover map are calculated. The UMR ESPACE has focused on machine learning modelling of the evolution of Arctic (Yakutsk) and North-Eastern European (Baltic States and Kaliningrad) cities since the post-Soviet period at two scales: that of the built-up area, with high spatial resolution SPOT 6/7 images, and that of urban structures, based on Landsat 5 TM, Landsat 8 OLI and Sentinel-2 MSI images. Environmental (urban vegetation), economic (agricultural transformation) and morphometric indexes have been developed to characterize the processes of urban restructuring (densification, renovation) and expansion of post-Soviet cities. A comparative analysis of the machine learning algorithms used was carried out on the South-East Baltic cities to evaluate their performance.
At the scale of urban objects (3), a map of buildings with their functions is proposed by the TETIS laboratory. The study targets the retrieval of building footprints using deep convolutional neural networks for semantic segmentation, from SPOT-6/7 images (1.5 m spacing), over the entire French mainland. A single model has been trained and validated on 1.2k SPOT-6/7 scenes and 20M image patches. The LIVE lab has focused on the detection of urban changes from tri-stereoscopic Pléiades imagery from 2017 to 2020. A processing chain based on a Random Forest classifier (ImCLASS) has been tested, and the impact of the height attribute on change detection has been evaluated, characterizing changes into three thematic change classes.
Reaching land degradation neutrality (LDN) requires maintaining or enhancing land-based natural capital through a proactive focus on monitoring and planning. A key indicator of change in land-based natural capital (defined as a reasonable proxy by the UNCCD) is land cover (LC). Accurate global LC time series are thus vital to monitor natural capital change. Although the number and quality of open-access, remotely sensed LC products are increasing, all products carry uncertainties due to widespread classification errors. However, the relative magnitude of uncertainties among existing LC products is largely unknown, which hampers their confident selection and robust use in integrated land-use planning. To close this gap, we quantified region-, time-period- and coarse-LC-class-specific data uncertainties for the 10 most widely used global LC time series. To this end, we developed a novel multi-scale validation framework that accounts for differences in mapping resolution and for scale mismatches between the spatial extent of map grid cells and validation samples. We aimed for a fair validation assessment by carefully evaluating the quality of our validation samples with respect to landscape heterogeneity, which LC products often fail to classify accurately. To address this issue, we supported the validation assessment with Landsat-based measures of cross-scale spectral similarity, computed by taking advantage of the full Landsat archive in Google Earth Engine. We base our assessment on more than 1.8 million globally distributed LC validation sites, mobilizing around 2.8 million samples from the period 1980-2020, drawn from hundreds of sampling efforts of varied nature, from field surveys to crowdsourcing campaigns. Here, we will present the results of the assessment, providing insights into global and regional patterns of LC uncertainties. We found that no single product is more accurate than the others in mapping all LC classes, regions and time periods. We will provide recommendations on the selection of fit-for-purpose LC time series, and discuss future strategies for addressing their uncertainties in land-use planning.
Land use and land cover maps are a very important source of information in many natural resource applications: they describe the spatial patterns and distribution of land cover, delineate the extent of the various cover classes, and support temporal land cover change analysis and risk analysis. Information on land use and land cover, and on its change over time and space, is of key importance, for example in policy decision-making concerning environmentally or ecologically protected areas, or in native habitat mapping and restoration. Thematic maps of land use/cover are also linked to the monitoring of desertification and land degradation, key environmental parameters pronounced in areas such as the Mediterranean basin.
Earth Observation (EO) data are an attractive solution for obtaining thematic maps of land use and land cover (LULC), owing to their ability to provide synoptic views of the land surface inexpensively, repetitively, rapidly, even over inaccessible locations, and at a wide range of spatiotemporal scales. A vast number of relevant operational products, with a wide variety of spatial and temporal resolutions, is now available, which manifests the high level of maturity of this technology in this domain. Yet before such products are used in any application or research investigation, it is of major importance to evaluate their accuracy.
One of these products is the European Space Agency's (ESA) recently distributed WorldCover product for 2020. This operational product provides information on land cover at global scale and high spatial resolution (10 meters), classifying land cover into 11 classes based on the analysis of Sentinel-1 and Sentinel-2 EO datasets. To our knowledge, although ESA's official WorldCover site includes the product's validation report, which estimates its accuracy at almost 75%, the exact locations of the areas included in the validation dataset are not available to the public.
The present study aims at assessing the accuracy of ESA's WorldCover 2020 operational product for selected regions in Greece that represent a typical Mediterranean setting. The experimental sites were selected for their cultural, economic and environmental significance in Greece, while also covering as many of the product's classes as possible. Assessment of the product's accuracy was carried out by computing a series of statistical metrics, using as reference selected locations of known land cover obtained from field visits, drone imagery and very high resolution imagery from PlanetScope and Google Earth. The validation was developed and implemented in the R programming language, allowing a robust and reproducible implementation in open-access software.
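The statistical metrics involved are standard; the sketch below computes them from a confusion matrix (the study implemented this in R; a Python formulation is shown here for illustration):

```python
# Sketch: overall, user's and producer's accuracy plus Cohen's kappa from a
# confusion matrix with rows = map classes and columns = reference classes.
import numpy as np

def accuracy_metrics(cm):
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    oa = np.trace(cm) / n                   # overall accuracy
    ua = np.diag(cm) / cm.sum(axis=1)       # user's accuracy (1 - commission)
    pa = np.diag(cm) / cm.sum(axis=0)       # producer's accuracy (1 - omission)
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n**2
    kappa = (oa - pe) / (1 - pe)            # chance-corrected agreement
    return oa, ua, pa, kappa
```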
The results of this validation study highlight the continuing need to assess satellite-derived global land cover operational products, since they are powerful, low-cost and continuously upgraded tools used for observation, change detection, policy decision-making and overall land management. To this end, comparing them against high-resolution operational products and other means of monitoring, detecting and mapping land cover underlines the importance of accurate, up-to-date products and motivates their continuous upgrading in future research. All in all, from an operational perspective, the results of our study can be of particular importance in the Mediterranean basin, since land cover products can support the mapping and monitoring of land degradation and desertification phenomena, which are frequently pronounced in such areas.
The Norwegian area type map of land surface resources (AR5) is a detailed, high-precision map that classifies the land surface based on a set of criteria related to the land cover type and its current and/or potential uses. The AR5 map is used as a basis for various purposes, including legal issues. It is therefore critical that the map is kept up to date and precise. The updating process is costly and time-consuming, as it relies on manual interpretation of high-resolution aerial photographs. Manual interpretation can also overlook changes and may introduce subjective errors. Further, the revisit time of aerial images in Norway is at best five years, which is not optimal for a map that requires continuous updating. There is a need to improve the updating process by increasing the frequency of updates and identifying a method that makes the entire process more effective without reducing the precision of the dataset. The pan-European very high resolution (VHR) satellite images, delivered at 2 m spatial resolution with high orthorectification accuracy, are a promising dataset: their spatial resolution is close to that of aerial images and their temporal frequency is potentially much higher (an annual product from 2022). At the same time, deep learning algorithms have shown excellent performance in analysing high-resolution remotely sensed data. This study therefore explores the potential of applying deep learning algorithms to the pan-European VHR image mosaics to address the limitations of the AR5 updating process. The proposed application uses the VHR 2018 level 3 product with a spatial resolution of 2 m over the entire land area of Norway. The high spatial resolution of the AR5 requires that such images be analysed at least at pixel resolution to keep up with the spatial detail of the map. Semantic segmentation, which classifies images at the pixel level, is therefore the optimal approach in this context. Using the AR5 map as the training dataset and as the reference against which changes are detected, a deep learning algorithm based on the U-Net model is implemented to achieve semantic segmentation of the images into the different classes of the AR5 map. Different approaches to training the U-Net model, including partial transfer learning, are explored. The regional diversity of Norway is considered, and the country is divided into regions of varying topography, latitude and climate (land cover/use). One model is then trained for each region. Test data are kept separate from the training data for robust evaluation of the models. The trained and evaluated models are finally used for predicting the area types. The segmentation results are then compared with the existing AR5 map to detect anomalies, so that areas for potential update are identified.
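The abstract does not detail the network; as a rough illustration, a U-Net-style encoder-decoder for per-pixel classification can be sketched in Keras as follows (input size, channel count and class count are placeholders, not the study's settings):

    import tensorflow as tf
    from tensorflow.keras import layers

    def conv_block(x, filters):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    def build_unet(input_shape=(256, 256, 3), n_classes=8):
        # n_classes would correspond to the AR5 legend; 8 is a placeholder
        inputs = tf.keras.Input(shape=input_shape)
        skips, x = [], inputs
        for f in (32, 64, 128):                   # encoder (downsampling path)
            x = conv_block(x, f)
            skips.append(x)
            x = layers.MaxPooling2D(2)(x)
        x = conv_block(x, 256)                    # bottleneck
        for f, skip in zip((128, 64, 32), reversed(skips)):
            x = layers.Conv2DTranspose(f, 2, strides=2, padding="same")(x)
            x = layers.Concatenate()([x, skip])   # skip connection
            x = conv_block(x, f)
        outputs = layers.Conv2D(n_classes, 1, activation="softmax")(x)
        return tf.keras.Model(inputs, outputs)

    model = build_unet()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")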
Global land cover (LC) and land cover change (LCC) maps derived from Earth Observation techniques are regularly released at multiple scales. Their endorsement by users depends in part on their quality. The Committee on Earth Observation Satellites (CEOS) of the Group on Earth Observations (GEO) plays a crucial role in coordinating the validation process [1], [2] and in ensuring that the suite of LC products is ultimately validated operationally and systematically by independent bodies.
These standards have been commonly applied to validate global land cover products [3]. The stratified random sampling [4]–[7] often used to validate global LC products at moderate spatial resolution (250–1,000 m) [8]–[11] has been recognized as the most efficient sampling strategy [1], while more diverse sampling designs are used to validate global LC maps at high resolution (10–30 m) [12]–[16]. LCC validation is still in its early stages. The exercise remains challenging because the rarity of a change event complicates the estimation of the omission rate among large unchanged areas [1]. The availability of reference data decreases with time, and the poor correspondence between observation dates, i.e. validation versus detection, is a source of uncertainty. Therefore, stratified sampling is used in space to meaningfully represent areas with high rates of LCC and, in time, to account for the availability of reference data. In benchmarking, as in round-robin activities, the additional objective is to highlight the performance of one product relative to others in order to select the best results or target improvements. Formal standards in the field of benchmarking have yet to be defined.
Building on the GlobCover experience, the CCI Medium Resolution Land Cover (MRLC) project has developed a nested sampling scheme, adaptable to multiple scales, to validate land cover and land cover change. Based on the two-stage systematic stratified cluster sampling of the Joint Research Centre (JRC) TREES dataset, designed on a lat/long geographic grid, 2,600 pre-segmented primary sampling units (PSUs) were visually interpreted on near-2010 very high resolution (VHR) images by an international network of LC experts with regional expertise and a deep understanding of the land cover legend. Land cover changes were assessed between 2010, 2005 and 2000, and updated annually from 2015 to 2020 [17]. Spatio-temporal stratifications were applied to the original sampling to address omissions of land cover change. The LC and LCC assessments applied here can be called validations sensu stricto, as complete independence between the calibration and validation data is ensured, which avoids bias [1]. Finally, the CCI MRLC project contributed to benchmarking by testing a sampling strategy that increases the discrimination between binary maps within the recognized LC assessment guidelines [18].
Here, we reflect on the lessons learned from the different types of GlobCover and CCI MRLC validation experiments: the development of a scalable validation framework designed to annually assess the quality of LC and LCC maps at 10, 30 and 300 m scales, and the benchmarking of LC prototypes including multiple LC classes. We discuss potential directions for the development of land cover and land cover change validation at various scales.
[1] A. H. Strahler et al., “Global Land Cover Validation: Recommendations for Evaluation and Accuracy Assessment of Global Land Cover Maps,” EUR 22156 EN, European Communities, Luxembourg, 2006.
[2] P. Olofsson, G. M. Foody, M. Herold, S. V. Stehman, C. E. Woodcock, and M. A. Wulder, “Good practices for estimating area and assessing accuracy of land change,” Remote Sens. Environ., vol. 148, pp. 42–57, 2014, doi: 10.1016/j.rse.2014.02.015.
[3] S. V. Stehman and R. L. Czaplewski, “Design and analysis for thematic map accuracy assessment: fundamental principles,” Remote Sens. Environ., vol. 64, no. 3, pp. 331–344, 1998, doi: 10.1016/S0034-4257(98)00010-8.
[4] J. Scepan, G. Menz, and M. C. Hansen, “The DISCover validation image interpretation process,” Photogramm. Eng. Remote Sens., vol. 65, no. 9, pp. 1075–1081, 1999.
[5] P. Mayaux et al., “Validation of the global land cover 2000 map,” IEEE Trans. Geosci. Remote Sens., vol. 44, no. 7, pp. 1728–1739, 2006.
[6] P. Defourny, P. Mayaux, M. Herold, and S. Bontemps, “Global land-cover map validation experiences: toward the characterization of quantitative uncertainty,” in Remote Sensing of Land Use and Land Cover: Principles and Applications, 2012.
[7] N. E. Tsendbazar et al., “Developing and applying a multi-purpose land cover validation dataset for Africa,” Remote Sens. Environ., vol. 219, pp. 298–309, 2018, doi: 10.1016/j.rse.2018.10.025.
[8] T. R. Loveland et al., “Development of a global land cover characteristics database and IGBP DISCover from 1 km AVHRR data,” Int. J. Remote Sens., vol. 21, no. 6–7, pp. 1303–1330, 2000.
[9] E. Bartholomé and A. S. Belward, “GLC2000: A new approach to global land cover mapping from earth observation data,” Int. J. Remote Sens., vol. 26, no. 9, pp. 1959–1977, 2005, doi: 10.1080/01431160412331291297.
[10] P. Defourny et al., “GlobCover: A 300M Global Land Cover Product for 2005 Using ENVISAT MERIS Time Series,” Proc. ISPRS Comm. VII Mid-Term Symp., no. May 2006, pp. 8–11, 2007.
[11] O. Arino, P. Bicheron, F. Achard, J. Latham, R. Witt, and J. L. Weber, “GlobCover: The most detailed portrait of Earth,” Eur. Sp. Agency Bull., vol. 2008, no. 136, pp. 24–31, 2008.
[12] Y. Zhao et al., “Towards a common validation sample set for global land-cover mapping,” Int. J. Remote Sens., vol. 35, no. 13, pp. 4795–4814, 2014.
[13] C. Li et al., “The first all-season sample set for mapping global land cover with Landsat-8 data,” Sci. Bull., vol. 62, no. 7, pp. 508–515, 2017, doi: 10.1016/j.scib.2017.03.011.
[14] P. Gong et al., “Finer resolution observation and monitoring of global land cover: First mapping results with Landsat TM and ETM+ data,” Int. J. Remote Sens., vol. 34, no. 7, pp. 2607–2654, 2013.
[15] P. Gong et al., “Stable classification with limited sample: transferring a 30-m resolution sample set collected in 2015 to mapping 10-m resolution global land cover in 2017,” Sci. Bull., vol. 64, no. 6, pp. 370–373, 2019, doi: 10.1016/j.scib.2019.03.002.
[16] J. J. Chen et al., “Global land cover mapping at 30m resolution: A POK-based operational approach,” ISPRS J. Photogramm. Remote Sens., vol. 103, pp. 7–27, 2015, doi: 10.1016/j.isprsjprs.2014.09.002.
[17] UCLouvain and ECMWF, “Copernicus Climate Change Service. ICDR Land Cover 2016 - 2019. Product Quality Assessment Report,” 2020.
[18] C. Lamarche et al., “Compilation and validation of SAR and optical data products for a complete and global map of inland/ocean water tailored to the climate modeling community,” Remote Sens., vol. 9, no. 1, 2017, doi: 10.3390/rs9010036.
Land use, land-use change, and forestry (LULUCF) is a greenhouse gas inventory sector that evaluates changes in atmospheric greenhouse gases resulting from land use and land use change. It provides key information for major reports of the Intergovernmental Panel on Climate Change (IPCC). LULUCF information is reported annually to the IPCC by each reporting state, and each state uses the sources of land use information available to it; hence, different methodologies with different data are used. LULUCF data from Czechia are reported from cadastral data, whose ability to detect land use changes is limited (Štych et al. 2020, Pazúr et al. 2017).
This study focuses on reporting LULUCF information from Earth observation data. The main goal is to classify Sentinel-2 multispectral data for LULUCF purposes using Google Earth Engine. The categories used in LULUCF are: Settlements, Cropland, Grassland, Forestland, Wetlands and Other land. We classified two NUTS2 regions of Czechia, Southeast (CZ06) and Central Moravia (CZ07), for the year 2018.
The first step was preparing a mosaic for classification. The mosaic was made from images with cloud cover masked out; for each pixel, the median of the cloud-free values was taken as the final value. This procedure was applied to all S-2 bands with a resolution of at least 20 m. Then two more bands were added to the classified raster. The first was the variance of NDVI values over the period from May to October; this band helps to distinguish surfaces such as buildings (small variance) from surfaces with dynamically changing NDVI (arable land). The second band was the SRTM elevation dataset.
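A minimal sketch of this compositing step in the Google Earth Engine Python API follows (the collection ID, date ranges and band list are assumptions for illustration):

    import ee
    ee.Initialize()

    def mask_clouds(img):
        # Drop cloudy and cirrus pixels using the QA60 bitmask
        qa = img.select("QA60")
        mask = qa.bitwiseAnd(1 << 10).eq(0).And(qa.bitwiseAnd(1 << 11).eq(0))
        return img.updateMask(mask)

    s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
            .filterDate("2018-01-01", "2018-12-31")
            .map(mask_clouds))

    # Per-pixel median of the cloud-free values for the <=20 m bands
    bands = ["B2", "B3", "B4", "B5", "B6", "B7", "B8", "B8A", "B11", "B12"]
    composite = s2.select(bands).median()

    # Variance of NDVI over May-October as an additional band
    ndvi_var = (s2.filterDate("2018-05-01", "2018-10-31")
                  .map(lambda i: i.normalizedDifference(["B8", "B4"]).rename("NDVI"))
                  .reduce(ee.Reducer.variance()))

    # SRTM elevation as the final band
    stack = composite.addBands(ndvi_var).addBands(ee.Image("USGS/SRTMGL1_003"))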
The Random Forest method was used for classification. Training polygons were created by two methods. The first was the semi-automatic creation of training polygons from the CORINE Land Cover (CLC) vector layer for the year 2018: core areas were first derived from the CLC2018 polygons using an inner buffer of 100 m, and circular training polygons with a diameter of 80 m were randomly generated inside these areas. This method did not cover all classes, e.g. Other land (photovoltaic power plants); training polygons for such surfaces were added manually. Regarding classification accuracy, combinations of the following parameters were tested: the Number of Trees (NT) ranged from 50 to 400, the Variables per Split (VPS) from 1 to 6, and the Bag Fraction (BF) from 0.1 to 0.5. In total, 450 parameter combinations were tested, and for each combination Cohen's kappa was calculated against control data. The most accurate classification, with an overall accuracy of 89.1% (Cohen's kappa of 0.84), used the combination NT = 150, VPS = 3, BF = 0.1. The most dominant LULUCF class in the study area in 2018 was Cropland with 42.78% of the overall area. Forestland covered more than a third (35.4%), Grasslands 15.39%, Settlements 4.66%, and Other land and Wetlands less than 1% each (0.96% for Other land and 0.80% for Wetlands).
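The parameter sweep described above might look roughly as follows in the Earth Engine Python API (the step sizes, property names and the training/validation FeatureCollections are assumptions):

    best = {"kappa": -1}
    for nt in range(50, 401, 50):                    # Number of Trees
        for vps in range(1, 7):                      # Variables per Split
            for bf in (0.1, 0.2, 0.3, 0.4, 0.5):     # Bag Fraction
                clf = (ee.Classifier.smileRandomForest(
                           numberOfTrees=nt, variablesPerSplit=vps, bagFraction=bf)
                       .train(training, "lulucf_class", bands))
                matrix = (validation.classify(clf)
                          .errorMatrix("lulucf_class", "classification"))
                kappa = matrix.kappa().getInfo()
                if kappa > best["kappa"]:
                    best = {"kappa": kappa, "NT": nt, "VPS": vps, "BF": bf}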
From a LULUCF perspective, the combination of Sentinel-2 data with cloud-based computing (Google Earth Engine) appears very promising and acceptable to stakeholders.
Title: Towards operational surface water extent estimation from C-band Sentinel-1 SAR imagery
Authors: Jungkyo Jung, Heresh Fattahi, Gustavo H.X. Shiroma
Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA
The extent and location of surface water are essential information for understanding human activity and climate. Global maps delineating surface water extent have mainly been produced from optical imagery due to its high accuracy and robustness, and with the rich archive of optical images accumulated over the last 30 years, the long-term changes of permanent water can be well characterized. However, the ability of optical sensors to monitor temporal fluctuations of water extent is limited by cloud coverage and sunlight. On the other hand, active sensors utilizing microwave signals, such as the Sentinel-1 C-band Synthetic Aperture Radar (SAR), can potentially monitor the dynamics of surface water extent regardless of weather conditions.
In general, at the radar frequency and the range of incidence angles of Sentinel-1A/B, the specular reflection of the microwave signal over open water makes water appear darker than land in the SAR images.
This contrast between water and land has motivated surface water extent estimation by thresholding SAR backscatter images (Chini et al. 2019). In practice, however, the simple assumption that water is darker than land may be violated: wind-driven backscatter can result in bright water, and flat land surfaces (e.g., arid desert regions) can reflect most of the microwave signal away from the radar line of sight, leading to dark land. Bright water and dark land result in underestimation and overestimation of surface water extent, respectively.
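As a simple illustration of the thresholding idea itself (not of the refined algorithm presented below), a global Otsu threshold on the backscatter in dB separates dark water from brighter land:

    import numpy as np
    from skimage.filters import threshold_otsu

    def simple_water_mask(sigma0_db):
        # sigma0_db: 2-D array of terrain-corrected backscatter in dB
        t = threshold_otsu(sigma0_db)   # global threshold for a bimodal histogram
        return sigma0_db < t            # True where dark, i.e. candidate open water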
To overcome these limitations, several studies have used ancillary data or image processing algorithms to improve water estimation from SAR data. Twele et al. (2016) proposed a thresholding-based algorithm for flood mapping from Sentinel-1 data. Their algorithm starts with radiometric terrain corrected (RTC) backscatter images, estimates a global threshold for the entire scene, and refines the estimation using fuzzy-logic-based classification, the height above nearest drainage (HAND) index and region growing. Despite significant improvements compared to simple thresholding algorithms, the results of the Twele algorithm still suffer from bright water and dark land in many regions of the world.
We build on the Twele algorithm and present a new algorithm in which we further improve the surface water extent estimation by modifying the tile detection approach for threshold estimation, by extending the fuzzy logic classification with additional rules that introduce ancillary layers such as land cover maps and existing permanent water masks, and by developing a new refinement step that revisits the estimates in a second iteration based on a bimodality test.
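The abstract does not specify the bimodality test; one common choice, assumed here purely for illustration, is the bimodality coefficient computed from sample skewness and kurtosis:

    from scipy.stats import kurtosis, skew

    def bimodality_coefficient(x):
        # BC = (skewness^2 + 1) / kurtosis (Pearson definition, normal = 3);
        # values above roughly 0.555 (the uniform case) suggest bimodality
        g = skew(x)
        k = kurtosis(x, fisher=False)
        return (g**2 + 1) / k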
We evaluate the performance and thematic accuracy of the automatic processing chain for various sites covering surface water worldwide. We define the reference water from the water occurrence maps produced by Pekel et al. (2016), which quantify changes in global surface water over the past 32 years at 30-metre resolution from Landsat imagery. Preliminary verification results suggest that the surface water detection processor achieves satisfactory classification results, with user accuracies between 82.0% and 99.1% and producer accuracies from 93.9% to 99.7% over areas with stable water extent close to permanent water. To further evaluate the estimation accuracy over regions with more dynamic water extent, we use independent estimates from Harmonized Landsat-8 and Sentinel-2 data as well as high-resolution optical imagery.
References:
Chini, M., Pelich, R., Pulvirenti, L., Pierdicca, N., Hostache, R., & Matgen, P. (2019). Sentinel-1 InSAR coherence to detect floodwater in urban areas: Houston and Hurricane Harvey as a test case. Remote Sensing, 11(2), 107.
Twele, A., Cao, W., Plank, S., & Martinis, S. (2016). Sentinel-1-based flood mapping: a fully automated processing chain. International Journal of Remote Sensing, 37(13), 2990-3004.
Pekel, J. F., Cottam, A., Gorelick, N., & Belward, A. S. (2016). High-resolution mapping of global surface water and its long-term changes. Nature, 540(7633), 418-422.
In our research, we carried out a complete land cover classification process based on Sentinel-2 images. The area of interest was located in a cross-border territory of Hungary, Slovakia, Romania, and Ukraine. The project was completed within the framework of a cross-border cooperation programme, the HUSKROUA project.
Our aim was to identify the main land cover classes in the area, which was challenging due to the following factors: (1) the large extent of the cross-border area of interest, which covered several Sentinel-2 tiles; (2) local differences in phenological phases and in the reflectance of specific land cover types; and (3) frequent cloud cover over the mountainous regions of the area of interest. Due to point (1), the area of interest was divided into four parts, and the land cover classification and accuracy assessment were performed separately in each part. The whole area of interest covered 50,110 square kilometers. We selected cloud-free or mostly cloud-free images acquired between March and October 2021 and created image mosaics of the selected tiles; the best images were from May and September 2021.
During the analysis, we segmented the images mainly based on the 10 m bands and an edge detection layer generated from them. The classification was performed mainly using the visible bands, NIR, SWIR, and two spectral indices generated from the bands: the Normalized Difference Vegetation Index (NDVI) and the Modified Normalized Difference Water Index (MNDWI). The following classes were used and successfully identified in the area: Built-up, Agriculture, Grass, Forest, Single group of trees and Other.
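For reference, both indices are simple normalized band ratios; a minimal Python sketch (band pairings follow the usual Sentinel-2 conventions):

    def ndvi(nir, red):        # Sentinel-2: B8 and B4
        return (nir - red) / (nir + red)

    def mndwi(green, swir):    # Sentinel-2: B3 and B11
        return (green - swir) / (green + swir)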
The differences in phenology and reflectance turned out to be a limitation with respect to local variation but were useful across different acquisition dates. The accuracy assessment of the classified images was performed with a QGIS plugin developed specifically for this purpose. The overall accuracy of the classification in the four parts of the area was between 90% and 92%. In some test areas we also used LiDAR data and RGB orthoimages, in which cases we achieved an overall accuracy of up to 96%. The classification successfully identified the main land cover types in the area.
This research was supported by the project titled “Complex flood-control strategy on the Upper-Tisza catchment area - DIKEINSPECT”, project number HUSKROUA/1901/8.1/0088.
This abstract highlights how a private company developed and implemented a lightweight, robust, and flexible process to automate the generation of land cover maps by fusing multiple data sources, enabling a public administration to reliably and frequently update its urban-rural landscape representation.
To deal with regional environmental, climatic, and territorial management challenges, authorities need precise and regularly updated representations of the fast-changing urban-rural landscape. In 2018, the WALOUS project was launched by the Public Service of Wallonia (SPW), Belgium, to develop reproducible methodologies for mapping Land Cover (LC) and Land Use (LU) (Beaumont et al. 2021) in the Walloon region. The first edition of this project was led by a consortium of universities and a research centre and lasted three years. In 2020, the resulting LC and LU maps for 2018 (Bassine et al. 2020) replaced the outdated 2007 map (Baltus et al. 2007) and allowed the regional authorities to meet the requirements of the European INSPIRE Directive. However, although end-users suggested that the regional authorities should update these maps on a yearly basis, following the aerial imagery acquisition strategy (Beaumont et al. 2019), the Walloon administration quickly realized that it did not have the resources to understand and reproduce the method, owing to its complexity and a relatively concise handover. A new edition of the WALOUS project started in 2021 to bridge those gaps.
AEROSPACELAB, a private Belgian company, was selected for WALOUS's second edition on the promise of simplifying and automating the LC map generation process while ensuring a deep appropriation of the solution by the local authorities, allowing the SPW to reliably and frequently update the LC map of Wallonia. This approach entails two crucial parts: a robust and automated model, and a deep involvement of the regional administration.
For the solution, an approach revolving around a Deep Learning (DL) segmentation model was chosen. Compared to traditional approaches, DL models do not require as much feature engineering, which played favourably in the adoption of the solution by the local authorities. The segmentation model is based on the DEEPLAB V3+ architecture (Chen et al. 2017) (Chen et al. 2018) and was implemented with the open-source DETECTRON2 framework (Wu et al. 2019), which allows for rapid prototyping. DEEPLAB V3+'s main distinguishing features are its atrous convolutions and atrous spatial pyramid pooling, which address the problem of segmenting objects at multiple scales without being too costly at inference time; the atrous convolutions widen the fields of view without increasing the kernel's dimensions. Slight technical adjustments were made to tailor the architecture to the task: on the one hand, the segmentation head was adjusted to comply with the 11 classes representing the different ground covers; on the other hand, the input layer was altered to cope with the 5 data sources (a sketch of these adjustments follows the data list below).
Data fusion was a key aspect of this solution as the model was trained on various sources with different spatial resolutions:
• high-resolution aerial imagery with 4 spectral bands (Red, Blue, Green, and Near-Infrared) and a ground sample distance of 0.25m;
• digital terrain model obtained via LiDAR technology; and
• digital surface model derived from the aforementioned high-resolution aerial imagery by photogrammetry.
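The two architectural adjustments mentioned above can be illustrated with a minimal PyTorch sketch (this is not the WALOUS Detectron2 code; the channel count of the fused stack is an assumption):

    import torch
    import torch.nn as nn
    from torchvision.models.segmentation import deeplabv3_resnet50

    N_CHANNELS = 6   # e.g. R, G, B, NIR, DTM, DSM stacked; assumed for illustration
    N_CLASSES = 11   # the 11 ground cover classes

    model = deeplabv3_resnet50(weights=None, num_classes=N_CLASSES)
    # Widen the first convolution so it accepts the fused multi-source stack
    model.backbone.conv1 = nn.Conv2d(N_CHANNELS, 64, kernel_size=7,
                                     stride=2, padding=3, bias=False)
    model.eval()

    x = torch.randn(1, N_CHANNELS, 512, 512)  # one fused tile
    out = model(x)["out"]                     # (1, N_CLASSES, 512, 512) logits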
The model was first pre-trained using the LC map from WALOUS's previous edition (artificially augmented), and a fine-tuning phase was then performed on a set of highly detailed and accurate LC tiles that were manually labelled.
Several additional data sources and model architectures were considered and prototyped, such as the POINTREND extension (Kirillov et al. 2020) and a ConvLSTM to segment satellite imagery with high temporal resolution such as Sentinel-2 (Rußwurm et al. 2018) (Belgiu et al. 2018). All were required to segment Wallonia into 11 classes, ranging from natural covers (grass cover, agricultural parcel, softwood, hardwood, and water) to artificial ones (artificial cover, artificial construction, and railway). The final model achieves an overall accuracy of 92.29% on the test set, consisting of 1,710 photo-interpreted points. Figure 1 shows a high-level overview of the solution's architecture, and Figure 2 gives an overview of the various predictions made by the model at a spatial resolution of 0.25 m/pixel. Besides updating the LC map, the solution also compares the new predictions with the previous LC map and derives a change map highlighting, for each pixel, the LC transitions that may have occurred between the two studied years.
Regarding the appropriation of the solution by the SPW, AEROSPACELAB involved the local authorities in the development of the solution through an agile approach and an iterative process. Each new iteration of the solution was presented to the end-users of the administration and, through a feedback loop, their remarks and suggestions were taken into account when prototyping the next iteration. While time-consuming, this approach gives the local administration a better understanding of the challenges related to the task, as well as a better appropriation of the implemented solution. Furthermore, these monthly presentations served as a pulse check on the project's status for the local authorities, who otherwise would have remained in the dark, as is often the case in such contracts.
A common hindrance to appropriation is the use of licensed software. Hence, the decision was made to use only open-source software. When multiple options were available, the one with the largest community of users was selected, as high adoption often leads to better support.
Furthermore, as part of the handover, several interactive workshops were organized. These showed the regional authorities how to use the model and interpret its results so that they can independently update the next LC maps of Wallonia as soon as new data are made available. This ability to generate a new version of the LC map as soon as the data are available is crucial, as the delay between data input and the publication date of the map often determines its popularity and use by end-users.
In conclusion, the public-private partnership led to the publication of the new LC maps for 2019 and 2020 in early 2022. Moreover, the public administration will be trained to make use of the AI algorithm on each new annual aerial image acquisition.
-----------------------------------------
References:
Baltus, C.; Lejeune, P.; and Feltz, C., Mise en œuvre du projet de cartographie numérique de l’Occupation du Sol en Wallonie (PCNOSW), Faculté Universitaire des Sciences Agronomiques de Gembloux, 2007, unpublished
Beaumont, B.; Stephenne, N.; Wyard, C.; and Hallot, E.; Users’ Consultation Process in Building a Land Cover and Land Use Database for the Official Walloon Georeferential. 2019 Joint Urban Remote Sensing Event (JURSE), Vannes, France, 1–4. doi:10.1109/JURSE.2019.8808943
Beaumont, B.; Grippa, T.; Lennert, M.; Radoux, J.; Bassine, C.; Defourny, P.; Wolff, E., An Open Source Mapping Scheme For Developing Wallonia's INSPIRE Compliant Land Cover And Land Use Datasets. 2021.
Bassine, C.; Radoux, J.; Beaumont, B.; Grippa, T.; Lennert, M.; Champagne, C.; De Vroey, M.; Martinet, A.; Bouchez, O.; Deffense, N.; Hallot, E.; Wolff, E.; Defourny, P. First 1-M Resolution Land Cover Map Labeling the Overlap in the 3rd Dimension: The 2018 Map for Wallonia. Data 2020, 5, 117. https://doi.org/10.3390/data5040117
Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H., Rethinking Atrous Convolution for Semantic Image Segmentation. Cornell University / Computer Vision and Pattern Recognition. December 5, 2017.
Chen, L.-C., Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H., Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. ECCV. 2018
Wu, Y.; Kirillov, A.; Massa, F.; Lo, W.-Y.; Girshick, R., Detectron2. https://github.com/facebookresearch/detectron2. 2019.
Kirillov, A.; Wu, Y.; He, K.; Girshick, R., PointRend: Image Segmentation as Rendering. February 16, 2020.
Rußwurm, M.; Korner, M., Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders. International Journal of Geo-Information. March 21, 2018.
Belgiu, M.; Csillik, O., Sentinel-2 cropland mapping using pixel-based and object-based time-weighted dynamic time warping analysis. Remote Sensing of Environment. 2018, pp. 509-523.
Substantial land cover and land use change (LCLUC) occurred in Central Europe after the Autumn of Nations in 1989 and the expansion of the European Union (EU) in 2004 and 2007. Currently, only a few studies report these changes at a regional scale (Griffiths et al., 2014; Munteanu et al., 2015). However, in order to fully understand the drivers and environmental implications of these land conversions, more spatially detailed information on land cover and land cover change trajectories is needed.
Remote sensing methods for mapping land cover and land cover change at regional scales can meet this need. Open access to increasing amounts of medium-resolution satellite imagery from systems like Landsat, together with the emergence of high-performance cloud computing infrastructures like Google Earth Engine, allows the mapping methodology to advance tremendously.
In this study, we aim to address this need for information by developing an approach for generating a multi-year record of land cover and land cover change at regional scales. We showcase our tool by generating a set of temporally consistent annual maps of land cover for Central Europe covering a 35-year period from 1985 to 2020. Moreover, we made an effort to identify the spatial patterns of land cover and its change.
Our study area spreads across four countries in Central Europe (Czechia, Hungary, Poland and Slovakia) that joined the European Union within the time span of our study. We focused on eight major land cover categories: artificial land, croplands, forest, shrublands, grassland, barren land, wetland and water, and on four land cover changes: (1) croplands to semi-natural vegetation (mostly land abandonment), (2) shrublands to wooded semi-natural vegetation (mostly land abandonment), (3) grasslands to semi-natural vegetation (mostly land abandonment), and (4) croplands and vegetation classes to artificial land (mostly urban sprawl).
For mapping purposes, we used USGS Tier-1 Landsat surface reflectance products (product's code here) available on the Google Earth Engine platform and acquired between 1985 and 2020. We restricted our dataset to acquisitions from the average vegetation season, starting on the 135th day of the year (15 May) and ending on the 288th day (15 October). We processed the imagery by screening out clouds, cloud shadows and snow with CFMASK. We also normalized the OLI reflectance to match the values from the TM and ETM+ sensors. In total, we used 20,310 images across 90 WRS-2 footprints. On average, this yielded six images per year per footprint (minimum one and maximum 11). For each year of our time frame, we used the Landsat data to calculate 84 classification metrics, including (1) summary metrics for each spectral band: maximum, minimum, mean, median, standard deviation, and the 25th and 75th percentiles; and (2) six indices: Normalized Difference Vegetation Index (NDVI), Normalized Burn Ratio (NBR), Bare Soil Index (BSI), and Brightness, Greenness, and Wetness from a Tasseled Cap Transformation.
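A sketch of how such per-band summary metrics can be produced for one year with the Earth Engine Python API (the prepared collection `annual` and the harmonized band names are assumptions):

    # `annual` is assumed to be a cloud-masked, cross-sensor normalized
    # Landsat collection for one May-October season
    summary = (ee.Reducer.minMax()
                 .combine(ee.Reducer.mean(), sharedInputs=True)
                 .combine(ee.Reducer.median(), sharedInputs=True)
                 .combine(ee.Reducer.stdDev(), sharedInputs=True)
                 .combine(ee.Reducer.percentile([25, 75]), sharedInputs=True))
    band_metrics = annual.reduce(summary)

    def add_indices(img):
        ndvi = img.normalizedDifference(["NIR", "RED"]).rename("NDVI")
        nbr = img.normalizedDifference(["NIR", "SWIR2"]).rename("NBR")
        return img.addBands(ndvi).addBands(nbr)

    index_metrics = annual.map(add_indices).select(["NDVI", "NBR"]).reduce(summary)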
We used all LUCAS datasets covering the years 2006, 2009, 2012, 2015 and 2018 as reference for both training and validation. We used all eight main land cover categories differentiated in LUCAS (artificial land, cropland, woodland, shrubland, grassland, bare land, water, and wetland) plus seventy-six sub-classes. We selected plots where the land cover proportion was equal to 100% and for which the field-observed GPS location was less than 30 m away from the central point. In order to validate four specific types of land cover change (name them here), we randomly selected 80 pixels within these categories and 50 pixels within each stable land cover category. We visually interpreted these validation samples using Landsat imagery and a time series of spectral indices.
For mapping the land cover, we used a two-step approach to supervised classification implemented in Google Earth Engine. First, we used the Random Forest (RF) classifier to generate variable importance scores and select the 20 best metrics as input variables, using two measures of variable importance: (1) Mean Decrease Accuracy (MDA) and (2) Mean Decrease Gini (MDG). Second, we used an ensemble approach with three non-parametric machine learning algorithms (Random Forest (RF), Support Vector Machines (SVM) and CART) to generate the annual maps. With each classifier, we generated a set of 35 annual maps. In total, we obtained 140 land cover maps and 140 maps depicting classification probability.
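The two importance measures correspond to what scikit-learn exposes as impurity-based importance (MDG) and permutation importance (an MDA analogue); a minimal sketch with hypothetical arrays X (the 84 metrics) and y (reference labels):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rf = RandomForestClassifier(n_estimators=500, n_jobs=-1).fit(X, y)

    mdg = rf.feature_importances_                   # Mean Decrease Gini
    mda = permutation_importance(rf, X, y,
                                 n_repeats=10).importances_mean  # MDA analogue

    best20 = np.argsort(mdg)[::-1][:20]             # indices of the 20 best metrics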
We used our maps to create a time series of land cover information spanning 1985-2020. We analysed these data to detect land cover change and stable areas, and to identify impossible trajectories of change (i.e. changes with implausibly high diversity, indicating misclassification). To do so, we first generated a map of the stable areas of the eight land cover categories and then focused on detecting changes in three classes: croplands, shrublands and grasslands. We specified four change processes: (1) croplands to semi-natural vegetation (herbaceous and woody), (2) grassland to semi-natural vegetation (herbaceous and woody), (3) shrublands to forests, and (4) vegetation to artificial land.
We evaluated the accuracy of the land cover maps obtained for each year and then of the land cover change products. The average overall accuracy of the land cover maps was about 90%. We obtained the highest user's and producer's accuracies, above 95%, for forests and water, with artificial land and croplands slightly lower at about 80-86%. Classification uncertainty was lower in more heterogeneous landscapes, e.g., the northern Carpathians. We based the accuracy assessment on stratified random sampling, with strata based on the land cover categories. We did not use proportional allocation and instead increased the sample size for rarer classes.
Over the 35 years, forest cover and the proportion of artificial land in Central Europe increased, while cropland and grassland areas declined.
We conclude that our approach provides a useful template for large-scale mapping and assessment of land cover dynamics. Our land cover dataset can be used in various applications and in many areas of environmental impact assessment and management.
Acknowledgements
We gratefully acknowledge support by the National Science Centre, project TRACE [project no. 2018/29/B/ST10/02979]. This is a Global Land Programme contributing project.
High-quality field reference data are essential in modern machine-learning-based agri-environmental and crop monitoring algorithms (Elmes et al. 2020), not only to train models but also to validate land cover maps and estimate crop areas (Olofsson et al. 2014, Stehman & Foody 2019). Such field information is available through the pan-European LUCAS survey campaign, conducted on a three-yearly basis since 2006 (Eurostat 2021). Recent research has improved the uptake of these data for EO applications through semantic product and nomenclature comparison (Buck et al. 2015), harmonization of the LUCAS micro data sets from 2006-2018 (d'Andrimont et al. 2020), and use of the LUCAS Copernicus survey module to transfer the essential LUCAS point information to polygon reference labels (d'Andrimont et al. 2021).
Extending these approaches, we have applied a Convolutional Neural Network (CNN) approach using the Python programming language and the Keras and TensorFlow libraries.
The CNN model is based on an operational crop photo assessment workflow that is applied to optimize field data campaigns as part of Common Agricultural Policy (CAP) monitoring for the European Commission (Haub 2019). At that time it was based on more than 70,000 geotagged and labelled crop photos from on-site inspections conducted by us between 2012 and 2019 in Germany, trained for 45 epochs and resulting in an overall model accuracy of 90%. Based on this CNN, a web-based prototype has been set up, which is available for demonstration here.
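The abstract does not detail the network; a generic Keras transfer-learning sketch of this kind of photo classifier could look as follows (the base network, input size and class count are assumptions):

    import tensorflow as tf

    N_CROPS = 25   # number of crop classes; a placeholder value

    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False   # start from frozen ImageNet features

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(N_CROPS, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_photos, epochs=45) would follow, as in the workflow above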
We have further developed and applied this approach to identify and label crops in the applicable LUCAS field photos (cardinal direction photos looking N-E-S-W) from 2006-2018. Using this information, we filtered and qualified the LUCAS field information to enhance its “machine readability” and application potential for Earth Observation and Sentinel satellite data-based classifications. This includes the annotation of land cover in the LUCAS point vicinity and the construction of reference polygons extending into directions of similar land cover. The approach is currently being extended to encompass further crop classes at the European scale using the LUCAS database. In our talk we will present the output of our current work to enhance the LUCAS crop information for national and continental EO applications. Our work is embedded in a study to analyse and improve the transferability of satellite-based Artificial Intelligence models in space and time (Uebersat).
References:
Buck, Oliver, Carsten Haub, Sascha Woditsch, Dirk Lindemann, Luca Kleinewillinghöfer, Gerard Hazeu, Barbara Kosztra, Stefan Kleeschulte, Stephan Arnold and Martin Hölzl. 2015. Task 1.9 - Analysis of the LUCAS nomenclature and proposal for adaptation of the nomenclature in view of its use by the Copernicus land monitoring services. Service contract report No. 3436/B2015/R0-GIO/EEA.56166. Copenhagen: European Environment Agency (EEA). http://land.copernicus.eu/user-corner/technical-library/LUCAS_Copernicus_Report_v22.pdf.
d'Andrimont, Raphaël, Momchil Yordanov, Laura Martinez-Sanchez, Beatrice Eiselt, Alessandra Palmieri, Paolo Dominici, Javier Gallego, et al. 2020. Harmonised LUCAS in-situ data and photos on land cover and use from 5 tri-annual surveys in the European Union. arXiv:2005.05272 [stat] (11 May). http://arxiv.org/abs/2005.05272 (accessed 16 September 2020).
d'Andrimont, Raphaël, Astrid Verhegghen, Michele Meroni, Guido Lemoine, Peter Strobl, Beatrice Eiselt, Momchil Yordanov, Laura Martinez-Sanchez and Marijn van der Velde. 2021. LUCAS Copernicus 2018: Earth-observation-relevant in situ data on land cover and use throughout the European Union. Earth System Science Data 13, no. 3 (19 March): 1119–1133. doi:10.5194/essd-13-1119-2021.
Eurostat. 2021. LUCAS - Land use and land cover survey. 15 November. https://ec.europa.eu/eurostat/statistics-explained/index.php?title=LUCAS_-_Land_use_and_land_cover_survey.
Elmes, Arthur, Hamed Alemohammad, Ryan Avery, Kelly Caylor, J. Ronald Eastman, Lewis Fishgold, Mark A. Friedl, et al. 2020. Accounting for Training Data Error in Machine Learning Applied to Earth Observations. Remote Sensing 12, no. 6 (23 March): 1034. doi:10.3390/rs12061034.
Haub, C. 2019. 2 years IACS monitoring pilots - selected German cases: approach, results and way ahead. https://marswiki.jrc.ec.europa.eu/wikicap/images/f/f9/05_new_v2.4_2019-11-26_EFTAS.pdf. Available at: https://marswiki.jrc.ec.europa.eu/wikicap/index.php/Prague_2019 [Accessed November 22, 2021].
Olofsson, Pontus, Giles M. Foody, Martin Herold, Stephen V. Stehman, Curtis E. Woodcock and Michael A. Wulder. 2014. Good practices for estimating area and assessing accuracy of land change. Remote Sensing of Environment 148 (May): 42–57. doi:10.1016/j.rse.2014.02.015.
Pflugmacher, Dirk, Andreas Rabe, Mathias Peters and Patrick Hostert. 2019. Mapping pan-European land cover using Landsat spectral-temporal metrics and the European LUCAS survey. Remote Sensing of Environment 221 (February): 583–595. doi:10.1016/j.rse.2018.12.001.
Stehman, Stephen V. and Giles M. Foody. 2019. Key issues in rigorous accuracy assessment of land cover products. Remote Sensing of Environment 231 (September): 111199. doi:10.1016/j.rse.2019.05.018.
The crop classification task still has a large number of unsolved problems that require new methods and instruments to achieve maximum mapping reliability. Some of these problems can be partly solved by state-of-the-art computer vision techniques, which make it possible to build very accurate land cover and crop type maps. Such methods have already shown good performance on small experimental sites around the world. However, applying them at the regional or even country level remains very challenging, and the challenge lies not only in the large amount of computational resources required: modern convolutional deep learning methods require training data in a special format, usually manually and fully labelled squares of fixed size.
In terms of assembling a data collection for the crop classification task, the biggest problem is that accurate photointerpretation of crop features for training data labelling is impossible. Real-life crop profiles exhibit very high variability, and analysis of NDVI sequences or visual analysis of true colour or any other band combination will not be accurate or reliable. The only way to form a good training or validation dataset is therefore a ground survey along the roads of the territory of interest. This creates another problem: the real-life distribution of crop types is not uniform, so the resulting datasets are highly unbalanced in machine learning terms. It is common for most ground truth samples to represent the majority crop classes, while minority classes may be represented by only a few samples. In pixel-based classification, this problem can be addressed in various ways. The most common method is to read a fixed number of pixels from the satellite data for each field, so that the distribution of pixels across classes is uniform and balanced. Another way is to extend the number of pixels for minority classes by simulating new values from the available ones, adding random noise for dispersion control and overfitting avoidance. However, such approaches do not work with convolutional neural networks. If a ground truth data collection contains thousands of fields, millions of pixels can be extracted from moderate or high resolution satellite data for pixel-based classification, but the same fields can cover only a few hundred fully labelled squares for the segmentation task. This is why the development of robust ground truth data simulation methods is very promising for the crop classification task.
In this work we present a new method of synthetic training data generation for the crop classification task based on a deep Generative Adversarial Network (GAN). The method uses a computer vision approach: image-to-image translation. We trained the GAN on the available ground truth data to generate time series of Sentinel-1 VV and VH polarization bands from segmentation masks of size 256x256. The resulting model makes it possible to simulate realistic images with different distributions of the minority classes. Combining data simulated by this method with real data enabled better recognition of minority classes in crop classification maps built with U-Net, a deep convolutional neural network architecture.
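The exact GAN architecture is not given here; the following minimal Keras sketch only illustrates the image-to-image translation idea, mapping a one-hot 256x256 class mask to a 2-channel (VV, VH) SAR-like image (layer sizes and class count are assumptions; a full time series would add 2 output channels per acquisition date, and the discriminator trained adversarially, pix2pix-style, is omitted):

    import tensorflow as tf
    from tensorflow.keras import layers

    N_CLASSES = 10   # crop classes encoded in the mask; a placeholder

    def build_generator():
        # Encoder-decoder translating a class mask into synthetic backscatter
        inp = tf.keras.Input(shape=(256, 256, N_CLASSES))
        x = inp
        for f in (64, 128, 256):     # downsampling
            x = layers.Conv2D(f, 4, strides=2, padding="same", activation="relu")(x)
        for f in (256, 128, 64):     # upsampling
            x = layers.Conv2DTranspose(f, 4, strides=2, padding="same",
                                       activation="relu")(x)
        # Two output channels: normalized VV and VH backscatter
        return tf.keras.Model(inp, layers.Conv2D(2, 1, activation="tanh")(x))

    generator = build_generator()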
The latest EO applications promote digitalization in agriculture, specifically the collection of agroecological Big Data through remote sensing, sensor networks, and other geospatial data. The use of remote sensing applications in agriculture is manifold and covers a broad spectrum of topics, from crop identification and biomass estimation to assessments of soil properties such as pH, moisture, and clay content. Delivered as digital solutions with near real-time processing, remote sensing-based information can be used as a tool for decision-making at multiple scales, from subplots (e.g., management zones) to regional and global scales, by farmers, agribusiness, scientists, and policy makers.
However, the valorization of remote sensing data in agriculture currently faces several challenges. The development of remote sensing products is closely intertwined with sufficient access to ground truth information, which improves product quality, accuracy, and reliability and is thus relevant for the acceptance of such applications.
The large variety of remote sensing data with diverse properties is often countered by limited access to ground truth information. There is a great need to connect agricultural research networks and databases to facilitate information access and flow between different disciplines in the context of sustainable and future-oriented agriculture. FAIR field data and a closer linkage to remote sensing data can improve the predictive value of remote sensing products for sustainable natural resource use.
With the InsituDB, we have launched a complete digital data framework to capture ground truth information, starting with offline data acquisition in the field, followed by asynchronous transfer of the data into the data portal and processing via standardized communication protocols, and ending with the dissemination of information via standardized open-access web service interfaces and visualizations.
Data acquisition in the field is divided into three distinct, independent, but mergeable surveys covering the methodological compartments of agricultural production: biophysical, soil and spectral parameters. The collection strategy is oriented towards international EO initiatives such as JECAM and ESA FRM4Veg. The sampling design of this framework enables data collection by a broader community, such as farmers, students, researchers and interested citizens. In addition, the use of a cross-platform multilingual survey tool opens access for other interested partners from society and science in the spirit of Citizen Science. InsituDB consists of four parts. Part one collects agroecological data in the field, with measurements and estimations entered directly into the corresponding input masks of the three surveys. Part two covers the optional entry of laboratory measurements, which are typically completed after the field collection by analysing samples in the laboratory. Part three covers data transfer from the field instruments to the storage and processing server, which is fast, reliable, and redundant to minimize data loss. The fourth part of InsituDB is the visualization and dissemination component, where raw data are quality checked, processed, aggregated, visualized, and prepared for download. Consequently, this enables a wide range of applications in the context of precision agriculture and near real-time validation of remote sensing data.
The digital data management of the InsituDB approach from the field to the data portal for standardized data collection, processing, and provision of agroecological information minimizes the steps and time required between data collection, information provision and knowledge transfer. The core component of our framework provides datasets according to open-access and FAIR principles, offering the advantage of making this information available and usable for various applications in a timely manner. Providing the data in state-of-the-art data exchange formats, such as CSV, JSON or Web-Mapping services increases interoperability and use in multiple applications. The freely available provision of aggregated datasets, as well as the low-threshold access to validated raw data further expands the utilization of the datasets. Visualization and access are accomplished through innovative geospatial technologies, including timely quality control.
The InsituDB platform demonstrates how highly specific scientific data collected through a complex sampling design can be used in a variety of ways and offered to different users and stakeholders through state-of-the-art data management and visualization techniques. The development and integration of the InsituDB platform in scientific research and teaching concepts at university level benefits the education of young scientists.
Sampling strategies for land vegetation should be developed to capture the spatial and temporal dynamics of vegetation. The spectral distortion of the signal received at sensor level, due to the optical path across vegetation structures and to temporal changes (both diurnal and seasonal trends), must be evaluated for a correct interpretation of the retrieved vegetation traits. In this work, the leaf and canopy reflectance variability in the photochemical reflectance index (PRI) spectral region (i.e., 500-600 nm) is quantified using different laboratory protocols that consider both instrumental and experimental set-up aspects, as well as canopy structural effects and vegetation photoprotection dynamics. The current rapid technological improvement in optical spectroradiometric instrumentation provides an opportunity to develop innovative measurement protocols in which the plant physiological status can be remotely quantified with higher accuracy in close-range remote sensing approaches. We studied how an incorrect characterization of the at-target incoming radiance translates into an erroneous vegetation reflectance spectrum and, consequently, an incorrect quantification of PRI. Our results corroborate the hypothesis that the commonly used method of estimating the at-target incoming radiance with a horizontal white reference panel produces a bias between the real photosynthetic plant surface reflectance factor and the remotely estimated top-of-canopy reflectance factor. The biased characterization of the at-target incoming radiance translated into a 2% overestimation and a 31% underestimation of chlorophyll content and PRI-related vegetation indices, respectively. We then investigated the dynamic xanthophyll pool and intrinsic Chl vs. Car long-term pool changes affecting the PRI spectral region. Consistent spectral behaviours were observed in both leaf and canopy experiments. Sun-adapted plants showed a larger optical change in the 500-600 nm spectral range and a higher capability for photoprotection during the light transient time compared to shade-adapted plants. The results of this work highlight the importance of well-established spectroscopy sampling protocols for detecting subtle spectral features in remote sensing studies.
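For reference, PRI is a normalized difference of narrow-band reflectance at 531 and 570 nm (Gamon et al., 1992); a one-line sketch:

    def pri(r531, r570):
        # Photochemical Reflectance Index from reflectance at 531 and 570 nm
        return (r531 - r570) / (r531 + r570)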
Up-to-date and accurate maps of crop types are important information needed in many operational scenarios, helping to monitor the environment and shape agricultural policies. They are also a necessary step in many further analyses, e.g. crop yield prediction, drought monitoring, and detection of field abandonment due to conflicts or migrations. To produce such maps over large areas, such as country-wide or regional scales, satellite data are used, as they constitute a relatively cheap and efficient way to achieve highly accurate and temporally consistent results. This is especially evident in regions where other sources of crop information (e.g., governmental statistics) are sparse. To achieve the best crop type mapping results from satellite imagery, Machine Learning (ML) methods are used, among which supervised approaches are reported to outperform unsupervised ones. Supervised methods require a representative dataset of reference samples, which are used to train models capable of mapping full scenes. Such reference data are traditionally collected during in-situ campaigns, which usually involve the manual work of enumerators who need to visit different parts of an area of interest to geo-localise a significant number of fields for each considered crop type. This work is costly and time-consuming, so we investigate different approaches aiming at reducing the manual data collection effort. These include: (1) utilising drone data together with a manual data collection campaign to automatically extract additional reference samples, and (2) employing training samples from the same area but acquired during another year/season, or collected for other areas within the same eco-climatic region. The extended training datasets obtained using the abovementioned methods are then used as input to an ML-based classification system, which uses Sentinel-2 time series data to map large areas with a Random Forest classifier. The performance of the extended dataset is compared to that obtained using the original dataset collected by enumerators during field inspections.
Our experiments are conducted using four datasets consisting of both drone orthomosaics and reference shapefiles with crop type information. Two of them were collected for Kasungu district in Malawi, the first acquired in May 2018 and the second in September 2018. The other two datasets were collected in Gaza province of Mozambique (in May 2019 and May 2021). Each of the datasets consists of about 30-40 RGB orthomosaics covering about 0.1 to 1 sq km, with ground resolution varying between 1 and 7 cm. The orthomosaics were acquired at different times of day, so different light and shadow characteristics are present. Other important issues include the presence of weeds, a significant share of harvested fields, early stages of plant development, and mixed crops. In all datasets Maize is the dominant crop type; the other major crop types vary but usually include Cassava, Cow peas, Groundnuts and Rice.
The first method that we investigate for reducing the manual work of field campaigns uses drone data covering the areas where enumerators were sent to collect reference polygons with crop type information. Using the collected reference polygons, we train a classification model which can classify whole drone scenes into several non-crop and several crop classes; the non-crop classes were added to properly detect the crop mask. By experimenting with different numbers of polygons forming the training dataset, we show how far we can reduce the number of manually collected crop polygons while preserving the assumed classification accuracy of the Sentinel-2-based large-scale classifier, thanks to the additional samples detected on drone images. For the classification of drone images, we use convolutional neural networks (CNNs) pretrained on computer vision datasets like ImageNet. This allows us to take full advantage of the visual contextual information analysed in this kind of network, which is particularly suitable for detecting crop types in image data with centimetre resolution, where plant structures are visible together with the surrounding bare soil and/or weeds. As we have shown in our previous work, without the knowledge transferred from computer vision, the networks could not be trained from scratch to produce reliable results. On the other hand, applying simple shallow networks or Random Forest to detect crop types on such drone data performs much worse than pretrained CNNs.
The second approach that we investigate is based on the same pretrained CNN architectures and uses datasets that are distant in time and/or space. In this scenario, the drone data classification model previously trained on other campaigns is fine-tuned using data from a current, reduced in-situ campaign. We investigate the level of reduction of the collected ground truth information that still maintains an assumed level of accuracy for the Sentinel-2-based classification.
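A minimal Keras sketch of this fine-tuning step (the saved model name, the number of layers kept frozen and the learning rate are all assumptions):

    import tensorflow as tf

    # Model previously trained on drone data from earlier campaigns
    model = tf.keras.models.load_model("crop_cnn_previous_campaigns.h5")

    # Freeze all but the top layers; re-train only the head on the
    # reduced set of samples from the current in-situ campaign
    for layer in model.layers[:-3]:
        layer.trainable = False

    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(current_campaign_patches, epochs=10) would follow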
The presented work compares different approaches for reducing the amount of ground truth data. The results could be further combined with cost models to plan the most economically efficient campaign strategies.
Information on forest growth is important for a wide range of environmental applications, from estimations of the terrestrial carbon cycle globally to sustainable forest management. Moreover, assessing forest growth increases our knowledge of the related ecosystem goods and services. The traditional way to measure forest growth is via field measurements, at the expense of considerable time and cost. When scaling up to large areas, forest growth is usually characterized using long-range remote sensing instruments, such as EO satellites or airborne LiDAR, providing an estimate of forest growth at large scale. Nevertheless, these techniques fail to capture the complete vertical structure of forest stands. Terrestrial laser scanning (TLS) provides a non-destructive characterization of the structure of forests.
Terrestrial LiDAR is a powerful tool for assessing forest structure, allowing us to capture the three-dimensionality of forest stands at a level of detail not achievable using established non-destructive (or even destructive) techniques. This potential has been widely presented in the related literature, where the main focus has been estimating easy-to-measure parameters, such as diameter at breast height (DBH) or height. However, thanks to repeated measurements, TLS has the potential to provide forest structural parameters together with their dynamics at a high level of detail: not only plot-level biomass, but individual trees and even individual branches.
Our study area is located in Loobos, an evergreen coniferous forest in the Netherlands. Two TLS fieldwork campaigns were carried out, in 2011 and 2021, with a RIEGL VZ-400 terrestrial LiDAR. The two datasets were co-registered and objects were semi-automatically extracted. Further, leaves were digitally removed from the trees and Quantitative Structure Models (QSM) were used to model the main stem and main branches. Our results show that volume growth can be estimated directly from the point clouds and that branch growth can be detected from the modelled branches. We have demonstrated that changes in tree structure and growth can be effectively detected and estimated from LiDAR scans.
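In essence, a QSM-based volume estimate reduces to summing the volumes of fitted cylinders; a minimal sketch under that simplifying assumption (real QSM pipelines such as TreeQSM are far more involved, and this is not the authors' software):

```python
# Illustrative tree/branch volume from a quantitative structure model,
# treated here simply as a list of fitted cylinders.
import math
from dataclasses import dataclass

@dataclass
class Cylinder:
    radius_m: float
    length_m: float
    branch_id: int  # 0 = main stem, >0 = branch index (hypothetical convention)

def volume(cylinders, branch_id=None):
    """Total woody volume in m^3, optionally restricted to one branch."""
    return sum(math.pi * c.radius_m**2 * c.length_m
               for c in cylinders
               if branch_id is None or c.branch_id == branch_id)

# Growth is then the difference between two co-registered epochs:
# growth_m3 = volume(qsm_2021) - volume(qsm_2011)
```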
Radiata pine (Pinus radiata D. Don) is the most widely planted exotic pine species in the world, and large areas have been established, especially in the Southern Hemisphere. In New Zealand, radiata pine is the dominant plantation species and constitutes 90% of the current 1.7 Mha plantation area. Up to 2.8 Mha of new forest must be planted by 2050 to achieve the country's carbon-neutral target. Climate change brings great risk to future forest productivity in New Zealand, as trees have non-linear growth responses to changing carbon dioxide concentrations, air temperature and water stress. Research clearly shows that different radiata pine genotypes vary markedly in their environmental preferences. In order to match the genotype to each site, a phenotyping platform for the trees is being developed.
This platform includes measurements at several scales. Reference measurements of the needles and the trees themselves include needle reflectance, needle pigment and nitrogen contents, photosynthesis parameters, and structural tree parameters. Hyperspectral reflectance measurements of potted trees are conducted using a camera installed on a 2 m high fixture, with the tree pots on a conveyor belt. Plantation trees are regularly measured using unmanned aerial vehicles (UAVs) equipped either with a hyperspectral camera or a laser scanner.
The data are used to characterize plant health, especially needle water and pigment content, and infections with Dothistroma and red needle cast. Measurements of nutritional deficits, of biochemical limitations on photosynthesis, and of long-term effects of water stress using hyperspectral data have been successful (Watt et al., 2019, 2020, 2021).
Additionally, plant growth and structure are measured using spectral and laser scanning data. First results show strong correlations between the UAV hyperspectral data and structure parameters measured by laser scanning. Thus, close-range remote sensing is considered a powerful tool for rapid phenotyping of radiata pine plantations, and the methods will be transferred to large areas.
References:
Watt, M.S., Pearse, G.D., Dash, J.P., Melia, N. & Leonardo, E.M.C. (2019): Application of remote sensing technologies to identify impacts of nutritional deficiencies on forests. ISPRS Journal of Photogrammetry and Remote Sensing, 149, 226-241, https://doi.org/10.1016/j.isprsjprs.2019.01.009
Watt, M.S., et al. (2020): Monitoring biochemical limitations to photosynthesis in N and P-limited radiata pine using plant functional traits quantified from hyperspectral imagery. Remote Sensing of Environment, 248, 112003, https://doi.org/10.1016/j.rse.2020.112003
Watt, M.S., et al. (2021): Long-term effects of water stress on hyperspectral remote sensing indicators in young radiata pine. Forest Ecology and Management, 502, 119707, https://doi.org/10.1016/j.foreco.2021.119707
Street-level imagery holds immense potential to scale up in-situ data collection, enabled by increasing computing resources and cheap high-quality cameras, along with recent advances in deep learning. We present a framework to collect and extract crop type and phenological information from street-level imagery using computer vision. During the 2018 growing season, high-definition pictures were captured with side-looking action cameras in the Flevoland province of the Netherlands. Every month from March to October, a fixed 200-km route was surveyed, collecting one photo per second and resulting in a total of 400,000 geo-tagged photos. In 220 specific parcels, corresponding to 200,000 photos, detailed crop phenology observations were made for 17 crop types: carrots, green manure, grassland, maize, onion, potato, summer barley, sugar beet, spring cereals, spring wheat, tulips, vegetables, winter barley, winter cereals and winter wheat. Classification was done using TensorFlow with a number of well-known image recognition models, primarily based on transfer learning with convolutional neural network modules (MobileNet). A hypertuning methodology was developed to obtain the best-performing model among 160 candidates. This best model was applied to an independent inference set, discriminating crop type with a macro F1 score of 88.1% and the main phenological stage with 86.9% at the parcel level. The potential and caveats of the approach, along with practical considerations for implementation and improvement, are discussed. The proposed framework speeds up high-quality in-situ data collection and opens avenues for massive data collection via automated classification using computer vision.
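The parcel-level scores quoted above are macro-averaged F1 values, which weight each class equally regardless of how common it is. A minimal sketch of that evaluation with scikit-learn (labels below are illustrative, not the study's data):

```python
# Illustrative macro F1 computation: per-class F1 scores are averaged so rare
# crop types count as much as frequent ones.
from sklearn.metrics import f1_score, classification_report

y_true = ["potato", "maize", "tulips", "potato", "onion", "maize"]
y_pred = ["potato", "maize", "tulips", "onion",  "onion", "maize"]

macro_f1 = f1_score(y_true, y_pred, average="macro")
print(f"Macro F1: {macro_f1:.3f}")
print(classification_report(y_true, y_pred))
```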
Meeting the needs of nature protection in the management of forest stands and agricultural lands, and their adaptation to climate change, requires precise information about (micro)climatic conditions. This information can be derived very efficiently from Earth observation (EO) data, including satellites, unmanned aerial systems (UAS) and aircraft, or from ground measurements. However, there is still a lack of knowledge about bridging the gap between these approaches at different scales and fusing them for long-term monitoring of climate change mitigation.
Our test site allows us to obtain ground truth information from dozens of sensors capturing local temperatures, evapotranspiration, ground wetness, groundwater levels, tree biometric parameters, and more, over a study area of more than 500 ha. The locality also has an advanced irrigation system that allows us to control the amount of groundwater. Our project uses these ground data, together with precise hydrological, pedological and botanical knowledge of the locality, to evaluate the temporal dynamics of the landscape and its microclimate. We observe how the locality changes under ongoing climate change, and how the (micro)climate is altered by newly built landscape elements (e.g., avenues, ponds) or by different agronomic practices intended to increase landscape resilience.
The project aims to connect our ground information with a wide range of remote sensing data from satellite (mainly Sentinel-1, -2 and -3), UAS-borne and airborne (multispectral, thermal, LiDAR, hyperspectral) sensors to estimate key (micro)climatic parameters. We also observe and predict the reaction of the landscape to different climatic changes. The goal is to develop a set of models for estimating key climatic parameters that will be reliably applicable to broad landscape types. We will also extrapolate our findings to predict the impact of climate change on the central European landscape.
We present a new hand-held system, LITERAL, capable of accurately measuring several crop characteristics, which can be used for validation activities of satellite products. In contrast to IoT sensors, which focus on the temporal monitoring of crop status but represent only a very small part of the canopy due to their reduced footprints, LITERAL is hand-held and therefore allows a more exhaustive spatial coverage of the crop. It can be considered the new generation of traditional devices such as digital hemispherical photography, the LAI-2000 or ACCUPAR, which are currently used to measure Green Area Index. It meets the need for an economical, easy-to-use yet precise measuring device for monitoring trials in small plots or a network of agricultural plots. LITERAL is also capable of deriving several important crop characteristics such as cover fractions; plant, green and senescent area indices; plant or ear density; and crop height.
LITERAL is a probe equipped with three high-resolution RGB cameras. All the sensors are connected to an acquisition unit that triggers the cameras, stores the data and communicates with a tablet PC, on which measurement protocols are designed via a user-friendly graphical interface. It minimizes user intervention to allow fast acquisition in the field. The data are automatically analyzed by advanced post-processing algorithms: semantic segmentation, object detection by deep learning, colorimetric analysis, and stereovision. These algorithms are configured per crop in order to obtain maximum traceability and precision. The quality of the images and the multiple possible configurations allow LITERAL to be used for many purposes: monitoring the growth of field crops, characterization of mixed crops, quantification of symptoms of leaf diseases, and measurement of plant or ear density. Ergonomic and scalable, LITERAL has been used by technical teams in France, Portugal and Australia for phenotyping applications over the last two years. Wider dissemination, as well as other applications, is planned from 2022.
Estimation of aboveground biomass in clover-grass mixtures using UAV-based vegetation indices and canopy height
Konstantin Nahrstedt (1), Tobias Reuter (2), Dieter Trautz (2), Thomas Jarmer (1)
1 Institute of Computer Science, Osnabrück University, 49090 Osnabrück, Germany, konstantin.nahrstedt@uni-osnabrueck.de, thomas.jarmer@uni-osnabrueck.de
2 Faculty of Agricultural Sciences and Landscape Architecture, University of Applied Science Osnabrück, 49090 Osnabrück, Germany, tobias.reuter@hs-osnabrueck.de, d.trautz@hs-osnabrueck.de
Clover-grass mixtures are used in agriculture primarily as a forage crop and a natural source of nitrogen for subsequent crops. However, clover-grass stands are characterized by high spatial heterogeneity in terms of species composition and biomass. Especially for developing appropriate management recommendations, biomass is an important parameter for quantifying plant stand structure. Currently, the determination of biomass in clover-grass fields is still often performed with laborious manual measurement methods. UAV-based image data offer an efficient possibility for multi-temporal monitoring of field structure development, in which plant parameters can be estimated at high temporal and spatial resolution. Based on this, recommendations for field management can be issued at different phenological stages.
For this purpose, drone-based multispectral images were acquired at regular intervals over an organically managed clover-grass field in Osnabrück (Germany). For the quantitative evaluation of the phenological development of the clover-grass stand, images were acquired during three flights between the second and third cut. Destructive in-situ biomass measurements were collected at each acquisition date. To model the field-measured biomass, multispectral vegetation indices were calculated. The Normalized Difference Vegetation Index (NDVI) and the Ratio Vegetation Index (RVI) are suitable indices for distinguishing different structures in crop stands and were therefore used to estimate (fresh matter) biomass in this study. Furthermore, Structure-from-Motion (SfM)-based canopy height was included as a modeling parameter. The suitability of spectral and spatial information for biomass estimation was tested by contrasting the model performances of the individual parameters. In addition, it was tested whether a combination of different parameters provided better biomass predictions. The biomass was first modeled in a multitemporal approach with all recording dates. Subsequently, temporal effects were analyzed by calculating regression models for each individual date.
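A minimal sketch of the index-plus-height regression just described, assuming per-plot band means and SfM canopy height have already been extracted; the file name, column names and the use of ordinary least squares are assumptions for illustration:

```python
# Illustrative biomass regression from NDVI, RVI and SfM canopy height.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

plots = pd.read_csv("plot_means.csv")          # hypothetical per-plot means

red, nir = plots["red"], plots["nir"]
ndvi = (nir - red) / (nir + red)               # Normalized Difference VI
rvi = nir / red                                # Ratio VI
height = plots["sfm_canopy_height_m"]          # SfM-based canopy height

X = np.column_stack([ndvi, rvi, height])       # combined spectral + structural
y = plots["fresh_biomass_t_ha"]                # destructive in-situ biomass

model = LinearRegression().fit(X, y)
print("in-sample R² =", r2_score(y, model.predict(X)))  # illustration only
```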
The multispectral indices performed similarly, with an R² of 0.61 (NDVI) and 0.64 (RVI) for the combination of all acquisition dates, while the SfM-based biomass estimation exhibited high modeling quality (R² = 0.73). A combination of these parameters provided added value, since spatial and spectral information were merged; thus, both individual plant growth and reflectance behavior are taken into account in the evaluation of crop development. An R² of 0.76 showed high accuracy in estimating clover-grass biomass using multispectral indices and SfM-based canopy height in combination. The investigation of individual time steps showed that the choice of the recording date has a clear impact on prediction quality. At the first recording date, the experimental field was characterized by a high degree of spatial heterogeneity, since the vegetation cover had not yet closed by that time. In particular, soil segments influenced the reflectance signal and reduced biomass estimation accuracy. This contrasts with the higher estimation accuracy at the recording date just before harvest. Spatial analysis of the predicted biomass based on UAV image data showed a lower amount of biomass at the first imaging date than at subsequent dates, as well as a heterogeneous distribution over the entire study site at all acquisition dates. Especially with regard to the multitemporal approach, the results of this study can be used as a basis for issuing site-specific recommendations for field management by transferring model predictions to UAV imagery.
In the context of the HYPERNETS project, which has developed the relatively low-cost hyperspectral radiometer HYPSTAR® (and associated pointing system) for automated measurement of water and land bidirectional reflectance, the tidal coastal marsh in the Mar Chiquita lagoon (Argentina) was characterized as a test site for the validation of radiometric variables [Piegari 2020]. This site is a coastal habitat that provides ecosystem services essential to people and the environment [Millennium Ecosystem Assessment 2005], and its vegetation is dominated by Sporobolus densiflorus [Trilla 2016]. There is evidence that the growth and photosynthetic apparatus of this species are negatively affected by the herbicide glyphosate [Mateos-Naranjo 2009], which is extensively used in Argentinean agricultural production [Aparicio 2013]. Previous studies have shown the potential of remote sensing to monitor plant injury from glyphosate using hyperspectral data [Huang 2012, Zhao 2014]. In particular, NDVI (Normalized Difference Vegetation Index) and PRI (Photochemical Reflectance Index, an indicator of photosynthetic activity) are spectral indices typically used to evaluate plant condition. In this context, the HYPSTAR® instrument will provide high-quality in situ reflectance, at fine spectral resolution (10 nm FWHM) in the 400-1700 nm range with automated measurements every 30 minutes, useful for the validation of surface reflectance data from all present and future Earth observation missions and for monitoring the health status of the vegetation. This will allow further exploration of whether herbicide effects can be detected using spectral indices (designed for green vegetation) in natural environments characterized by a large fraction of standing litter, such as the Buenos Aires Atlantic coastal marshes.
In this study we sought to determine whether it is possible to detect the effect of glyphosate on the spectral response of S. densiflorus using chlorophyll fluorescence and hyperspectral data. To achieve this, samples of S. densiflorus adult clumps were taken in Punta Piedras (35°34'40.1"S 57°15'11.9"W, Buenos Aires, Argentina). Clumps of about 18 cm diameter were planted in 21 individual plastic pots, with a diameter of 24 cm and a height of 28 cm, filled with marsh soil. The pots were randomly separated into three sets (seven pots per treatment): two doses of a glyphosate-based herbicide (GlacoXAN TOTAL; 43.8 g active ingredient/100 ml, Argentina), at 876 g a.i./ha and 7200 g a.i./ha, and an untreated control. The herbicide was applied homogeneously over the leaf surfaces, early in the morning and in the absence of wind, using a pulverizer (250 ml of spray volume). Photosynthetic parameters were acquired on randomly selected, fully developed leaves attached to the plants, using a portable PAR-FluorPen FP 100-MAX-LM fluorometer from Photon Systems Instruments (Czech Republic). Leaves were dark-adapted for 20 min and measurements were then performed following the OJIP protocol [Stirbet 2011]. Radiometric measurements were obtained using a FieldSpec3® field spectrometer from Analytical Spectral Devices (ASD), Inc. (Boulder, Colorado), which covers the spectral range between 350 and 2500 nm. Reflectance spectra at leaf level were acquired with the Plant Probe and Leaf Clip accessories (ASD); at canopy level, measurements were carried out with 7 pots per scene, swapping the pots so that each of the 7 was placed in the center, generating 7 different scenes per treatment.
The photosynthetic parameters derived from the OJIP test and the reflectance measurements at leaf and canopy levels were obtained for the three treatments 1, 8 and 15 days after treatment (DAT). Analysis of variance (ANOVA) tests were performed together with a Fisher LSD test to evaluate significant differences. The results show that, by means of photosynthetic parameters and spectral indices, glyphosate injury in S. densiflorus could be detected early. The maximum quantum yield of photosystem II (Fv/Fm), which is considered a sensitive indicator of plant photosynthetic performance, shows differences between the control and the low- and high-dose treatments (p < 0.05). A significant decrease in Fv/Fm with respect to the control is observed for the low- and high-dose treatments at 8 DAT and 1 DAT, respectively. Among the several spectral indices that were tested as indicators of glyphosate injury at leaf and canopy levels, it is notable that changes in PRI at canopy level are detectable at 15 DAT for both low and high doses (p < 0.05). In the frame of hyperspectral missions, such as PRISMA and the more dedicated high-resolution Fluorescence Imaging Spectrometer (FLORIS) planned for the FLEX mission, these results are promising for the early detection of loss of marsh vegetation from remote sensing.
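For readers unfamiliar with the fluorescence parameter, Fv/Fm = (Fm - F0)/Fm is derived from the dark-adapted minimum (F0) and maximum (Fm) fluorescence. A minimal sketch of that computation and the accompanying one-way ANOVA; the fluorescence values below are made up for illustration, and the Fisher LSD post-hoc step is omitted:

```python
# Illustrative Fv/Fm computation and one-way ANOVA across treatments.
import numpy as np
from scipy import stats

def fv_fm(f0, fm):
    """Maximum quantum yield of PSII from dark-adapted F0 and Fm."""
    f0, fm = np.asarray(f0), np.asarray(fm)
    return (fm - f0) / fm

control   = fv_fm([300, 310, 295], [1500, 1540, 1480])  # made-up replicates
low_dose  = fv_fm([320, 335, 330], [1400, 1380, 1420])
high_dose = fv_fm([400, 415, 390], [1100, 1050, 1120])

f_stat, p_value = stats.f_oneway(control, low_dose, high_dose)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests a treatment effect
```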
References
Aparicio, V. C., De Gerónimo, E., Marino, D., Primost, J., Carriquiriborde, P., & Costa, J. L. 2013. Environmental fate of glyphosate and aminomethylphosphonic acid in surface waters and soil of agricultural basins. Chemosphere, 93(9), 1866-1873.
Millennium Ecosystem Assessment. 2005. Ecosystems and Human Well-being: Wetlands and Water, 5. Washington, DC: World Resources Institute.
Huang, Y., Thomson, S. J., Molin, W. T., Reddy, K. N., & Yao, H. 2012. Early detection of soybean plant injury from glyphosate by measuring chlorophyll reflectance and fluorescence. Journal of Agricultural Science, 4(5), 117.
Mateos-Naranjo, E., Redondo-Gomez, S., Cox, L., Cornejo, J., & Figueroa, M. 2009. Effectiveness of glyphosate and imazamox on the control of the invasive cordgrass Spartina densiflora. Ecotoxicology and Environmental Safety, 72(6), 1694-1700.
Piegari, E., Gossn, J. I., Juárez, Á., Barraza, V., Trilla, G. G., & Grings, F. 2020. Radiometric Characterization of a Marsh Site at the Argentinian Pampas in the Context of Hypernets Project (A New Autonomous Hyperspectral Radiometer). IEEE Latin American GRSS & ISPRS Remote Sensing Conference (LAGIRS) (pp. 591-596). IEEE.
Stirbet, A., & Govindjee. 2011. On the relation between the Kautsky effect (chlorophyll a fluorescence induction) and Photosystem II: basics and applications of the OJIP fluorescence transient. Journal of Photochemistry and Photobiology B, 104(1-2), 236-257.
Trilla, G. G., Pratolongo, P., Kandus, P., Beget, M. E., Di Bella, C., & Marcovecchio, J. 2016. Relationship between biophysical parameters and synthetic indices derived from hyperspectral field data in a salt marsh from Buenos Aires Province, Argentina. Wetlands, 36(1), 185-194.
Zhao, F., Huang, Y., Guo, Y., Reddy, K.N., Lee, M.A., Fletcher, R.S., & Thomson, S.J. 2014. Early Detection of Crop Injury from Glyphosate on Soybean and Cotton Using Plant Leaf Hyperspectral Data. Remote Sensing, 6, 1538-1563.
Due to the technological development and advantages of UAV-based remote sensing solutions, new possibilities arise for monitoring agricultural crops while efficiently sensing crop biophysical parameters (CBP) in a close-range scenario. The approach presented here highlights the capabilities of a data fusion approach that combines LiDAR (RIEGL miniVUX-1UAV) and multispectral data (Micasense Altum) to assess CBPs such as dry above-ground biomass (AGB) for maize, one of the most widely cultivated crops worldwide. Due to its canopy structure, maize plant parameters are relatively hard to assess, especially at later phenological stages when usable parts such as cobs are hidden. The combination of LiDAR and multispectral data not only allows the estimation of AGB, but also helps to evaluate phenological stage-specific growth and vitality parameters. This could help farmers either directly via close-range monitoring or indirectly via Earth Observation (EO) missions, powered by close-range UAV-based ground truth models, to improve plant management at the macro scale.
With a relatively low flight height of 20 m above ground for the LiDAR system (resulting in an average point spacing of approx. 0.05 m) and 25 m for the multispectral system (GSD approx. 0.01 m), the plant physiology and interrelated CBPs can be assessed with high precision, potentially taking into account different growth stages thanks to narrow flight date intervals. LiDAR-derived information (mean range-corrected single-return intensity, mean return ratio (first returns/all returns), and mean height) is then combined with multispectral vegetation indices computed from the six available spectral bands. In addition to the usual corrections, an additional correction processing chain was developed and tested for the LiDAR data.
A support vector regression is applied to ground truth data from two sampling dates, totalling 96 samples, with additional validation of the resulting model. The resulting R² of more than 0.7 for dry AGB is a promising result for promoting non-destructive UAV-based ground truth solutions to support spatial upscaling for EO missions.
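A minimal sketch of this fusion regression, assuming the LiDAR metrics and vegetation indices have already been extracted per sample; file names, feature layout and hyperparameters are assumptions, not the study's configuration:

```python
# Illustrative support vector regression of dry AGB from fused
# LiDAR metrics and multispectral vegetation indices.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# X: one row per sample (96 in the study); columns e.g. mean corrected
# intensity, return ratio, mean height, plus per-band vegetation indices.
X = np.load("features.npy")        # hypothetical file
y = np.load("dry_agb_g_m2.npy")    # hypothetical file

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R²:", scores.mean())
```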
Earth observation data acquired by the Copernicus satellites can be deployed for frequent large-scale monitoring of vegetation parameters on agricultural fields. From the perspective of agricultural advisors and farmers, added value arises in particular if those data are used to derive products addressing parameters that are relevant to management decisions. Such products should come along with a sound uncertainty assessment and management recommendations. For this, in-situ data, knowledge of processes, and local knowledge are required. In-situ data are mainly deployed to calibrate, update and validate satellite-aided retrieval models. Knowledge of processes and local knowledge provide a basis for developing management recommendations. In-situ data and local knowledge are collected in different projects by a wide variety of people, including scientists as well as co-researchers (e.g., citizens). Thus, the measurement and sampling designs vary between measured and observed parameters, which leads to variable data quality. This makes it difficult to merge data of the same kind from different projects and therefore hampers the reuse and automatic analysis of the data. Modularly designed and customizable applications for mobile devices (apps) represent a framework that can help foster the standardisation of data sampling methods and strategies. At the same time, they provide enough flexibility to be adjusted for use in various scenarios.
The FieldMApp represents such a modularly designed application. Its open structure permits the reuse of existing software components, customizable adjustments, and the addition of new modules necessary to fulfil the specific requirements of the research project at hand. The FieldMApp is built exclusively on open-source libraries. It can be compiled to run on Android or iOS, allows the integration of internal and external sensors, and is designed to work in either online or offline mode. Acquired data are stored in a machine-readable format. The FieldMApp concept includes tools that support data validation and uncertainty assessments. The overall uncertainty of acquired data is estimated by considering sources of systematic and random errors, which depend on the modular set-up. Accordingly, the FieldMApp provides mutually compatible data sets, thus increasing their reuse potential.
In this contribution, the concept and structure of the FieldMApp will be presented. Its application will be demonstrated within the frame of a use case of the project Agrisens – DEMMIN 4.0, in which low-yield areas on acreages were identified and characterised by farmers during on-site agricultural operations. The relevance of such data for agricultural management and the Common Agricultural Policy will be outlined, and examples of other fields of application within the agricultural sector will be highlighted.
With the latest generation of Earth observation sensors and advanced processing techniques, remote sensing technologies increasingly enable the assessment not only of land cover but also of land use. This is in line with the current political agenda and the rising need for spatially explicit and regularly updated land use information to support environmental monitoring programmes. However, mapping land use from satellite time series remains more difficult than mapping land cover. One key limiting factor is suitable reference data for model calibration and validation. For grassland management, which can consist of various activities throughout the year, in-situ information at high temporal resolution is needed to fully assess the extent to which management can be remotely assessed.
We used freely available daily webcam images to investigate the extent and accuracy with which grassland use is captured by Sentinel-2 time series. For 57 webcams distributed across Switzerland, one to three reference locations each were defined and georeferenced, resulting in a total of 82 reference locations and around 27'000 daily interpretations of grassland use. We extracted and processed Sentinel-2 NDVI time series for those locations and developed an algorithm to detect main management events such as mowing or grazing.
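One plausible form of such an event detector is to flag dates where NDVI drops sharply relative to the previous clear-sky observation; the rule and threshold below are assumptions for illustration, not the study's actual algorithm:

```python
# Illustrative NDVI-drop detector for candidate mowing/grazing events.
def detect_events(dates, ndvi, min_drop=0.15):
    """Return dates where NDVI fell by at least `min_drop` since the
    previous valid (cloud- and snow-masked) observation."""
    events = []
    for i in range(1, len(ndvi)):
        if ndvi[i - 1] - ndvi[i] >= min_drop:
            events.append(dates[i])
    return events

# ndvi: masked Sentinel-2 NDVI values at one reference location
# events = detect_events(acquisition_dates, ndvi_series)
```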
Our findings show that management events represented within the NDVI time series were in most cases (>80%) indeed related to mowing or grazing. In contrast, a large proportion of the mowing events (around 40%) and most grazing events (around 80%) recorded on the webcams were not detected in the NDVI time series. Visual inspection of the NDVI time series revealed that grazing events often showed little to no signal, but in the case of mowing, most of the omitted events might be captured by fine-tuning the algorithm. The large omission error for grazing might be explained by the fact that many of our webcams showed extensively managed mountain pastures with a low stocking density. In general, the density of clear-sky and snow-free observations seems to be essential, as NDVI values recovered within one to two weeks after mowing or grazing. Furthermore, mowing and intensive grazing could not be distinguished, suggesting that significant drops in NDVI should be interpreted more generally as removal of biomass within a short period of time. In addition, a first visual inspection indicates that fertilisation is not sufficiently reflected in the NDVI time series to be detected, even though such events were registered by one third of the webcams.
The comparison with daily webcam images proved useful for further improving the algorithm and for better understanding the limitations and possibilities of satellite-based grassland use assessment. Furthermore, these reference data of high temporal resolution allow testing whether the integration of additional remote sensing data (e.g. Sentinel-1, PlanetScope) is beneficial.
The modernized Common Agricultural Policy 2023-2027 in the European Union highlights the need for Paying Agencies to perform checks on a much finer time-scale and to quantify the impact of various practices on natural ecosystems. The introduction of participatory sensing and smart sensors enables the cost-effective establishment of an additional data layer to complement, validate and enhance the predictive performance of critical agro-environmental parameters. In particular, the advent of low-cost, portable and handheld spectrometers realized using microelectromechanical systems enables the rapid and non-destructive measurement of a soil's reflectance spectrum.
To this end, we propose a methodology based on a set of handheld SWIR (1750-2150 nm) soil spectrometers for real-time in situ estimation of soil properties, leveraging existing Soil Spectral Libraries (SSLs) and efficient deep learning techniques. This novel sensing system was tested under real field conditions in 180 fields during the summer of 2021. A collection of 240 distinct topsoil samples distributed over six different regions in Lithuania and Cyprus was measured under both in situ and laboratory conditions. For the laboratory case, sample pre-processing (air-drying and passing through a 2 mm sieve) was performed to remove the effects of ambient factors. The acquired spectral signatures formed two sets, over which a convolutional neural network was developed to map the in situ spectral signatures to those acquired after the laboratory pre-treatment, thereby eliminating the effects of moisture, shadow, and non-soil materials on the spectra. This technique yielded a new dataset of "transformed" spectra. Using the values of these transformed spectra as predictors for the estimation of Soil Organic Carbon (SOC) gave enhanced predictive performance (R² = 0.80), evaluated over an independent test set containing 20% of the samples, compared to a model using the original in situ spectra as predictors (R² < 0.2).
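A minimal sketch of such a spectra-to-spectra mapping network; the layer sizes, band count and training setup are assumptions, not the authors' architecture:

```python
# Illustrative 1D CNN mapping in situ SWIR signatures to their
# laboratory (air-dried, sieved) counterparts, band by band.
import tensorflow as tf

N_BANDS = 400  # hypothetical number of spectral samples in 1750-2150 nm

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_BANDS, 1)),
    tf.keras.layers.Conv1D(32, 7, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(32, 7, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(1, 1, padding="same"),   # one output per band
    tf.keras.layers.Flatten(),
])
model.compile(optimizer="adam", loss="mse")

# X_insitu: (n_samples, N_BANDS, 1) field spectra; Y_lab: (n_samples, N_BANDS)
# spectra of the same samples after laboratory pre-treatment.
# model.fit(X_insitu, Y_lab, epochs=100, validation_split=0.2)
```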
The proposed approach broadens the possibilities of merging collections of in situ spectra with existing SSLs and further highlights the need for the development of a universally accepted sensing protocol. Furthermore, SOC and other soil properties that can be monitored with diffuse reflectance spectroscopy can easily be scaled up and act as a bridge to Earth observation data in a bottom-up approach and in support of the Copernicus in situ component, under the hypothesis of reliable estimations of the targeted soil quality indices.
Agricultural landscape features are small elements of non-productive semi-natural vegetation embedded in agricultural landscapes. This definition includes several characteristic elements of traditional and historical European agricultural landscapes, such as hedges, ponds, ditches, trees in lines, groups or isolation, field margins, terraces, dry-stone or earth walls, planted areas, individual monumental trees, springs, and historic canal networks. These elements had important functions linked to traditional agricultural management practices. In the 20th and 21st centuries, some functions of landscape features have diminished: for example, rural populations are less and less reliant on hedgerows for fencing their livestock or on firewood from field coppices. Nevertheless, other functions, such as windbreaks and erosion protection, have remained intact, and "new" functions, such as the maintenance of agricultural biodiversity, have also emerged. It is recognized that these small vegetation fragments play a key role in maintaining biodiversity and ecosystem services in European agricultural landscapes. In fact, as agricultural areas occupy around 45% of the EU-27, landscape features have gained new importance in addressing the key environmental challenges of the 21st century. The role of landscape features in agricultural land is highlighted in several key strategies and directives of EU policy, including the Common Agricultural Policy, the Biodiversity Strategy, the Water Framework Directive, and the Nitrates Directive. Accordingly, the share of landscape features could be a key indicator of the ecological condition of agricultural landscapes in the EU.
Despite their recognized importance, the mapping and monitoring of landscape features remain a challenge for several reasons. Landscape features are small heterogeneous objects with special characteristics that make their mapping difficult. Their reliable identification would require very high resolution (VHR) data over large areas, preferably in multiannual time series, so that a key distinction between permanent and temporary features can be made. The definitions and typologies of landscape features also need to be harmonised across policy sectors, in a way that is amenable to operationalization in remote sensing applications.
In the European Union (EU), a triennial surveyed sample of land cover and land use has been collected since 2006 under the Land Use/Cover Area frame Survey (LUCAS). Starting with the upcoming LUCAS 2022 survey, a new dedicated Landscape Features (LF) module will be implemented in agricultural landscapes all over Europe, using a statistically balanced subset of 93,000 sampling units from the overall LUCAS sampling frame. In each sampling unit, a fixed grid of 41 equally spaced subpoints will be assessed for the presence of landscape features, in two steps: first, a visual interpretation of high-resolution orthophotos, and second, a field-based verification within the framework of the LUCAS field survey. The data collected will provide an extensive reference data set, which can then be used for an unbiased area estimation of the main landscape feature types in all EU Member States and at their subnational level (NUTS2). Accordingly, this dataset will provide a robust sample of ground truth records based on an operational definition and a simplified functional typology of landscape features, which can then be used to implement efficient workflows for the future identification and mapping of landscape features in agricultural land.
The multi-temporal capabilities of tower-based experiments are an essential component for disentangling the multiple causes behind the temporal variability of microwave backscatter. For vegetation covers, it is often challenging to distinguish between dry and fresh biomass changes, in addition to wind-induced motion effects. Supported by the quasi-continuous acquisitions (every 15 min) of the TropiScat-2 experiment, which has been operating since 2018 over a dense tropical forest at the Paracou test site in French Guiana (as the successor of the original TropiScat experiment, which ran from 2011 to 2014), we show that the diversity of observation conditions makes it possible to isolate and characterize the main causes of backscatter variations, especially through the diurnal patterns driven by convective effects and the seasonal variations driven by dry or rainy periods of up to 500 mm/month. These results have provided a key database for the design of the BIOMASS mission's interferometric and tomographic repeat passes, but they are also very relevant for anticipating the best ways to interpret future signal and product variations with respect to meteorological observations. Nonetheless, the applicability of these results at wider scales over tropical forests raises several questions, especially with respect to the adequacy between the local observations derived from the Guyaflux meteorological sensors and hourly spaceborne observations at a much coarser spatial resolution (about 10 km). Fostered by the need for operational concepts at the time of the BIOMASS acquisitions, our study will focus first on the selection of the most relevant meteorological parameters explaining the P-band backscatter variation patterns derived from the TropiScat-2 time series, and then on comparisons between our tower-based local measurements and the ERA5 datasets derived from the ECMWF reanalysis products combining data modeling and assimilation. Finally, the upscaling questions and challenges regarding the radar observations from TropiScat-2 at P-, L- and C-band will also be addressed, given the signal reconstruction possibilities from tomography and the opportunities for cross-comparison with Sentinel-1 time series at C-band.
The Precursore Iperspettrale della Missione Applicativa (PRISMA) mission of the Italian Space Agency (ASI) is evolving into a great scientific success and has been providing excellent hyperspectral datasets since the end of its commissioning phase in January 2020. With the upcoming launch of the German Space Agency (DLR) Environmental Mapping and Analysis Program (EnMAP), as well as ongoing progress on ESA's Copernicus Hyperspectral Imaging Mission (CHIME), the number of spaceborne hyperspectral datasets will increase to yet unknown dimensions. For the first time, both the quality and the availability (temporal and spatial) of datasets will meet researchers' requirements. Nevertheless, real-world applications based on these datasets call for robust and well-tested algorithms and models. These are typically developed using ground-based measurements, mostly acquired with hand-held hyperspectral imaging (HSI) or sampling (HSS) devices in field trials. Here, the training of young researchers in the field of hyperspectral spectroscopy comes into focus. Furthermore, a thorough understanding of the matter can only be achieved by a personal, hands-on approach to the collection of HSI/HSS data.
Hand-held HSI/HSS platforms need to perform well in the following three categories:
- Accuracy and range of wavelength reproduction
- Portability and accessibility of the platform
- Low initial cost and reparability
Existing platforms often fall short in at least one of the above-mentioned categories. As the HSS market leader, the ASD FieldSpec-4 Hi-Res system serves as an example. It features excellent optical characteristics, both in spectral range and in resolution/bandwidth. But its high weight, as well as the overall fragility of the optical system, makes it challenging for in-field applications. Also, with an initial price in the higher five-digit range and a closed-source design, it is far from accessible for smaller institutions or individual researchers.
To simplify the optical system, microspectrometers (MSPs) may be considered. MSPs, despite being more expensive than a ground-up development of the optical design (e.g., Salazar-Vazquez and Mendez-Vazquez, 2020), greatly reduce the complexity of the optical system by providing a compact assembly of slit, grating and sensor. Laganovska et al. (2020) and Sandak et al. (2020) both implemented the Hamamatsu C12666MA MSP for applications under artificial light sources, using the sensor to analyse vitamin B12/phosphate content in samples and to detect wood defects, respectively. Sonobe et al. (2021) have successfully shown the capability of the C12880MA MSP for the estimation of the chlorophyll content of Zizania latifolia (water sprout).
The objective of our study is the development of a hand-held, low-cost hyperspectral VIS HSS platform for educational and scientific applications, intended to address all of the above-mentioned requirements. The Tinker Scanner (Tisca) consists of a 12 cm x 12 cm cube housing the optical, electrical, support and communication systems. The compact, rugged design and low weight (300 g) make it versatile, widely portable, and accessible. The current prototype comes at a cost of merely €700.
The optical system consists of two Hamamatsu C12880MA MSPs. These feature a spectral range of 340 nm to 850 nm at a peak resolution of 15 nm and approximately 1.8 nm spectral sampling. To reduce the viewing angle effects induced by the 50 x 500 µm entrance slit, fused silica diffusers are used. Tisca features two main modes of operation: single-spectrometry mode (S0) for applications in laboratory settings and live-reflectance mode (S1) for hyperspectral sampling in field-trial conditions. One MSP is located at the bottom of the cube, the other on the top side.
Whereas S0 represents a more conservative take on HSS, S1 enables researchers to measure both incoming and target-reflected light simultaneously (up to two full measurements per second), thus removing the need for calibration with expensive and sensitive reference panels. To enable usage under ambient light conditions, neutral density filters are applied.
For easy and intuitive use of the platform, a Raspberry Pi 4B microcomputer is implemented for data collection and distribution.
The Raspberry Pi functions as a server and access point for Tisca communications. A web interface based on the RStudio Shiny package is hosted on it and allows full wireless control of the spectrometer (settings, live data view, processing and data download). It is accessible via an internet browser, with no need for additional software on the user side. The Tisca UI can be used simultaneously by several users connected to the internal WiFi, rendering it optimal for use in classroom environments.
Preliminary results from the first working prototype have shown promising potential. Both wavelength reproduction and accuracy meet the requirements. Wavelength conversion factors for the individual MSP channels are provided by the manufacturer. In first tests, raw DN (digital number) radiances of artificial light sources and target reflectances (e.g., vegetation, rocks, soil) were successfully assessed. Currently ongoing tests involve laboratory time-series measurements of leaves in the process of pigment decomposition, to gain insight into the dynamics of chlorophyll and carotenoid signatures. For intercalibration purposes, readings are compared to ASD FieldSpec-4 standard measurements.
The possible range of scientific and educational applications of Tisca is manifold. One primary goal of the Tisca platform is to assist in model development for existing and upcoming multi- and hyperspectral missions. For this reason, Tisca will feature different retrieval modes. In one mode, the user will be able to simulate a remotely sensed (RS) pixel (within the MSP wavelength range) by walking the respective pathways on the target area: three transects would form the sampling pattern parallel to the flight path, and the averaged spectra would subsequently be aggregated to simulate pixels of a medium-resolution spaceborne sensor. With a battery life of up to 10 hours, the collection of a dense time series with a temporal resolution of up to two measurements per second is possible. As an example, the continuous monitoring of plant pigments in the 340 nm to 850 nm domain becomes feasible under open-sky conditions.
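A minimal sketch of the pixel-simulation mode just described, assuming transect spectra are available as arrays; the aggregation scheme is an assumption for illustration:

```python
# Illustrative pseudo-pixel aggregation from walked transect spectra.
import numpy as np

def simulate_pixel(transect_spectra):
    """transect_spectra: list of (n_samples_i, n_bands) arrays, one per
    transect. Returns the aggregated (n_bands,) pseudo-pixel spectrum."""
    per_transect = [s.mean(axis=0) for s in transect_spectra]  # mean per transect
    return np.mean(per_transect, axis=0)                       # mean of transects

# pixel = simulate_pixel([transect_1, transect_2, transect_3])
# The result could then be convolved with a satellite sensor's spectral
# response functions to approximate its bands.
```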
All documentation, schematic diagrams, firmware and STL files will be made available on the authors' GitHub, following the open hardware mindset of the project. The next prototype iteration of Tisca will be presented at LPS 2022 in Bonn, where visitors will be able to gain hands-on experience.
References:
Laganovska, K., Zolotarjovs, A., Vazquez, M., Mc Donnell, K., Liepins, J., Ben-Yoav, H., Karitans, V., Smits, K. (2020): Portable low-cost open-source wireless spectrophotometer for fast and reliable measurements. In: HardwareX, 2020 (e00108), 1 – 12.
Salazar-Vazquez, J., Mendez-Vazquez, A. (2020): A plug-and-play Hyperspectral Imaging Sensor using low-cost equipment. In: HardwareX, 2020 (e00087), 1 – 22.
Sandak, J., Sandak, A., Zitek, A., Hintestoisser, B., Picchi, G. (2020): Development of Low-Cost Portable Spectrometers for Detection of Wood Defects. In: Sensors, 2020(20,545), 1 – 20.
Sonobe, R., Yamashita, H., Nofrizal, A., Seki, H., Morita, A., Ikka, T. (2021): Use of spectral reflectance from a compact spectrometer to assess chlorophyll content in Zizania latifolia. In: Geocarto International, 2021 (6049), 1 – 13.
Land Use Land Cover (LULC) changes induced by human or natural processes drive the biogeochemistry of the Earth and influence the climate. Changes in land cover due to anthropogenic activities enhance heat emission from the land surface, increasing atmospheric temperatures and Land Surface Temperature (LST). Due to the complexity of landscapes, it has been difficult to derive relationships between LST and environmental response, but temporal data acquired over the entire Earth surface by spaceborne remote sensors have helped to bridge this gap. In this study, an attempt has been made to assess the spatio-temporal dynamics of LST and to establish the relationship between Land Use Land Cover Change (LULCC) and Land Surface Temperature Change (LSTC) in a part of Muscat City, Oman. The present work analyzes the relationship between LULC and LST, determining the influence of LULC on LST. Landsat time series data for the period between 1985 and 2021 were used. LST and LULC classes were retrieved and extracted from the Landsat data: the thermal infrared bands, supported by ground-based measurements, were used to retrieve LST, while the visible (blue, green, red), NIR and SWIR bands were used to extract the LULC classes. The results show that land surface temperatures increased significantly in the study region between 1985 and 2021. They also show a positive relationship between LULCC and LSTC in the study area: LST is significantly affected by surface type and varied significantly across LULC types.
Uncontrolled, unplanned, and unprecedented urbanization characterizes most African cities. Drastic changes in the urban landscape can lead to irreversible changes to the urban thermal environment, including changes in the spatiotemporal pattern of the land surface temperature (LST). Studying these variations will help us take urban climate change mitigation and adaptation measures. This study maps the effects of urban blue-green landscapes on LST using geospatial techniques in Addis Ababa, Ethiopia, from 2006 to 2021. The object-based image analysis (OBIA) method was applied for land use/land cover (LULC) classification using high-resolution imagery from the SPOT 5 and Sentinel-2A satellites. Moreover, LST was retrieved from the thermal imagery of Landsat 7 ETM+ (band 6) and Landsat 8 TIRS (band 10) using the Mono-Window Algorithm (MWA). Furthermore, linear regression analysis was used to determine the relationship of LST with the normalized difference vegetation index (NDVI), the normalized difference built-up index (NDBI), and the modified normalized difference water index (MNDWI).
Five major LULC classes were identified, namely built-up, vegetation, urban farmland, bare land, and water. The results show that the built-up area was the most dominant LULC class in the city and has shown a drastic expanding trend, with an annual growth rate of 4.4% at the expense of urban farmland, vegetation, and bare land over the last 15 years. The findings demonstrate that 53.7% of urban farmland, 48.1% of vegetation, and 59.4% of bare land were transformed into the built-up class from 2006 to 2021. The mean LST showed an increasing trend, from 25.8°C in 2006 to 27.2°C in 2016 and 28.2°C in 2021. LST was found to vary among LULC classes. The highest mean LST was observed for bare land, with average values of 26.9°C, 28.7°C, and 30.1°C in 2006, 2016, and 2021, respectively, while the lowest mean LST was recorded for vegetation, with average values of 24.3°C in 2006 and 26.0°C in 2021, and for water, with a mean LST of 25.5°C in 2016. The regression analysis showed a strong negative correlation between NDVI and LST, a strong positive correlation between NDBI and LST, and a weak negative correlation between MNDWI and LST. The findings indicate that LULC alteration contributed to the modification of LST in Addis Ababa during the period. The regression analysis further revealed that built-up area and vegetation cover play a decisive role in the variation of LST in the city compared to urban surface water. The findings of this study will be helpful for urban planners and decision-makers when planning and designing future urban blue-green interventions in the city and beyond.
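For context, one common formulation of the Mono-Window Algorithm referenced above is that of Qin et al. (2001) for Landsat TM band 6; the form below is quoted from that literature as an illustration, not from this abstract:

\[
C = \varepsilon\,\tau, \qquad D = (1-\tau)\,\bigl[1 + (1-\varepsilon)\,\tau\bigr]
\]
\[
T_s = \frac{a\,(1 - C - D) + \bigl[b\,(1 - C - D) + C + D\bigr]\,T_{\mathrm{sensor}} - D\,T_a}{C}
\]

where \(T_{\mathrm{sensor}}\) is the at-sensor brightness temperature, \(T_a\) the effective mean atmospheric temperature, \(\varepsilon\) the surface emissivity, \(\tau\) the atmospheric transmittance, and \(a, b\) sensor-specific linearization coefficients.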
The ECOSTRESS thermal radiometer on the International Space Station has a 70-m pixel scale and a sub-daily to 5-day revisit interval. It resolves thermal patterns at sub-pixel scales relative to the highest-resolution operational SST products, and is especially useful in coastal regions with complex shorelines, where it provides a seamless skin temperature product from coastal uplands, through the intertidal zone, to the coastal ocean. We validated ECOSTRESS SST and at-sensor radiances against co-located cloud-free ocean pixels from other satellites, and against in-situ observations from NOAA iQuam and shipborne radiometers, to establish a bias correction for use in coastal regions. We examined spatial variation in SST at different tide stages on tidal flats in Mont Saint-Michel Bay, France (tidal range 10 m), Arcachon, France (tidal range 5 m) and Galicia, NW Spain (tidal range 3 m). ECOSTRESS resolves the position of the water line at all stages of the tidal cycle. This allows three important determinations: (1) quantification of surface temperature changes during flood and ebb, (2) quantification of the degree of tidally dependent land contamination of operational SST product pixels, and (3) quantification of thermal stress during aerial exposure of intertidal surfaces. The high-resolution surface temperature observations from ECOSTRESS are at the spatial scale of commercial intertidal aquaculture of mussels, oysters, and clams, all of which suffer mass mortality during heat waves. Since ECOSTRESS can resolve temperature differences at hectare scales, it provides a means for predicting differences in shellfish harvest and mortality among individual aquaculture plots, in a manner similar to the between-field differences in thermal stress it can provide for terrestrial agriculture. It also provides a preview of the opportunities afforded by new missions like TRISHNA, beginning in late 2024.
The work presented here aims to further the development of high-resolution products, namely Urban Heat Islands (UHI), Thermal Discomfort (TD) and raw Land Surface Temperature (LST), for use primarily in the urban environment but also for green space assessment. Current products have the fundamental proof of concept established, but require further effort to create viable services for user groups and decision makers.
A key objective was to provide the necessary robustness in satellite-derived urban heat products for both scientific and commercial users. In this work, a refinement of the University of Leicester Optimal Estimation (OE) retrieval algorithm, used and refined in the ESA TIR-TRP study, has been adapted to the ASTER and Landsat 8 satellite data records. This methodology not only delivers the long data record achievable from Landsat, giving urban planners and the health sector, for example, the underpinning data needed to evidence justifications for proposed mitigation and adaptation measures, but also leads into future missions such as LSTM. Building on the methods and user interactions of the DUSTI project enables the construction of a framework within which LSTM and other high-spatial-resolution thermal infrared satellite missions can provide operational and targeted resources to end users.
Wildfires in the western United States produce significant social and economic impacts, and the fire season has been observed to shift earlier and to increase in frequency and severity with climate change. Wildfire burn severity is influenced by the availability of fuels (vegetation) to burn, as well as by the flammability of the fuels, which in turn is affected by environmental stressors such as drought. Understanding the role that antecedent vegetation water stress has on the spatial pattern of burn severity is therefore important for enhancing the predictability and monitoring of wildfires. The ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) was launched in 2018 and provides high spatial (70 m) and temporal (1- to 5-day revisit) information on vegetation water stress, including evapotranspiration, the evaporative stress index and plant water use efficiency. I discuss using ECOSTRESS data to characterize vegetation water stress preceding four major wildfires which occurred in the Southern California Geographic Area Coordination Center (GACC) in 2020. We use both long-term (annual) and short-term (growing season) hydrological indicators of vegetation stress in the year preceding the fire, as well as information on topography, and employ these in a random forest modeling approach to predict the spatial patterns of burn severity. We find that burn severity predictability is enhanced in regions with more severe topography (high elevation and steeper slopes). We also find differences in the relationships between vegetation water stress and burn severity depending on vegetation type. Burn severity for evergreen needleleaf forests is mostly explained by vegetation water stress in the preceding year, whereas for grasslands, less-stressed plants and higher values of evapotranspiration are the most important predictors, consistent with the notion that enhanced grass growth increases fuel amount. Our results indicate the potential for predicting the spatial variability of wildfire burn severity using high-resolution remote sensing of vegetation water stress, and provide a framework for the application of the upcoming Surface Biology and Geology (SBG) mission to wildfire monitoring.
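A minimal sketch of the kind of random forest modelling described above; the predictor layout, file names and hyperparameters are assumptions, not the study's configuration:

```python
# Illustrative random forest relating pre-fire stress indicators and
# topography to burn severity, per pixel.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical per-pixel predictors: annual/growing-season ET, evaporative
# stress index, water use efficiency, elevation, slope; target: burn severity.
X = np.load("predictors.npy")     # hypothetical file, shape (n_pixels, n_features)
y = np.load("burn_severity.npy")  # hypothetical file, shape (n_pixels,)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)

print("R² on held-out pixels:", r2_score(y_te, rf.predict(X_te)))
print("feature importances:", rf.feature_importances_)
```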
Land surface temperature (LST) and latent and sensible heat fluxes are strong indicators of warming climate trends; they are affected by rising greenhouse gas (GHG) concentrations and influence Earth's weather and climate patterns, predominantly through the reduction of energy exiting Earth's atmosphere, which results in an increased energy budget. A key objective for the UN Framework Convention on Climate Change (UNFCCC) is to investigate how Earth observations from space could support the UNFCCC and the Paris Agreement in closing Earth's energy budget imbalance. Improving global LST observations from satellite data, in aid of improving climate warming predictions, is crucial to fulfilling this.
All-sky LST observations are required for many climate applications. Clear-sky bias is a key problem with infrared (IR) observations and a challenge for climate science, while the lower accuracy and spatial resolution of microwave (MW) LSTs can also be an issue, particularly as observations are required at increasingly higher resolution for model simulations. For these applications, a combination of IR and MW LSTs remains a key step forward. We will use information on the differences between the validation and inter-comparison products to correct the least accurate LST product. We will also use the relationship between LST and land surface air temperature (LSAT) to improve our understanding of the clear-sky bias.
Through a PhD project within the National Centre for Earth Observation (NCEO), interfacing with the ESA Climate Change Initiative Land Surface Temperature project, we aim to better understand the diurnal variability in global LST. This will be achieved by creating the first fully integrated all-weather LST dataset that can be evaluated against climate models and other temperature datasets. Here I will show some first results on the merging of these LST data.
TRISHNA is an Indian-French high spatio-temporal resolution satellite which will provide users with global surface temperature measurements at local scale, for better monitoring of the water cycle.
The TRISHNA satellite carries both an innovative multi-channel thermal infrared instrument and a visible and short-wave infrared instrument, which will scan the entire Earth surface every 3 days. TRISHNA's scientific objectives are linked to ecosystem stress and water use (better management of water resources), coastal and inland waters (water quality, fish resources, sea ice), urban microclimate monitoring (characterization of urban heat islands), solid earth (detection of thermal anomalies), the cryosphere (monitoring of snow and ice) and the atmosphere (water content, cloud characterization).
The radiance in the TIR atmospheric window depends on the temperature and emissivity of the surface being observed. The retrieval of surface temperature (T_s) and emissivity from multispectral measurements is an underdetermined (ill-posed) problem. Indeed, the total number of measurements available (N bands) is always less than the number of variables to be solved for (emissivity in N bands plus one surface temperature).
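In standard radiative-transfer notation (a generic statement of the problem, not the TRISHNA processing baseline), the atmospherically corrected surface-leaving radiance in band i is

    L_(s,i) = ε_i B_i(T_s) + (1 - ε_i) L↓_i,   i = 1, ..., N,

where B_i is the band-integrated Planck function and L↓_i the downwelling sky radiance: N equations for the N+1 unknowns (ε_1, ..., ε_N, T_s).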
The Temperature Emissivity Separation (TES) algorithm was initially developed by the ASTER Temperature Emissivity Working Group (TEWG) in order to efficiently tackle the issue of surface temperature/emissivity separation. TES is a hybrid algorithm that capitalizes on the strengths of previous algorithms, especially the Normalized Emissivity Method (NEM) and the Minimum-Maximum Difference (MMD) algorithm. In particular, the TES algorithm is based on the hypothesis that, over N_B ≥ 3 channels in the TIR domain, the TIR emissivity spectrum of a natural surface contains at least one value close to unity.
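For reference, the empirical closure that makes the retrieval solvable in the original ASTER implementation is the regression between the minimum emissivity and the spectral contrast,

    εmin = 0.994 - 0.687 · MMD^0.737,   with MMD = max_i(ε_i) - min_i(ε_i)

(coefficients as published for ASTER; the TRISHTES variant introduced below recalibrates this relationship per scene class).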
In the context of the TRISHNA mission, we propose a new version of the TES algorithm: TRISHTES. TRISHTES is based on the fact that T_s can be expressed from the theoretical surface-leaving radiance L_(s,i) for each band i, obtained after atmospheric correction, and that multiple TRISHTES MMD relationships can be calibrated for multiple classes of observed scenes. Each of these calibrations is associated with an emissivity dataset whose characteristics differ from those of the other datasets, and is applied depending on the class of the observed scene. We defined four classes: a first "greybody" class, characterized by spectra with εmin > 0.95; a second class that groups the spectra classically characterizing the (εmin; MMD) relationship for natural surfaces; a third class that groups spectra following a distribution similar to the classical one, but with lower εmin; and a fourth class containing man-made spectra. Operationally, a first guess of the emissivity derived from TRISHNA VSWIR data is used to classify each pixel into one of the four classes.
The TRISHTES algorithm is currently considered to become the operational algorithm for TRISHNA temperature and emissivity retrievals. TRISHTES improves performance over the original TES method by up to a factor of 2, and improves the performance of TES on greybodies, which include surfaces of interest for the TRISHNA mission such as dense vegetation and water bodies. Moreover, results show that retrieval performance is better than that of other methods (such as split-window methods) because of TRISHTES' lower sensitivity to uncertainties in the algorithm initialization.
Generation of long-term Land Surface Temperature (LST) series from Thermal InfraRed (TIR) sensors on board polar orbiting or geostationary satellites has usually been based on the application of Split-Window (SW) techniques. SW algorithms over land also require as input a correction for the surface emissivity, usually estimated from vegetation indices or classification-based approaches using Visible and Near-Infrared (VNIR) bands. These algorithms have systematically been used because of the dominant spectral configuration of most low-resolution Earth Observation sensors, with only two bands in the 10.5-12 µm spectral region.
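A typical generic SW formulation (illustrative; the coefficients c_0...c_6 are sensor-specific and fitted to radiative-transfer simulations) is

    LST = T_i + c_1(T_i - T_j) + c_2(T_i - T_j)^2 + c_0 + (c_3 + c_4·W)(1 - ε) + (c_5 + c_6·W)·Δε,

where T_i and T_j are the brightness temperatures in the two split-window channels, W the total column water vapour, ε the mean channel emissivity and Δε the channel emissivity difference; the emissivity terms are where the vegetation-index or classification-based estimates enter.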
However, LST retrieval from SW algorithms and surface emissivity estimations from classifications and/or vegetation indices may be problematic in some landscapes because emissivity of land surfaces is heterogeneous and is dependent on many factors such as soil moisture and surface compositional changes which are not characterized by land cover maps. A reduction in LST uncertainty due to improved emissivity knowledge could be beneficial for long time series data if the accuracy of the joint retrieval of temperature and emissivity could be verified.
In the framework of the ESA LST Climate Change Initiative (CCI) project we propose the application of the Temperature and Emissivity Separation (TES) method, which combines TIR data in different spectral bands, providing both LST and Land Surface Emissivity (LSE) by solving the radiative transfer equation and thus reflecting the real conditions of the surface. The work proposed here is complementary to efforts currently undertaken by other entities such as NASA. To this end, the whole Moderate Resolution Imaging Spectroradiometer (MODIS) database will be processed with the TES algorithm in order to generate LST and LSE Essential Climate Variable (ECV) products, which can be useful for global trends and for local-scale LST climate applications, such as urban areas, agricultural land, or semi-arid areas. An additional benefit of the computed LSE product is the retrieval of global maps which can be used as input to the classic SW algorithms for the generation of long-term LST series.
Finally, these retrievals will be compared to the classical SW approaches and validated against in situ measurements to assess their feasibility and performance. Other global TES products will also be included in the comparison.
Radiometric surface temperature (Tr) obtained from thermal infrared (TIR) remote sensing is routinely used as a surrogate for aerodynamic temperature (T0) in the surface energy balance (SEB) models used for mapping evaporation (E) and sensible heat (H) fluxes. However, the relationship between the two temperatures is both non-unique and poorly understood. While Tr corresponds to a weighted soil and canopy temperature as a function of radiometer view angle, T0 represents an extrapolated air temperature profile at an 'effective depth' within the canopy at which the sensible heat flux (H) arises. This depth is often referred to as the 'source-sink' height of the canopy, and at this point Tr and T0 can differ by several degrees. As a result, using them interchangeably could lead to large errors in evaporation flux estimates, particularly in arid and semiarid climates. The most common approaches adopted in SEB models to accommodate the inequality between Tr and T0, such as the 'kB-1 (excess resistance)' approach used in one-source models and the contrasting empirical parameterizations of aerodynamic conductance used in two-source models to segregate the soil and canopy component temperatures, very often raise questions about their theoretical soundness.
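For context, a worked statement of the one-source bulk formulation in which the excess resistance appears (standard flux-gradient notation, not the STIC equations):

    H = ρ c_p (T0 - Ta) / r_ah,

and substituting the radiometric Tr for the unobservable T0 requires adding an excess resistance r_ex = kB^-1 / (k u*), with kB^-1 = ln(z_0m / z_0h), which in practice is often prescribed as a constant despite its known variability.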
The present study uses the analytical evaporation model STIC (Surface Temperature Initiated Closure) to demonstrate a direct retrieval method for T0 that enables an investigation of the aerodynamic versus radiometric surface temperature paradox for a broad spectrum of ecohydrological regimes. T0 retrieval through STIC, forced with Tr from ESA CCI+ land surface temperature products and in-situ meteorological datasets, was evaluated against an inverted T0 retrieved from direct flux observations in water-limited (arid and semi-arid) and energy-limited (mesic) ecosystems in Australia from 2011 to 2018.
Comparison of STIC T0 versus inverted T0 revealed a significant positive association (correlation coefficient r = 0.76 - 0.88, p < 0.05) with a heteroscedastic pattern in the semiarid ecosystems, and the differences between the two consistently increased with increasing H. The difference between STIC-derived T0 and inverted T0 was significantly correlated with the product of wind speed and the surface-air temperature difference in the arid and semiarid ecosystems (r = 0.40 - 0.50, p < 0.05). This implies that assuming a constant kB-1 does not adequately capture the expected variation in flux-inverted T0. In the arid and semiarid ecosystems, declining canopy-stomatal conductance and evaporative fraction in response to increasing atmospheric vapor pressure deficit (VPD) led to an increase in sensible heat flux and a simultaneous increase in aerodynamic conductance and air temperature (Ta). Strong vegetation-atmosphere coupling due to high aerodynamic conductance restricts the Tr-Ta difference, which is compensated through an increasing T0, thus increasing the Tr-T0 differences. Our study indicates the possible existence of biophysical homoeostasis depending on the canopy-stomatal conductance response to VPD and on vegetation characteristics, and the inequality of T0 versus Tr appears to evolve largely as a consequence of this homoeostasis for a given fractional canopy cover. The reshaping of the Tr-Ta difference due to surface temperature homoeostasis is a thermoregulation mechanism by which vegetation survives in water-scarce environments.
A method for detecting volcanic ash from Sentinel-1 C-band satellite data.
Among natural hazards, volcanoes represent one of the most dangerous for both people and the surrounding environment. Hundreds of eruptions are recorded each year, often putting people at serious risk and causing enormous economic and environmental damage. Mt. Sakurajima is an active volcano in Japan. The main aim of this study is to detect the spatial distribution pattern of volcanic ash and, in addition, to study the relationship between the ash present around the volcanic area and the spectral indicators Normalized Difference Vegetation Index (NDVI) and Land Surface Temperature (LST) derived from Landsat 8 satellite data for Mt. Sakurajima. A technique for improved detection of volcanic ash has been developed that utilizes the coherence of interferometric pairs of Sentinel-1 C-band data together with NDVI and LST from Landsat-8. In addition, we investigated a multi-temporal approach in order to map the volcanic ash accurately, since wind can cause decorrelation. We statistically analyzed the temporal behavior of coherence and identified anomalies. The results are encouraging for the future development of a new empirical model, in combination with data from forecasting models.
Passive microwave observations can be used to gather information on the atmosphere and the Earth's surface that has proved to be very valuable, especially under cloudy skies.
These observations are assimilated as part of global numerical weather prediction systems, to constrain surface geophysical parameters or estimate atmospheric profiles. They can also be used to directly estimate various parameters such as land surface temperature, especially for 'all-weather' estimates.
In these different usages, a contribution from the surface has to be taken into account in the radiative transfer equation.
In most situations where the observations are performed above moist soils, the layer contributing to the microwave signal can be considered to be a skin layer, comparable to the one seen by infrared instruments in clear sky.
However, in some arid areas, the radiation penetration depth can be larger than a wavelength. Indeed, the attenuation of the microwave radiation described by the soil dielectric properties can be very low, especially at lower frequencies. Therefore, the emitting layer of the microwave signal can be deeper in the sub-surface.
The diurnal variation of land surface temperature is propagated by conduction into the sub-surface. This heat propagation is described by the Fourier diffusion equation and leads to differences of up to 20 K between the surface temperature and the temperature of the emitting layer at depth.
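As a worked form of that statement: for a homogeneous half-space with thermal diffusivity κ, the diffusion equation ∂T/∂t = κ ∂²T/∂z² forced by a sinusoidal surface cycle of amplitude ΔT has the classical damped-wave solution

    T(z, t) = T̄ + ΔT e^(-z/d) sin(ωt - z/d),   with damping depth d = √(2κ/ω),

so an emitting layer even a fraction of d below the surface sees a strongly attenuated and phase-shifted diurnal cycle.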
This surface-versus-depth discrepancy has been noted by several studies comparing land surface temperatures estimated from microwave observations with those estimated from infrared observations.
The Global Precipitation Measurement (GPM) Microwave Imager (GMI) is a passive microwave imager with channels between 10 and 183 GHz. Its main difference from other microwave imagers is its non-sun-synchronous orbit, which provides measurements of the Earth's surface at all times of day. Multiple observations can be combined to create diurnal cycles of brightness temperatures.
A method combining the observed microwave brightness temperatures, the soil temperature profile and the atmospheric contribution can be used to simultaneously estimate the emissivities in both polarizations and the emitting depth at different frequencies. The modelling of the soil temperature profile relies on a prescribed land surface temperature diurnal cycle, and the atmospheric contributions are derived from temperature and humidity atmospheric profiles, both based on ERA5 reanalysis data. These are collocated with the diurnal cycles of brightness temperatures observed by the GMI instrument over its lifespan (2015-2021).
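A minimal sketch of such a joint fit, assuming the damped-wave subsurface model above and neglecting the atmospheric terms for brevity (all values synthetic and illustrative, not the operational processing):

    # Toy joint estimation of emissivity and emitting depth from a diurnal
    # cycle of brightness temperatures; atmospheric terms are omitted and the
    # LST cycle is prescribed as a pure sinusoid (illustrative only).
    import numpy as np
    from scipy.optimize import least_squares

    omega = 2.0 * np.pi / 86400.0          # diurnal angular frequency (rad/s)
    t = np.linspace(0.0, 86400.0, 24)      # observation times over one day (s)
    t_mean, t_amp = 305.0, 15.0            # prescribed LST diurnal cycle (K)

    def model(params):
        emis, z_over_d = params            # emissivity, depth in damping-depth units
        t_emit = t_mean + t_amp * np.exp(-z_over_d) * np.sin(omega * t - z_over_d)
        return emis * t_emit               # TB ~ emissivity x emitting-layer temperature

    tb_obs = model([0.92, 1.2]) + np.random.default_rng(0).normal(0.0, 0.3, t.size)
    fit = least_squares(lambda p: model(p) - tb_obs, x0=[0.85, 0.5],
                        bounds=([0.5, 0.0], [1.0, 5.0]))
    emis_hat, depth_hat = fit.x            # recovers ~0.92 and ~1.2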
Using the collocated dataset, monthly estimates of the emitting depth and the emissivities have been obtained over arid areas on a global scale.
These estimations of the emitting depth at frequencies between 10 and 89 GHz can be used to build dielectric properties maps of arid areas, providing new insights on the spatial distribution of some geological features such as sandy areas. These results can be used to derive a correction of the difference between the emitting depth temperature and the skin temperature at any time of the day between 10 and 90 GHz for all arid areas.
This correction could be useful to make the estimates of land surface temperature based on passive microwaves more reliable over arid areas.
Forests can decrease and buffer extreme daytime surface temperature through evapotranspiration and other pathways of conversion and storage of solar energy. Buffering of extreme temperatures in both urban and rural areas is an ecosystem service, benefiting both human wellbeing and wildlife habitat suitability. However, little is known about how restoration of forests affects the rate, timing, and amount of temperature buffering.
Our study assessed the effects of forest restoration on the rate, timing, and strength of thermal buffering capacity for two groups of restoration sites in Southern Ontario, Canada. The two groups of sites were ~130 km apart and were restored from agriculture towards forest by two different organizations between 2007 and 2019. We used 29 Land Surface Temperature (LST) and 9 Evapotranspiration (ET) image data products from the ECOSTRESS thermal imager captured during the 2020 growing season. ECOSTRESS is useful for monitoring restoration and conservation projects with a higher temporal frequency and variation than is available from Landsat satellites; many of these sites and projects are also too small and fragmented for the spatial resolution of the MODIS and Sentinel thermal imagers. We compared restoration sites (total n = 43) with paired mature forest sites (n = 43), representing the post-restoration state, nearby agriculture sites (n = 20), representing the pre-restoration state, and suburban residential sites (n = 8), representing a common alternative land use for abandoned farmland. Temperature measurements for all site types were taken relative to that of the largest protected mature forest in the area.
We found that the temperature difference between all site types peaked in the early afternoon (1 – 3 pm) for both groups of sites. We found significant differences between restoration sites and both agriculture and residential sites in both groups. Between 12 and 4 pm, restored sites were 8.3 ± 3.8 ℃ cooler than residential sites and 5.4 ± 2.3 ℃ cooler than agricultural sites. Mature forests and restoration sites were not significantly different in either group of sites. Temperature variability over the 24-hour diurnal cycle, measured as the standard deviation (s) relative to a large, protected forest control site, was not significantly different between mature forests (s = 1.6 ± 0.8 ℃) and restoration sites (s = 2.7 ± 1.1 ℃). In contrast, restoration sites and mature forests had significantly less variability in relative diurnal temperature than agriculture (s = 4.4 ± 1.2 ℃) and residential sites (s = 4.3 ± 1.3 ℃). We found that daytime temperature decreased significantly, by 0.1 ℃, or 3.1%, per year since restoration for one of the groups of sites relative to nearby mature forest sites. We also characterized the absolute and relative ET dynamics of sites in one of the groups. We found that younger restoration sites have a higher overall ET than older ones, with a significant daytime relative instantaneous ET decrease of 0.8 W/m2, or 5%, per year for sites 1 to 14 years old.
Improving our understanding of the timing and capacity of restored forests to buffer extreme temperatures is essential to better utilize and promote forest restoration as a tool for local climate change adaptation. Creating a semi-automatic GIS tool for restoration and conservation managers to monitor and assess changes in thermal buffering at their sites, based on ECOSTRESS and other thermal imagery, would provide another strong and easy-to-grasp argument when reporting to funders and in public outreach.
The NASA Surface Biology and Geology (SBG) mission slated for launch in early 2028 is a core component of NASA's new Earth System Observatory to improve our understanding of vegetation processes, aquatic ecosystems, urban heat islands and public health, snow/ice, and volcanic activity. SBG will include both a visible to shortwave infrared spectrometer (VSWIR) and an infrared radiometer including two midwave infrared (MIR: 3-5 micron) and five thermal infrared (TIR: 8-12 micron) bands. Here, we leverage the SBG bands for three key objectives: (1) To evaluate the performance of a suite of algorithms for detecting high-temperature phenomena (hotspots) such as lava flow and wildfires at a spatial resolution of 60 m. (2) To model lava/fire properties such as temperature distribution, area, and Fire/Volcano Radiative Power at a sub-pixel scale. (3) To examine how the inclusion of the 4.8 micron MIR band can improve the detection of temperatures (and the corresponding hot area fraction) within the 500-800 K range.
We approach this by modeling the at-sensor SBG radiances using the spectral response functions and instrument noise model, combined with high-resolution airborne data from HyTES over two sample sites, using MODTRAN. The following regions form our sample sites: (a) a small fire in Arizona with a thermal range of 400-800 K, and (b) lava flow on Kilauea, Hawaii, encompassing a thermal range of 600-1200 K. We then apply three types of algorithms to estimate sub-pixel lava/fire temperature and the corresponding fraction. First, we test the Normalized Temperature Index (NTI) method and determine the NTI detection thresholds for SBG. Second, we implement well-established dual- and multi-component modeling algorithms, where we invert the Planck function for different combinations of VSWIR, MIR, and TIR band observations to compute sub-pixel thermal components (temperatures) and their fractional areas. Third, we also test the multiple endmember spectral mixture analysis (MESMA) algorithm to determine the relative areas of predetermined thermal components. We conclude by comparing the accuracy of each algorithm in replicating the sub-pixel thermal distribution and quantifying their limitations at different noise levels.
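As a worked illustration of the dual-component step, a Dozier-style two-band retrieval of sub-pixel hot-source temperature and area fraction; the band centres, assumed background temperature and synthetic fire are placeholders, not the SBG configuration:

    # Two-component retrieval: solve L_i = f*B_i(T_hot) + (1-f)*B_i(T_bg)
    # in a MIR and a TIR band for the hot fraction f and temperature T_hot.
    import numpy as np
    from scipy.optimize import fsolve

    H_PL, C_L, K_B = 6.626e-34, 2.998e8, 1.381e-23

    def planck(wl_um, temp_k):
        # Spectral radiance B(lambda, T) in W m-2 sr-1 um-1
        wl = wl_um * 1e-6
        return 1e-6 * 2.0 * H_PL * C_L**2 / (
            wl**5 * (np.exp(H_PL * C_L / (wl * K_B * temp_k)) - 1.0))

    wl_mir, wl_tir = 4.8, 11.0            # placeholder band centres (um)
    t_bg = 300.0                          # assumed background temperature (K)
    t_true, f_true = 700.0, 0.02          # synthetic sub-pixel fire
    l_obs = [f_true * planck(w, t_true) + (1 - f_true) * planck(w, t_bg)
             for w in (wl_mir, wl_tir)]

    def residual(p):
        t_hot, f = p
        return [f * planck(w, t_hot) + (1 - f) * planck(w, t_bg) - l
                for w, l in zip((wl_mir, wl_tir), l_obs)]

    t_hot, f_hot = fsolve(residual, x0=[600.0, 0.05])
    print(t_hot, f_hot)                   # recovers ~700 K and ~0.02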
The rate at which global climate change is happening is arguably the most pressing environmental challenge of the century, and it affects our cities. Temperature is one of the most important parameters in climate monitoring and Earth Observation (EO) systems, and advances in remote sensing science increase the opportunities for monitoring the surface temperature from space.
The EO4UTEMP project examines the exploitation of EO data for monitoring the urban surface temperature (UST). Large variations in surface temperature can be observed within a couple of hours, particularly over urban surfaces. The geometric, radiative, thermal, and aerodynamic properties of the urban surface are unique and exert particularly strong control on the surface temperature. EO satellites provide excellent means for mapping the land surface temperature, but the particular properties of the urban surface and the unique urban geometry, in combination with the trade-off between temporal and spatial resolution of current satellite missions, necessitate the development of new, sophisticated surface temperature retrieval methods specifically designed for urban areas.
EO4UTEMP exploits multi-temporal, multi-sensor, multi-resolution EO data for UST retrieval at local scale (100 m), capable of resolving the diurnal variation of UST and contributing to the study of the urban energy balance. In the first phase of the EO4UTEMP project, information from multi-source satellite data was used to estimate parameters related to the geometric, radiative, thermal, and aerodynamic properties of the urban surface. Very high spatial resolution imagery (SPOT5) was used to derive static land cover fractions. Very high resolution Digital Surface Models (DSMs) were used to derive the 3D (three-dimensional) city information. Parameters like the sky-view factor, the canyon aspect ratio, the plan area index and the frontal area index were derived. The impact of those parameters on UST was assessed using time series of LST (land surface temperature) and emissivity products from ECOSTRESS. The findings from the EO4UTEMP project will be used to improve the emissivity estimation for accurate UST retrieval from high spatial resolution missions. Downscaling approaches will then be applied to retrieve accurate UST from low spatial resolution missions, to achieve high spatio-temporal resolution UST.
Land Surface Temperature (LST) is a parameter related to multiple Earth surface processes. For example, some of the most well-known applications are in vegetation studies aiming to understand the role of LST in the evapotranspiration process, or in studies of the surface temperature of the oceans (Sea Surface Temperature – SST) trying to comprehend the energy exchanges between oceans and atmosphere and their impact on climate. This parameter is often analyzed through ground data, but it can also be observed by means of remote sensing. LST monitoring with remote sensing data is also of great importance in the cryosphere field, as it allows us to better understand the energy exchanges between the atmosphere and the snow-covered or glaciated surfaces of Earth on a larger scale. Snow surfaces have a relevant role in the global energy balance of the planet since, due to their bright color, they reflect a large fraction of the incident radiation back to the atmosphere, thus slowing the melting of mountain glaciers, seasonal snow, polar ice caps and sea ice.
Snow processes and metamorphism are very sensitive to air temperature changes. A variation from 0 °C to 1 °C can trigger the beginning of snow melt. During this melting process snow grains undergo changes in size and shape, and thus in their capacity to reflect the incident radiation; in other words, the albedo changes. In this sense, studying the relation between LST and snow grain size helps us to better understand how the variations of these two parameters are correlated.
As many studies have demonstrated, snow albedo is a very relevant parameter for many Earth processes, such as the Earth's energy balance. At the hydrological basin level, it can influence the conditions and timing with which snow releases fresh liquid water during the melting season. Thus, accurate knowledge of snow albedo is extremely important to better understand the many subsystems that depend on the seasonal snow cycle, such as vegetation and fauna, but also economic sectors such as hydropower and agriculture.
Within the frame of the ESA Alpine Regional Initiative project AlpSnow (2020-2022), we aim at developing snow albedo and snow grain size retrieval methods using two different approaches, proposed by Painter et al. (2009) and Kokhanovsky et al. (2019). The first is an empirical approach based on spectral indices; the second is a physical approach. Both approaches are applied to Sentinel-3 OLCI satellite data. To test both algorithms, a short time series has been analyzed from the beginning of the 2018 hydrological year until the melting season of 2021. For grain size, the comparison between ground data and satellite estimates indicates a high representativeness of the class with low grain size values. This is especially evident in January and February. In these months, the in-situ measurements also show large grain sizes under exceptionally dry snow conditions. Indeed, it is known that the snow temperature gradient can change grain shape and size, with mass transfer from warmer to colder grains causing grain growth and typically forming faceted and surface hoar grains (Colbeck, 1983; 1989). In March, the snow grain sizes are quite variable, without any clear trend, while in April the satellite estimates show a high percentage (around 85%) of large grain size values. To further assess the behavior of snow grain size, the satellite estimates were compared with LST obtained from both ground measurements (available from snow pits) and satellite imagery (MODIS and ECOSTRESS). The comparison indicates a strong relationship between the grain size evolution from winter to spring and LST changes, thus clearly revealing the aging process (as shown in Figure 1).
In this direction, LST can be seen as a relevant parameter for understanding snow grain size metamorphism (and consequently albedo changes) and, given this strong relationship, as a kind of predictor of the evolution of the snowpack, especially during the melting phase.
In the presentation, we will show the results obtained by the two proposed algorithms for albedo and grain size, exploiting Sentinel-3 OLCI imagery from 2018 to 2021. Moreover, we will show and discuss the correlation of grain size variability with LST on both temporal and spatial scales.
References:
Colbeck, S. C. (1983): Theory of metamorphism of dry snow. J. Geophys. Res. 88, 5475–5482.
Colbeck, S. C. (1989): On the micrometeorology of surface hoar growth on snow in mountainous area. Boundary Layer Meteorol. 44, 1–12.
Kokhanovsky, A., M. Lamare, A. Danne, C. Brockmann, M. Dumont, G. Picard, L. Arnaud, V. Favier, B. Jourdain, E. Le Meur, B. Di Mauro, T. Aoki, M. Niwano, V. Rozanov, S. Korkin, S. Kipfstuhl, J. Freitag, M. Hoerhold, A. Zuhr, D. Vladimirova, A.-K. Faber, H.C. Steen-Larsen, S. Wahl, J.K. Andersen, B. Vandecrux, D. van As, K.D. Mankoff, M. Kern, E. Zege, and J.E. Box (2019): Retrieval of Snow Properties from the Sentinel-3 Ocean and Land Colour Instrument. Remote Sens., 11, DOI:10.3390/rs11192280.
Painter, T.H., K. Rittger, C. McKenzie, P. Slaughter, R.E. Davis, and J. Dozier (2009): Retrieval of subpixel snow covered area, grain size, and albedo from MODIS. Remote Sens. Environ., 113(4). DOI:10.1016/j.rse.2009.01.001.
Since Land Surface Temperature (LST) is a key variable for monitoring the Earth climate system, the World Meteorological Organization regards it as an essential climate variable. The Global Climate Observing System (GCOS) recommends an uncertainty threshold for satellite-retrieved LST of ±1 K for accuracy (i.e. systematic uncertainty) and ±1 K for precision (i.e. random uncertainty).
The Sea and Land Surface Temperature Radiometer (SLSTR) is on board the Sentinel-3A and Sentinel-3B spacecraft, which were launched in February 2016 and April 2018, respectively. Here we propose an explicitly angular and emissivity-dependent split-window algorithm (SWA) for LST retrieval from SLSTR data. The SWA coefficients were obtained using the Cloudless Land Atmosphere Radiosounding (CLAR) database, and the retrieved LST and the Sentinel-3A SLSTR LST operational product (baseline collections 003 and 004) were validated over the Valencia rice paddy site against in-situ LST measurements acquired between 2016 and 2020.
Due to rice phenology, the validation site changes its land cover over the year, i.e. it exhibits three different homogeneous land cover types: flooded soil in December, January and June; bare soil in February and March; and full vegetation cover from July to September. Thus, the rice paddy site allows us to validate the proposed SLSTR SWA over three different land cover types. An LST validation station at the site continuously records radiometric measurements. The station is equipped with two SI-121 Apogee radiometers, one looking downwards and one looking upwards; the latter is required to obtain the downwelling hemispherical radiance. The SI-121 radiometer measures radiance within the 8 – 14 μm spectral range and, based on manufacturer and laboratory calibrations, has an uncertainty of ±0.2 K over the relevant temperature range.
The proposed algorithm uses SLSTR radiances in the channels at 11 and 12 μm as well as water vapor content, which are both provided in the SLSTR Level 1 product (baseline 003). Furthermore, for this validation exercise we used known in-situ emissivities for each land cover. The validation results for the SWA LST showed an overall accuracy of -0.4 K and a precision of 1.1 K (median and robust standard deviation, respectively). For each surface, the accuracy (precision) was 0.0 K (0.6 K) for flooded soil, -0.2 K (0.9 K) for bare soil and -0.7 K (1.2 K) for full vegetation. For the same period, the operational SLSTR LST product had an overall accuracy (precision) of 1.3 K (1.3 K). Therefore, over the rice paddy site the explicit angular and emissivity-dependent SWA met the GCOS accuracy threshold of 1 K for the three land covers, while the precision threshold was met for bare soil and flooded soil. Our results agree with previous studies, e.g., Yang et al. (2020) and Zhang et al. (2019), in which LST retrieved with emissivity-dependent SWAs also performed better than the operational SLSTR LST product. However, the SWA proposed here is emissivity-dependent as well as angle-dependent; thus, the atmospheric effects for large viewing angles are better represented.
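For clarity, the accuracy and precision figures quoted above are the median and the robust standard deviation (1.4826 times the median absolute deviation) of the satellite-minus-in-situ differences; a small sketch with synthetic matchups:

    # Accuracy = median bias; precision = robust standard deviation (RSD).
    # The matchup arrays are synthetic placeholders for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    lst_insitu = 290.0 + 10.0 * rng.random(200)             # in-situ LST (K)
    lst_sat = lst_insitu - 0.4 + rng.normal(0.0, 1.1, 200)  # satellite LST (K)

    diff = lst_sat - lst_insitu
    accuracy = np.median(diff)
    precision = 1.4826 * np.median(np.abs(diff - accuracy))
    print(f"accuracy {accuracy:+.2f} K, precision {precision:.2f} K")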
A period of exceptionally heavy rainfall across many parts of East Africa from late 2019 to early 2020, followed by above-average rainfall throughout 2020, triggered devastating floods destroying livelihoods and displacing millions of people across the region, and led to a significant rise in water levels for several East African lakes.
South Sudan is perhaps the country hardest hit, severely affected by sustained flooding for more than two years now, exacerbating an already complex humanitarian emergency with an estimated 60% of the population facing acute food insecurity. The hydrology of South Sudan is complex, and several factors determine the spatial distribution and duration of flooding (or drought) in the country for any given season. The flat topography and the extensive floodplains made up of vertisol soils (virtually impervious following torrential rains) render substantial portions of the country prone to pluvial flooding. Furthermore, the White Nile and its tributaries, flowing through vast wetlands, can cause substantial riverine flooding often obscured by vegetation, which can lead to under-detection of flood- and wetland-affected areas when conventional methods based on optical or SAR satellite data are used.
In support of humanitarian decision making, an analysis based on an innovative processing of thermal data is performed to track the flood and wetland situation since 2019 over the full country with high temporal frequency. The full archive of MODIS Aqua thermal data is processed using pixel-optimized smoothing and gap-filling to derive dekadal flood and wetland extents by employing a dynamic thresholding technique. Combined with thermal data from Sentinel-3 for synoptic monitoring, and complemented by multi-spectral data from Sentinel-2 and Landsat-8 and SAR data from Sentinel-1, the LST-based analysis enabled timely and detailed analysis at various scales. In addition, an analysis of the history of seasonal flooding was carried out, highlighting the uniqueness of the current flood episode.
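A strongly simplified stand-in for the thresholding step (the operational processing is pixel-optimized and smoothed; here a single Otsu-style threshold separates the cooler water/wetland mode from the warmer dry-land mode of a dekadal daytime LST composite):

    # Flag flood/wetland pixels in a dekadal daytime LST composite: water
    # stays cooler by day, so threshold between the two LST modes.
    import numpy as np

    rng = np.random.default_rng(1)
    lst = np.concatenate([rng.normal(296.0, 1.5, 300),    # synthetic water/wetland (K)
                          rng.normal(310.0, 2.5, 700)])   # synthetic dry land (K)

    def otsu_threshold(values, bins=100):
        # Threshold maximizing between-class variance of the LST histogram
        hist, edges = np.histogram(values, bins=bins)
        p = hist / hist.sum()
        centres = 0.5 * (edges[:-1] + edges[1:])
        best_t, best_var = centres[0], -1.0
        for k in range(1, bins):
            w0, w1 = p[:k].sum(), p[k:].sum()
            if w0 == 0.0 or w1 == 0.0:
                continue
            m0 = (p[:k] * centres[:k]).sum() / w0
            m1 = (p[k:] * centres[k:]).sum() / w1
            between = w0 * w1 * (m0 - m1) ** 2
            if between > best_var:
                best_var, best_t = between, centres[k]
        return best_t

    flooded = lst < otsu_threshold(lst)    # boolean flood/wetland mask
    print(flooded.sum(), "of", lst.size, "pixels flagged")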
The increased use of land surface temperature (LST) in the assessment of energy and water transfers between Earth's surface and atmosphere has driven the development of ever more accurate estimations of LST from satellite observations. Numerous algorithms have been developed over the years, with diverse solutions to account for land surface emissivity and atmospheric effects in the LST estimation. However, one type of atmospheric correction still in need of improvement concerns the effects of aerosols. Although this effect is much less significant than that of water vapor, in the case of heavy aerosol loading the atmospheric transmissivity in the 10-12 µm region (a range typically used for LST retrieval) decreases considerably, which presumably affects the LST estimation.
This study analyses the impact of heavy dust aerosol loading on satellite LST retrievals by comparing SEVIRI and MODIS (MxD11 and MYD21) LST products with ERA5's skin temperature (SKT) across the Saharan desert, where abundant seasonal dust production and transport occurs. Reanalyses usually manifest a cold daytime bias when compared to satellite observations; however, we show that the bias inverts to a marked warm bias in the studied area during summer months, in concurrence with the highest dust aerosol concentrations in ECMWF's fourth-generation atmospheric composition reanalysis (EAC4). Considering that high dust aerosol concentrations should not impact ERA5's SKT, this result indicates that the sensor-algorithm combinations analysed underestimate LST under conditions of heavy aerosol loading and thus need improvements regarding this atmospheric effect.
This analysis was complemented with comparisons against in situ measurements of LST in two locations in the southern region of the Saharan desert (Niamey, Niger during 2006 and Dahra, Senegal from 2009 to 2013). These provide additional evidence for the underestimation of satellite-based LSTs with higher dust aerosol loading.
Finally, detailed examination of the SEVIRI brightness temperatures used for the LST estimation reveals that the aerosol loading seems to affect the distribution of the brightness temperature differences between the 10.8 and 12 µm channels, which in turn has a significant impact on the atmospheric correction performed by the algorithms. This work was performed within the framework of EUMETSAT’s Satellite Application Facility on Land Surface Analysis (LSA-SAF) with the purpose of improving current LST retrieval methods.
This work concerns the feasibility study for a new EO multispectral space sensor, operating in the medium infrared, designed for applications to high-temperature events. The study was carried out in the framework of the ASI (Agenzia Spaziale Italiana) project SISSI (Super-resolved Imaging Spectrometer in the medium Infrared), aiming to improve the ground spatial resolution and mitigate saturation/blooming effects. The MWIR (Middle Wave Infra-Red) spectral region is crucial for several applications, ranging from biomass burning to geophysical phenomena. Multispectral observations in the MWIR are relevant for monitoring natural and anthropogenic hazards, in particular when performed at high spatial resolution. The SISSI payload is composed of 5 spectral channels in the range 3-5 µm with a GSD (Ground Sampling Distance) of about 15 m. Specifically, the channels are placed at 3.3, 3.5, 3.7, 3.9 and 4.8 µm with a FWHM (Full Width at Half Maximum) in the range 100-200 nm. The SISSI study could bring significant contributions to different scientific challenges: fire front and active burning area analysis; detection of trace gases emitted to the atmosphere by biomass burning; flaring event analysis; hot spot temperature estimation of lava flows; gas detection from volcanic summit craters; and detection and retrieval of greenhouse gases (CH4 and CO2) by exploitation of the gas absorption bands at 3.3 and 4.8 µm. Moreover, since the available satellite sensors operate mainly in the VNIR-SWIR (Visible and Near Infra-Red and Short Wave Infra-Red) and TIR (Thermal Infra-Red) spectral regions, the SISSI payload could offer the possibility to extend data acquisition to the MWIR spectral region. In particular, the SISSI project study aims to contribute to the following Scientific Challenges, defined in the document “ESA’s Earth Observation Science Strategy” (2015, ESA SP1329/1) in the framework of the “ESA’s Living Planet Programme” (2015, ESA SP1329/2):
Challenge A2 – Interaction between the atmosphere and Earth’s surface involving natural and anthropogenic feedback processes for water, energy and atmospheric composition;
Challenge L1 – Natural processes and human activities and their interaction on the land surface;
Challenge G1 – Physical processes associated with volcanoes, earthquakes, tsunamis and landslides in order to better assess natural hazards, volcanic thermal modelling and precursor analysis.
THERMOCITY: urban thermography from space
Satellite imagery is used to regularly measure the surface temperature of a city or urban area. THERMOCITY aims at studying urban heat islands (in summer) and heat losses (in winter) through the development of an urban thermography analysis tool based on satellite data, providing comprehensive information to city managers. A first phase is dedicated to the processing of the thermal imagery and a second to its interpretation, with constant involvement of the final users.
The first phase of the project involves identifying and improving recent spaceborne thermal data over our regions of interest, five main French metropolises: Marseille, Montpellier, Paris, Strasbourg and Toulouse. A dataset of about 10 images per city, based on ASTER and ECOSTRESS acquisitions, has been constructed. Particular attention has been paid to improving the geolocation and optimizing the emissivity/temperature separation based on the specific characteristics of the urban environment. This includes establishing uncertainty levels for all the products generated. One major problem of urban thermography from space is its limited spatial resolution for our aims. Advanced analysis techniques are therefore applied to the thermal images, which are combined with higher-resolution optical ones in order to improve their effective resolution. All the products generated in the frame of THERMOCITY are openly available through the French land data centre THEIA.
The second phase of the project focuses on the exploitation of the thermography data. Concerning heat losses, a dedicated tool is created to characterize the thermal signatures of known buildings, while a blind search is performed in parallel to look for unexpected heat losses. The second major subject concerns urban heat islands, which are more difficult to understand and characterise than heat losses. Two approaches are used: observation of the surface urban heat island in the thermal images, and modelling of the surface and air urban heat islands with an urban climate model. A cross-analysis of these two types of products is carried out in order to understand their advantages, disadvantages and potential synergy, with the final objective of their relevant use in urban planning.
Water bodies, such as lakes or large reservoirs, are considered of importance in the context of global change, and they are sensitive to climatological and meteorological conditions. Water temperature is one of the main parameters determining ecological conditions, influencing chemical and biological processes within a lake. Earth observation plays an important role in assessing and monitoring water characterization parameters like height, extent, and radiance; hence, this study focuses on the analysis of lake surface water temperature (LSWT), considering Issyk-Kul Lake (Kyrgyzstan, Central Asia), a very large (6,236 km2) and very deep (down to 668 m) lake, as a case study.
Time series analysis of the annual (2019/2020) variation of LSWT was carried out exploiting Sentinel-3 SLSTR, a medium-resolution (1 km) sensor, and the ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS), with a native resolution of ~70 m. Due to cloud coverage, exploitability varies from a few to tens of images per month. In addition, cloud masks will be applied to remove LSWT values outside the plausible range within the lake, for a better representation of the lake.
Areas of Interest (AOIs) were arbitrarily defined on the lake surface, roughly along the central west-east line and in accordance with scene availability for each date, to highlight potential tendencies in the spatial distribution of temperature. Sentinel-3 data for 2019 and 2020 show that during winter the LSWT is relatively homogeneously distributed, while in summer temperatures fluctuate slightly more. The minimum temperature of ~4°C is observed in January/February, with a rapid increase from March to May, reaching the maximum of ~23°C in August, and then dropping steadily and rapidly to around 10°C in October.
A validation campaign on Issyk-Kul Lake was carried out from 5 to 8 October 2021, using a Torrent Board carrying sensors to measure humidity, air temperature and skin water temperature; the temperatures registered during the day on those dates (10:00–17:00) ranged generally from 13°C to 16°C. Sentinel-3 showed a slight difference of 0.3°C against the Torrent Board sensors.
ECOSTRESS, unfortunately, only provided datasets from June to December 2019, showing a maximum temperature of 22°C in August and a minimum of 8°C in December. The LSWT analysis will be extended to ECOSTRESS data for 2020 to complete the intercomparison and observe the LSWT variability of the lake over these two years. The first intercomparison between Sentinel-3 and ECOSTRESS, after applying cloud masks for each date and product, showed that LSWT was lower for ECOSTRESS at each date, with differences from 1°C to 3°C.
Brazilian savanna, known as Cerrado, is the second-largest biome in Brazil. Deforestation and degradation caused by the expansion of agriculture and livestock have promoted a severe loss of biodiversity and an increase in fragmentation, as only about 3% of this ecosystem is fully protected in restricted conservation units. In this context, remote sensing techniques together with landscape analyses can provide insights to support the understanding of the landscape fragmentation process and of how it can affect changes in biomass over time. In this study, we test the hypothesis that changes in landscape metrics, including aggregation, area and edge, diversity and shape metrics (even small changes in these metrics), can have a negative impact on aboveground biomass (AGB), i.e. cause carbon stock loss, over time. To test this hypothesis, we selected the Rio Vermelho watershed, where, due to the intense historical fragmentation process, the remaining fragments of native vegetation are still facing vegetation loss over time. The area is composed of agricultural land with native vegetation fragments of grasslands, savanna and forest formations of the Brazilian Cerrado biome. We combined field inventory and LiDAR (light detection and ranging) data to estimate AGB, and landscape metrics to analyze changes in the landscape between 2014 and 2018. The relationship between landscape metrics and AGB was evaluated using a random forest (RF) model. Our results show that the local ecosystems dominated by forest and savanna formations presented considerable vegetation loss. Between 2014 and 2018, the average AGB loss of forest, savanna and grassland in the area exceeded 20%. Among them, the loss for savanna is the most pronounced, reaching a staggering 32%. The RF analyses showed that the landscape metrics (mean of patch area, coefficient of variation of patch area, mean shape index, mean of related circumscribing circle, Shannon's diversity index and Shannon's evenness index) explain about 11.07% of the changes in AGB. The mean shape index (%IncMSE = 48.1) and Shannon's evenness index (%IncMSE = 47.75) have the most apparent impact on AGB. These results show that most of the AGB loss is occurring in the remaining native vegetation fragments; its impact is therefore most severe for local-scale dynamics, as a consequence of degradation within fragments and the loss of connectivity among fragments over time.
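%IncMSE is the permutation importance reported by R's randomForest package; an equivalent check in Python might look as follows (synthetic stand-in data; the metric column names are illustrative abbreviations of the metrics listed above):

    # Permutation-importance check analogous to %IncMSE in R's randomForest,
    # relating landscape metrics to AGB change (synthetic data for illustration).
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    n = 500
    X = pd.DataFrame({m: rng.random(n) for m in
                      ["area_mn", "area_cv", "shape_mn", "circle_mn", "shdi", "shei"]})
    y = -0.5 * X["shape_mn"] - 0.5 * X["shei"] + rng.normal(0, 0.1, n)  # toy AGB change

    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
    imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
    for name, val in sorted(zip(X.columns, imp.importances_mean), key=lambda kv: -kv[1]):
        print(f"{name}: {val:.3f}")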
The mineralization of soil organic carbon (SOC) to carbon dioxide (CO2) is a key component of the carbon cycle. However, knowledge about the patterns of SOC mineralization in space, time and depth is still limited. Within the project Carbon4D we aim to develop near-real-time monitoring of SOC mineralization in space, time and depth for the Fichtel Mountains, a low mountain landscape in the south of Germany. Soil temperature and soil moisture are undoubtedly two main drivers of SOC mineralization. As a general pattern, SOC mineralization increases with temperature, whereas extremes of soil moisture (very wet or very dry) result in a decrease. Hence, to understand and model the spatial and temporal patterns of SOC mineralization across the landscape, we first aim to investigate the patterns of soil temperature and soil moisture. Ground truth data of soil temperature and soil moisture are obtained from 15 soil probes. The sensors measure both parameters in 10 cm increments down to one meter depth at 30-minute resolution. To cover several sites in the 400 km² study area, the probes are shifted monthly; thereby, around 300 different sites are captured. To model soil temperature and soil moisture continuously in space, time and depth, we apply machine learning algorithms in which the measurement data are related to remote sensing data, soil and topographic information as well as meteorological data. With this approach, the main drivers of soil temperature and soil moisture, as well as their patterns in all four dimensions, are identified and analysed. In this conference contribution we present and discuss first results of this modelling approach and the controlling factors of soil temperature and soil moisture. The predictions of soil temperature and soil moisture patterns will be employed in the future to model and study the patterns and controlling factors of SOC mineralisation.
Carbon captured via photosynthesis by vegetation is known as gross primary production (GPP). GPP is one of the main processes driving climate regulation, as well as being an important proxy for a range of ecosystem services, including the production of food, fibre and fuel. It is routinely estimated at global scales using different operational algorithms combining remotely sensed data from medium spatial resolution sensors and ancillary meteorological information. However, there is an urgent need for operational global-scale GPP products at finer (30 m) spatial resolution to better resolve plant-community-scale dynamics. High spatial resolution satellite information requires consistent mosaics and long time series, which are often plagued by data record gaps due to cloud contamination, radiometric differences across sensors, scene overlaps, and inherent sensor noise. In order to overcome these constraints, we have fused spectral data from Landsat and MODIS using the HIghly Scalable Temporal Adaptive Reflectance Fusion Model (HISTARFM) algorithm: the method produces monthly gap-free high-resolution (30 m) surface reflectance data at continental scales with associated well-calibrated data uncertainties. This allows us to carry out an uncertainty analysis considering both aleatoric uncertainty (data error) and epistemic uncertainty (model error) jointly. Combining the monthly high-resolution data with daily meteorological information and in-situ eddy covariance GPP estimates enables us to create accurate and continuous high spatial resolution GPP estimates, and their corresponding uncertainties, over large areas (Europe, the contiguous US, and the Amazon basin) using both empirical and machine learning approaches. The processing pipeline is implemented in Google Earth Engine to produce high-resolution, long time series of continuous GPP estimates across very broad spatiotemporal scales. The methodology enables more precise carbon studies and understanding of land-atmosphere interactions, as well as the possibility of deriving other carbon, heat and energy fluxes at an unprecedented spatio-temporal resolution.
Passive microwave vegetation optical depth (VOD) has been increasingly used for global vegetation monitoring in the last decade. It has, for example, been used to monitor global changes in phenology, vegetation health, vegetation water content/iso-hydricity, and biomass in time and space.
Compared to optical-based satellite vegetation data, VOD has a higher temporal frequency. The higher revisit frequency of the wide-swath satellite sensors, the independence from solar illumination and the limited sensitivity to cloud cover can strongly increase data coverage in some areas, although at the cost of higher retrieval errors and lower spatial resolution.
Therefore, VOD has recently been used as proxy for optical-based leaf area index (LAI) in regional data assimilation studies using land surface models (LSMs). The studies showed that VOD assimilation can improve both carbon-related and water-related land surface variables, like gross primary production (GPP), evapotranspiration (ET), or root zone soil moisture.
Current studies in the literature only consider the effect of biomass or LAI on VOD. However, it is well known that VOD is mainly sensitive to absolute vegetation moisture content. In the last few years, the effect of relative vegetation moisture content variations on VOD has been coming more into focus.
We therefore assimilated VOD into the Noah-MP LSM using a novel approach that takes not only dry biomass variations, but also vegetation moisture content variations into account. This is accomplished with an empirical model of VOD as a function of dynamically simulated LAI, soil moisture, and vapor pressure deficit as an observation operator.
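A sketch of what such an observation operator could look like; the functional form and coefficients below are illustrative placeholders, not the calibrated model used in the study:

    # Illustrative observation operator mapping Noah-MP states to X-band VOD:
    # a dry-biomass term driven by LAI, modulated by a moisture term driven by
    # soil moisture (sm) and vapor pressure deficit (vpd). The coefficients
    # a, b, c, d are placeholders that would be fitted against observed VOD.
    import numpy as np

    def vod_operator(lai, sm, vpd, a=0.05, b=0.25, c=0.4, d=0.15):
        moisture = 1.0 + c * sm - d * np.log1p(vpd)   # relative moisture proxy
        return (a + b * np.asarray(lai)) * np.clip(moisture, 0.1, None)

    # Model-equivalent VOD for one grid cell and time step:
    print(vod_operator(lai=3.0, sm=0.25, vpd=1.2))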
We evaluate this novel approach by assimilating X-band VOD from the VOD Climate Archive into the Noah-MP LSM over Europe in the years 2002 to 2010 at a 0.25° model resolution. The results are compared to an assimilation of X-band VOD using an approach from the existing literature, and to an assimilation of optical LAI from the Copernicus Global Land Service (CGLS), focusing on the improvements made in the representation of vegetation-related state variables and fluxes in the resulting dataset, especially GPP and ET.
National inventories of anthropogenic greenhouse gas emissions and removals are annual at best, uncertain, and miss essential components of the full national budgets. The assessment of the global CO2 budget by the Global Carbon Project is annual for the previous year (Friedlingstein et al. 2021) and only provides national details for fossil emissions. The global CH4 budget was analyzed at a four-year interval and extends until 2017 (Saunois et al. 2020). The first N2O budget was produced last year and extends until 2018. In the wake of the COVID pandemic, emissions are now rebounding. Yet, green stimulus packages and enhanced pledges should deliver significant emission reductions and enhanced carbon storage in some regions. Therefore, emissions and sinks of greenhouse gases are expected to change rapidly in the coming years, with contrasting trends between countries. To effectively monitor the fulfillment of emission reduction pledges in each country, more frequent observation-based assessments of national greenhouse gas budgets are needed to support national inventories. In addition to detailed coverage of managed lands, which are surveyed by inventories, complementary knowledge of natural fluxes over unmanaged lands and the oceans is also required to unambiguously reconcile the foreseen reductions of anthropogenic emissions with the observed growth rates of greenhouse gases in the atmosphere, and to assess the risk of missing climate targets, e.g. if natural sinks weaken in the future. I show in this presentation that existing systematic observations in the atmosphere and over the ocean and land surfaces can be integrated into near-real-time, policy-relevant greenhouse gas budgets to support the UN enhanced transparency framework of the Paris Agreement. A practical roadmap will be provided as well.
Increasing surface temperatures in the northern high latitudes due to climate change cause significant changes in the cryosphere. These changes are connected to changes in the biosphere, e.g. in the carbon uptake and release by vegetation. It was found that a trend towards earlier snow melt increased the gross primary production of boreal forest in spring during the last decades (Pulliainen et al. 2017). The current knowledge about these interactions is insufficient, and uncertainties are high in model predictions of how the carbon cycle and climate feedbacks will change in the northern latitudes in a changing climate. The project CryoBioLinks will investigate the relationship between cryosphere variables and the carbon uptake of vegetation using in situ and satellite observations. It will enhance and develop satellite proxies describing key variables of the vegetation carbon uptake in the northern high latitudes. ESA CCI snow cover, the SMOS soil freeze and thaw product (Rautiainen et al. 2016) and Sentinel-1 SAR will be utilized to provide information on the timing of snow melt and of soil thaw and freeze in spring and autumn. It has been shown that the timing of snow melt can be utilized as a proxy for the start of photosynthetic activity of boreal coniferous forest (Böttcher et al. 2014). Here, we will investigate the suitability of satellite-derived soil thaw in spring to inform on the start of the carbon uptake period. Due to the decrease of winter snow cover, especially in the southern boreal zone, soil thaw might become more relevant than the timing of snow melt for the beginning of photosynthesis in coniferous forest in the future. Thus, integrating information about snow melt and soil freeze may improve the robustness of current proxies for the start of the vegetation active period. The investigation will be carried out for selected sites in the boreal zone in Finland and Canada. Eddy covariance data will be utilized to determine the seasonal cycle of photosynthesis and the seasonal and annual integrals of gross primary production. The relationship between the cryosphere variables and the seasonal cycle of photosynthesis and seasonal and annual gross primary production will be analysed at the site level. The presentation will give an overview of the project and show first results of the site-level investigations for CO2 flux measurement sites in the boreal zone.
References:
Böttcher, K., Aurela, M., Kervinen, M., Markkanen, T., Mattila, O.-P., Kolari, P., Metsämäki, S., Aalto, T., Arslan, A.N., & Pulliainen, J. (2014). MODIS time-series-derived indicators for the beginning of the growing season in boreal coniferous forest — A comparison with CO2 flux measurements and phenological observations in Finland. Remote Sensing of Environment, 140, 625-638.
Pulliainen, J., Aurela, M., Laurila, T., Aalto, T., Takala, M., Salminen, M., Kulmala, M., Barr, A., Heimann, M., Lindroth, A., Laaksonen, A., Derksen, C., Mäkelä, A., Markkanen, T., Lemmetyinen, J., Susiluoto, J., Dengel, S., Mammarella, I., Tuovinen, J.-P., & Vesala, T. (2017). Early snowmelt significantly enhances boreal springtime carbon uptake. Proceedings of the National Academy of Sciences of the United States of America, 114, 11081-11086.
Rautiainen, K., Parkkinen, T., Lemmetyinen, J., Schwank, M., Wiesmann, A., Ikonen, J., Derksen, C., Davydov, S., Davydov, A., Boike, J., Langer, M., Drusch, M.T., & Pulliainen, J. (2016). SMOS prototype algorithm for detecting autumn soil freezing. Remote Sensing of Environment, SMOS special issue, 180, 346-360.
Plant primary production, defined as photosynthetic fixation of atmospheric CO2, plays a crucial role in the Earth's carbon fluxes. From local to global scales, e.g. at vegetation stands and landscapes, photosynthesis is referred to as gross primary productivity (GPP), often estimated from net CO2 exchange measurements using the eddy-covariance (EC) technique. EC measurements are the most established way of assessing GPP; however, they involve assumptions and an estimation step, since their primary product is net ecosystem production (GPP reduced by respiration). To accurately monitor and predict the Earth's carbon fluxes, a precise characterization of plant photosynthetic efficiency is essential. However, photosynthesis is a highly dynamic process that responds to changes in the environment in various ways on different spatial and temporal scales, from seconds to seasons, and small changes in photosynthetic efficiency can have a large impact on the global carbon cycle (Rascher & Nedbal, 2006; Schurr et al., 2006; Alonso et al., 2017).
As an alternative to EC measurements, remote sensing (RS) approaches, such as reflectance-based vegetation indices (VIs), have been used to study photosynthesis through the assessment of light-use efficiency (LUE) and gross CO2 uptake. The success of reflectance-based vegetation indices often depends on the exact context, including spatial and temporal scales (Gamon et al., 2015, 2019). Established VIs such as the normalized difference vegetation index (NDVI) and the enhanced vegetation index (EVI) generally track large-scale and seasonal variations in GPP, which are related to the greenness of the vegetation. However, these correlations may break down under conditions where photosynthetic functioning is decoupled from canopy greenness, such as under stress conditions or in evergreen forests during winter. The more recently developed chlorophyll/carotenoid index (CCI), proposed as an indicator of changing chlorophyll and carotenoid pigment ratios, appears to track seasonal changes in photosynthetic activity (Gamon et al., 2016). Furthermore, the near-infrared reflectance of terrestrial vegetation index (NIRvref) was proposed as a suitable proxy for global GPP estimates based on MODIS reflectance data (Badgley et al., 2017). NIRvrad (NDVI multiplied by radiance at 800 nm instead of reflectance), used in subsequent studies, was shown to preserve the NIRv-GPP relationship under drought conditions (Badgley et al., 2019).
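Since both NIRv variants are simple products of NDVI and the near-infrared band, the distinction is easy to make explicit. A minimal sketch in Python, with hypothetical reflectance and radiance values standing in for real tower or MODIS data:

import numpy as np

# Illustrative measurements (assumed values, not from the cited studies)
red_refl = np.array([0.05, 0.06, 0.05])  # red reflectance
nir_refl = np.array([0.45, 0.50, 0.48])  # near-infrared reflectance (~800 nm)
nir_rad = np.array([80.0, 95.0, 60.0])   # near-infrared radiance (~800 nm)

ndvi = (nir_refl - red_refl) / (nir_refl + red_refl)
nirv_ref = ndvi * nir_refl  # reflectance-based NIRv (Badgley et al., 2017)
nirv_rad = ndvi * nir_rad   # radiance-based NIRv, which also carries the illumination signal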
Yet another option to track GPP has been investigated intensively in recent decades: the red and far-red fluorescence signals (SIF687 and SIF760), which are closely related to the efficiency with which light energy is used in the first steps of photosynthesis (the so-called 'light reactions'). They have been proposed to offer potential improvements over reflectance-based approaches (Ač et al., 2015; Campbell et al., 2019), which is one of the foundations of the upcoming FLEX satellite mission, ESA's Earth Explorer 8 (Drusch et al., 2017; Mohammed et al., 2019).
In this study, we investigated data from a winter wheat field located in the western part of Germany. The study site is dominated by agricultural fields and is intensively monitored as part of TERENO (https://www.tereno.net) and as a class 1 site within the European ICOS infrastructure (www.icos-cp.eu). The field was equipped with an EC station providing meteorological and flux data and a D-FloX device providing hyperspectral reflectance and radiance data. The fluorescence-based metrics used in this study are SIF760, SIF687 and SIFTOT (derived from the integral under the emission curve), retrieved with a spectral fitting method. These SIF products are furthermore normalized (SIFnorm) by the incident radiation between 400 and 700 nm (PAR) to approximate SIF yield. We focus on two measurement campaigns in the 2018 growing season: i) the elongation period of winter wheat from May 9 to 27, when the canopy is green and closed, plants are still elongating and fruit set occurs; and ii) a period towards the end of the growing season (June 29 to July 1), when the canopy becomes senescent and visibly turns from green to brown.
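As an illustration of how the fluorescence metrics are derived, here is a minimal sketch assuming a hypothetical retrieved emission spectrum and PAR reading (names and values are placeholders, not the D-FloX processing chain):

import numpy as np

wl = np.linspace(650, 850, 401)  # nm, fluorescence emission range
sif = (1.2 * np.exp(-0.5 * ((wl - 687) / 10) ** 2)
       + 2.0 * np.exp(-0.5 * ((wl - 740) / 25) ** 2))  # toy two-peak spectrum, mW m-2 sr-1 nm-1
par = 1500.0  # incident 400-700 nm radiation (illustrative units)

sif_tot = np.trapz(sif, wl)  # SIFTOT: integral under the emission curve
sif_norm = sif_tot / par     # SIFnorm: PAR-normalized, approximating SIF yield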
With this study, we show that reflectance-based VIs were useful to track greenness at larger temporal scales, but do not depict changes in photosynthesis at the sub-diurnal scale. NIRvrad, incorporating PAR, followed diurnal GPP dynamics better than reflectance-based VIs. Both metrics, SIF and NIRvrad, represent the diurnal dynamics of GPP in the closed green canopy better than reflectance-based measures. We found that in the phase of ear emergence, until mid-May, hourly GPP values increased from 30 to 50 µmol·m-2·s-1 and then dropped again to about 30 µmol·m-2·s-1, with fluctuations on diurnal and sub-diurnal scales. This trend is not explicitly followed by any of the calculated RS parameters. Reflectance-based VIs remained stable, without meaningful changes at the daily and sub-daily scale during this period. NIRvrad and the SIF products show variations at diurnal and sub-diurnal scales, with more fluctuations than GPP. Furthermore, we found SIFnorm to show a decreasing tendency over the whole period. On May 9, a clear-sky day at the beginning of this period, GPP showed a diurnal course peaking around noon, followed most closely by NIRvrad and SIFTOT. In the second investigated period, towards the end of the growing season, GPP decreased continuously from day to day. The reflectance-based measures follow senescence as a larger seasonal pattern with a steady decrease. NIRvrad shows a slight decrease during this period and still varies at (sub-)diurnal scales. The SIF760 and SIF687 signals are already very low (<0.4 mW·m-2·sr-1·nm-1). On June 29, a clear-sky day, GPP decreased over the day, while reflectance-based VIs showed slight diurnal changes but no decrease like GPP. NIRvrad still shows a diurnal course, while SIF760 and SIF687 remain below 0.5 mW·m-2·sr-1·nm-1.
Our results demonstrate that, for the investigated wheat field, each of these metrics offers different and complementary information. NIRvrad, incorporating PAR, performed better than reflectance-based VIs in following diurnal GPP dynamics, as it tracks the absorbed photosynthetically active radiation more closely and thus better represents actual photosynthetic efficiency. It may also sense canopy-structural effects, although untangling canopy structure and how it affects the radiation field is complex. SIF, while highly sensitive to physiological-structural interactions, is the most direct measure of photosynthetic activity and thus a valuable indicator of dynamic changes in plant physiology that cannot be replaced by NIRvrad. Although SIF measured at the canopy scale is partly shaped by plant structure, it additionally provides information on plant physiology and thus helps to understand seasonal GPP patterns. Based on our findings, we suggest the joint use of optical RS parameters, namely reflectance- and radiance-based VIs and SIF, to improve current estimates of GPP from sub-diurnal to seasonal scales. In combination, they are likely to provide better estimates of actual vegetation function at the ecosystem scale than either used alone. Further work is in progress to include them in a full light-use efficiency (LUE) model to describe the observed GPP as a proxy for carbon fixation and to improve forward models of GPP.
Ač, Alexander, et al. "Meta-analysis assessing potential of steady-state chlorophyll fluorescence for remote sensing detection of plant water, temperature and nitrogen stress." Remote sensing of environment 168 (2015): 420-436.
Alonso, Luis, et al. "Diurnal cycle relationships between passive fluorescence, PRI and NPQ of vegetation in a controlled stress experiment." Remote Sensing 9.8 (2017): 770.
Campbell, Petya KE, et al. "Diurnal and seasonal variations in chlorophyll fluorescence associated with photosynthesis at leaf and canopy scales." Remote Sensing 11.5 (2019): 488.
Badgley, Grayson, Christopher B. Field, and Joseph A. Berry. "Canopy near-infrared reflectance and terrestrial photosynthesis." Science advances 3.3 (2017): e1602244.
Badgley, Grayson, et al. "Terrestrial gross primary production: Using NIRV to scale from site to globe." Global change biology 25.11 (2019): 3731-3740.
Drusch, Matthias, et al. "The FLuorescence EXplorer mission concept—ESA's Earth Explorer 8." IEEE Transactions on Geoscience and Remote Sensing 55.3 (2017): 1273-1284.
Gamon, J. A. "Optical sampling of the flux tower footprint." Biogeosciences Discussions 12.6 (2015).
Gamon, John A., et al. "A remotely sensed pigment index reveals photosynthetic phenology in evergreen conifers." Proceedings of the National Academy of Sciences 113.46 (2016): 13087-13092.
Gamon, J. A., et al. "Assessing vegetation function with imaging spectroscopy." Surveys in Geophysics 40.3 (2019): 489-513.
Mohammed, Gina H., et al. "Remote sensing of solar-induced chlorophyll fluorescence (SIF) in vegetation: 50 years of progress." Remote sensing of environment 231 (2019): 111177.
Rascher, Uwe, and Ladislav Nedbal. "Dynamics of photosynthesis in fluctuating light." Current opinion in plant biology 9.6 (2006): 671-678.
Schurr, U., A. Walter, and U. Rascher. "Functional dynamics of plant growth and photosynthesis–from steady‐state to dynamics–from homogeneity to heterogeneity." Plant, Cell & Environment 29.3 (2006): 340-352.
Fires affect ecosystems, global vegetation distribution, atmospheric composition and human-built infrastructure. The climatic, socio-economic and environmental factors that affect global fire activity are not well understood, and their contribution is therefore parameterized in global process-based vegetation models. Fire's climatic and ecological characteristics have been successfully identified using data-driven modeling approaches such as machine learning; however, socio-economic factors have not been explored in detail at the global scale. Humans alter fire activity by different means, e.g. by acting as a source of ignition, by suppressing fires, and by changing fuel availability and structure. These factors cannot easily be integrated into process-based vegetation models. Data-driven models can thus characterize these factors in time and space, enabling their better representation in process-based models. We created an ensemble of random forest models to test the importance of several socio-economic variables in predicting fire ignition occurrence at the global scale, starting with a baseline model characterizing climate and vegetation and then training subsequent models that each add a single socio-economic variable (e.g. population density, gross domestic product, or distance to population centers).
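The experimental design can be sketched as follows: train a baseline random forest on climate/vegetation predictors only, then retrain with a single socio-economic column appended and compare predictive skill. The snippet below uses synthetic data and placeholder variable names, not the study's actual inputs:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
climate_veg = rng.random((n, 4))               # e.g. temperature, precipitation, FAPAR, soil moisture
pop_density = rng.lognormal(mean=1.0, size=n)  # one socio-economic variable
ignitions = 10 * climate_veg[:, 0] + 2 * np.log1p(pop_density) + rng.normal(0, 1, n)

X_base = climate_veg
X_socio = np.column_stack([climate_veg, pop_density])
Xb_tr, Xb_te, Xs_tr, Xs_te, y_tr, y_te = train_test_split(X_base, X_socio, ignitions, random_state=0)

base = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xb_tr, y_tr)
socio = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xs_tr, y_tr)
print(base.score(Xb_te, y_te), socio.score(Xs_te, y_te))  # skill gain attributable to the added variable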
Our models successfully capture the seasonality and spatial distribution of fire hotspots. High ignition occurrence across Sub-Saharan Africa positively influences the models' ability to predict fires in regions with seasonal ignition occurrence. In general, the models reduce bias in ignition predictions relative to observations when a socio-economic variable known to influence fire ignitions is added to the base model. Our models also demonstrate the importance of specific variables in reducing bias in annual ignition sums between the baseline model predictions and observations; for example, over Sierra Leone and most of Kenya, population and livestock density reduce this bias. We also show the power of our models to reproduce the seasonality of fire occurrence, even over regions where observations of fire ignitions are rare. Finally, we discuss how data-driven modeling with multiple socio-economic variables can help inform the development of process-based vegetation models.
The CO2 sink associated with gross primary production (GPP) fluxes during photosynthesis is an important component of the global carbon cycle that is influenced by a variety of factors on a wide range of time scales, from hourly, daily and seasonal to annual. In this study, we present an assessment of the impact of changing the vegetation state (leaf area index, LAI), climate conditions (e.g. radiation, temperature, humidity, soil moisture) and land use/land cover (LULC) on GPP, using Earth observation (EO) datasets and a new photosynthesis model recently implemented in the ecland land surface model. ecland is part of the Integrated Forecasting System (IFS) at the European Centre for Medium-Range Weather Forecasts (ECMWF), and its photosynthesis model is used operationally in the Copernicus Atmosphere Monitoring Service (CAMS) CO2 analyses and forecasts. The new photosynthesis model is based on the Farquhar, von Caemmerer and Berry model (for C3 plants), which will enable the simulation of solar-induced fluorescence (SIF) by the vegetation (through a specific observation operator) for the assimilation of satellite-based SIF data. Compared to the current operational A-gs photosynthesis model, it produces an improved seasonal cycle of GPP with respect to FLUXNET eddy covariance observations. Besides the improved representation of the underlying photosynthesis processes, the GPP from ecland relies on accurate LULC and LAI maps to upscale the fluxes from the leaf to the vegetation canopy at global scale. This study explores the sensitivity of GPP to new satellite-based LAI and LULC datasets with an ensemble of simulations. The simulations are performed at 25 km and 9 km resolution with annually varying and climatological LAI from the Copernicus Global Land Service (CGLS), fixed and annually varying LULC maps derived from the ESA CCI land cover products, as well as fixed and annually varying climate forcing from the ERA5 re-analysis. The impact of the LAI and LULC satellite-based datasets on GPP is evaluated using TROPOSIF data from TROPOMI on board Sentinel-5P. TROPOMI offers an unprecedented resolution and spatial coverage, allowing a detailed assessment of the GPP spatio-temporal variability. Specific emphasis is placed on GPP hotspots such as croplands and forests to assess the strengths and limitations of ecland in preparation for the future assimilation of SIF observations in the global CO2 Monitoring and Verification system, currently being developed within the CoCO2 project and CAMS.
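For reference, the core of the Farquhar, von Caemmerer and Berry formulation limits net assimilation by the slower of two potential rates; in its simplest textbook form (generic symbols, not ecland's exact implementation):

\[ A_n = \min(A_c, A_j) - R_d \]

where \(A_c\) is the Rubisco-limited carboxylation rate, \(A_j\) the electron-transport (light) limited rate, and \(R_d\) the dark respiration.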
Temporally and spatially irregular observations of forest variables through forest inventories mean that knowledge of the terrestrial carbon (C) cycle is limited. Satellite remote sensing can provide supplementary observations, but cannot achieve the same level of accuracy because it does not provide a direct quantitative measure of the organic mass stored in vegetation. In contrast to approaches that estimate forest variables from remote sensing data acquired at high resolution, recent activities have started exploring the contribution of coarse-resolution observations originally designed for other types of monitoring (e.g., wind speed and direction, soil moisture, ocean salinity, sea-ice concentration). Data acquired by missions operating coarse-resolution sensors are appealing in the context of assessing the terrestrial carbon cycle because of their global and repeated coverage of all terrestrial surfaces since the late 1970s. In addition, such missions are guaranteed for future decades, which will eventually lead to the longest data record of Earth observations from space.
The record of backscatter observations collected by the European Remote Sensing (ERS) Wind Scatterometer and the MetOp Advanced Scatterometer (ASCAT), both operating at C-band (wavelength of 6 cm), is one of the longest available. An almost unbroken time series of backscatter observations at 0.25° spatial resolution exists since 1991, and data continuity is guaranteed for the coming decades. As this is the only active microwave dataset available for the 1990s, the scatterometer time series has unique value for tracking carbon dynamics in regions with poor coverage from equivalent optical sensors due to persistent cloud cover (tropics) or unfavourable solar illumination (boreal zone).
Despite the well-known weak sensitivity of C-band backscatter to AGB, reliable wall-to-wall estimates of AGB have been derived from high-resolution SAR observations by exploiting multiple observations acquired within a relatively short time interval (Santoro et al., 2011; Santoro et al., 2015). This approach was recently extended to C-band scatterometer data (Santoro et al., submitted) and yielded global estimates of AGB comparable to averages obtained from plot inventory data or LiDAR-based AGB maps. The uncertainty of our AGB estimates is between 30% and 40% of the estimated value at the pixel level, a relevant aspect in the context of accurately estimating carbon stocks and changes. In our presentation, we will introduce the AGB retrieval method and discuss the strengths and limitations of the AGB estimates.
Starting in 1992, we have now generated almost 30 years of AGB estimates with a spatial resolution of 0.25°. The spatio-temporal patterns of AGB largely match the patterns of canopy cover described in the MEaSUREs Vegetation Continuous Fields (VCF) dataset (Song et al., 2018). Our estimates indicate a constant increase of AGB in most boreal and temperate forests of the northern hemisphere, except for regions characterized by disturbances, where severe losses in the 1990s have only recently been compensated for. Severe loss of biomass following massive deforestation was identified throughout the wet tropics during the 1990s and the early 2000s. Since the late 2000s, AGB appears to have recovered, but without further increments in the most recent years. Mostly due to the strong increase of biomass in temperate regions, the global AGB density is estimated to have increased by 9%, from 71.8 Mg ha-1 in the 1990s to 78.1 Mg ha-1 in the 2010s. Accordingly, the AGB stock in forests decreased slightly from 566 Pg in the 1990s to 560 Pg in the 2000s, then increased to 593 Pg in the 2010s, resulting in an almost 5% net increase over the last three decades. These results will be reviewed in our presentation, and we will give some first insights into the evolution of the terrestrial biomass pool since the start of the COVID-19 pandemic, based on the most recent data acquired by ASCAT in 2020 and 2021.
References
Santoro, M., Beer, C., Cartus, O., Schmullius, C., Shvidenko, A., McCallum, I., Wegmüller, U., Wiesmann, A., 2011. Retrieval of growing stock volume in boreal forest using hyper-temporal series of Envisat ASAR ScanSAR backscatter measurements. Remote Sensing of Environment 115, 490–507. https://doi.org/10.1016/j.rse.2010.09.018
Santoro, M., Beaudoin, A., Beer, C., Cartus, O., Fransson, J.E.S., Hall, R.J., Pathe, C., Schmullius, C., Schepaschenko, D., Shvidenko, A., Thurner, M., Wegmüller, U., 2015. Forest growing stock volume of the northern hemisphere: Spatially explicit estimates for 2010 derived from Envisat ASAR. Remote Sensing of Environment 168, 316–334. https://doi.org/10.1016/j.rse.2015.07.005
Santoro, M., Cartus, O., Wegmüller, U., Besnard, S., Carvalhais, N., Araza, A., Herold, M., Liang, J., Cavlovic, J., Engdahl, M.E., submitted. Estimation of above-ground biomass from spaceborne C-band scatterometer observations and LiDAR metrics of vegetation structure. Remote Sensing of Environment.
Song, X.-P., Hansen, M.C., Stehman, S.V., Potapov, P.V., Tyukavina, A., Vermote, E.F., Townshend, J.R., 2018. Global land change from 1982 to 2016. Nature 560, 639–643. https://doi.org/10.1038/s41586-018-0411-9
Vegetation chlorophyll fluorescence retrieval approaches from tower, airborne and satellite platforms are becoming mature, and the signal is now commonly used to improve our understanding of the terrestrial carbon cycle. Solar-induced vegetation fluorescence, emitted by chlorophyll a molecules as a small radiative flux in the 650-850 nm range, hence provides new quantitative information for understanding vegetation status from the leaf to the landscape and global scales. The final goal is to use the canopy-leaving fluorescence signal as an unbiased estimate of the photosynthetic activity of the underlying vegetation. However, correctly interpreting the canopy-leaving chlorophyll fluorescence signal, which is small compared to the reflected solar radiation, is not straightforward.
As part of the FLEX L1B-to-L2 Algorithm Retrieval and Product Development Study, retrieval strategies for photosynthesis-related products are being developed based on the synergistic FLEX-FLORIS and Sentinel-3 OLCI spectral information. Current algorithm developments in the context of the mission explore molecular insights into light-harvesting dynamics, which impose direct constraints on carbon uptake, especially when excess energy reaches the vegetation. To establish the link between vegetation fluorescence and the core photosynthetic light-reaction dynamics, further advanced signal processing is proposed, which allows a quantitative exploitation of the obtained fluorescence signal. Hereby, the full spectral information in the 500-800 nm region is used as input for the processing of the top-of-canopy fluorescence emission, following a bottom-up, pigment molecular-level approach.
One of the essential products proposed is the fluorescence quantum efficiency (FQE), the ratio between the emitted fluorescence quanta and the absorbed quanta that trigger the emission. The latter refers to the radiation absorbed by the light-harvesting pigments, with chlorophyll a molecules as the dominant photoreceptors of the incoming solar radiation. Disentangling the differential absorption of the overlapping pigments is demonstrated based on spectral fitting of the FLORIS-HR 500-780 nm reflectance product using individual pigment absorption coefficients. The spectrally resolved fAPAR contribution of chlorophyll a is retrieved, considering within-leaf and within-canopy multiple absorption and scattering. The canopy-leaving vegetation fluorescence is further consistently corrected for re-absorption and scattering, whereupon the ratio of the corrected emission over the retrieved absorption is calculated as the FQE. FQE can be used as a first indicator of the photosynthetic efficiency of the vegetated surface and is indicative of the excitation pressure on the chlorophyll molecules and, by assumption, on the whole photosynthetic antenna system. Although the relationship tends to be more complex than that, due to the activation of non-photochemical quenching mechanisms which change the qualitative coupling between fluorescence and photosynthesis, the retrieval of FQE serves as the essential step towards quantifying more precisely the energy eventually used by the carbon reactions. Further, by using a bottom-up approach to characterize and fit the shape of the spectral fluorescence emission, additional information can be gained on the energy partitioning mechanisms in the light-harvesting reactions across the two photosystems, PSI and PSII.
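Schematically, the ratio described above can be written as (notation ours, for illustration):

\[ \mathrm{FQE} = \frac{F_{\mathrm{corr}}}{\mathrm{fAPAR}_{\mathrm{Chl}a} \cdot E_{\mathrm{PAR}}} \]

where \(F_{\mathrm{corr}}\) is the canopy-leaving fluorescence emission corrected for re-absorption and scattering, and the denominator is the incident PAR absorbed specifically by chlorophyll a.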
With these advances in the interpretation of the vegetation fluorescence signal, both quantitative and qualitative, FLEX will better quantify the actual light use through photosynthesis and vegetation growth through carbon assimilation. Hence, the retrieval of FQE, combined with additional information on the dynamic regulation of the energy pathways in the light reactions, presents promising opportunities to improve our understanding of vegetation dynamics in the global carbon cycle.
The Amazon's forests are at risk from continued deforestation and climate change, leading to increased vulnerability to forest degradation (Matricardi et al., 2020). These processes weaken the forests' environmental services. Meanwhile, secondary forests regrowing after disturbance and agricultural abandonment have the potential to partially offset carbon losses (Heinrich et al., 2021). Understanding these opposing drivers of carbon dynamics is of great importance, as studies find that regions of the Amazon are already acting as a carbon source. At the same time, forest degradation is not accounted for in national commitments to reduce greenhouse gas emissions (Silva Junior et al., 2021).
Contributing to the European Space Agency's Regional Carbon Cycle Assessment and Processes – Phase 2 (ESA RECCAP2) project, we explore the use of remote sensing data for monitoring changes in the Amazon's aboveground carbon (AGC) stocks. We used L-band vegetation optical depth (L-VOD) from the Soil Moisture and Ocean Salinity (SMOS) satellite mission over the 2011-2019 period as a valuable new asset to reveal the locations and extent of recent changes in AGC over the Amazon biome. The coarse resolution of the L-VOD data (0.25°) allows only limited attribution to processes occurring at finer scales. We address this by combining high-resolution (30 m) land cover data mapping annual forest cover change and new degradation (Vancutsem et al., 2021) with static AGC maps. This allows us to model spatially specific gains and losses from deforestation, degradation and secondary forest regrowth, and to compare and consolidate these estimates with the L-VOD-inferred AGC change.
Initial results reveal that areas showing significant decreases in AGC are five times larger than those showing an increase. The Amazon's carbon stocks are declining, with a ~2% reduction since 2012. The L-VOD top-down and modelled bottom-up estimates agree on the areas of greatest loss, though regional disagreements are evident for low-biomass/agricultural areas or areas with small-scale disturbances. Deforestation accounts for the greatest carbon losses and is increasingly occurring in secondary forest areas. Losses incurred by forest degradation are estimated at approximately 65% of those from deforestation. Furthermore, L-VOD-inferred changes over areas that are mostly intact old-growth forests reveal considerable inter-annual variability of AGC and reductions over the 2011-2019 period in the south-eastern Amazon.
Our findings point towards an overall weakening of the Amazon forest's potential to mitigate climate change due to increasing deforestation. Therefore, recent pledges by Amazon countries, including Brazil, at COP26 to end and reverse deforestation by 2030 must be acted upon immediately to avoid the cascading effects of deforestation, which lead to degradation and further future carbon loss.
Heinrich, V. H. A., Dalagnol, R., Cassol, H. L. G., Rosan, T. M., Torres, C., Almeida, D., … Aragão, L. E. O. C. (2021). Large carbon sink potential of Amazonian Secondary Forests to mitigate climate change. Nature Communications, 12, 4–6. https://doi.org/10.1038/s41467-021-22050-1
Matricardi, E. A. T., Skole, D. L., Costa, O. B., Pedlowski, M. A., Samek, J. H., & Miguel, E. P. (2020). Long-term forest degradation surpasses deforestation in the Brazilian Amazon. Science, 369(6509), 1378–1382. https://doi.org/10.1126/SCIENCE.ABB3021
Silva Junior, C. H. L., Carvalho, N. S., Pessôa, A. C. M., Reis, J. B. C., Pontes-Lopes, A., Doblas, J., … Aragão, L. E. O. C. (2021). Amazonian forest degradation must be incorporated into the COP26 agenda. Nature Geoscience, 14(9), 634–635. https://doi.org/10.1038/s41561-021-00823-z
Vancutsem, C., Achard, F., Pekel, J.-F., Vieilledent, G., Carboni, S., Simonetti, D., … Nasi, R. (2021). Long-term (1990-2019) monitoring of forest cover changes in the humid tropics. Science Advances, 7(10). https://doi.org/10.1126/sciadv.abe1603
The availability and temporal dynamics of vegetation biomass, or living and dead fuel, are a main driver of the occurrence, spread, intensity and emissions of fires. Several studies have shown that fuel build-up in antecedent (wet) seasons can increase burned area in the following (dry) season. To estimate fire emissions, the amount and dynamics of fuel loads are commonly estimated using biogeochemical models. Alternatively, data-driven approaches to modeling fire dynamics with machine learning methods often make use of satellite time series of leaf area index (LAI), the fraction of absorbed photosynthetically active radiation (FAPAR), or vegetation optical depth (VOD) as proxies of the temporal dynamics of fuel availability. Although LAI or FAPAR time series provide information about the temporal dynamics of vegetation and hence fuels, they cannot be used directly to estimate fire emissions. Alternatively, global or continental maps of fuel beds or of above-ground biomass, such as those from ESA's Climate Change Initiative (CCI), provide direct estimates of fuel loads for different fuel types; however, they do not provide sufficient temporal coverage to assess temporal changes in fuel loads. Here we propose a novel data-driven approach to estimate the temporal dynamics of vegetation fuel loads by combining various Earth observation products with information from databases of ground observations.
Our approach combines the temporal information from LAI, FAPAR and VOD time series and from annual land cover maps with the time-invariant information from maps of above-ground biomass and large-scale fuel databases. Specifically, we use LAI and FAPAR from Sentinel-3 and Proba-V, VOD from the VODCA dataset and from SMOS, annual land cover maps from ESA CCI, maps of above-ground biomass (AGB) from ESA CCI, and information from the North America Wildland Fuel Database and the Biomass And Allometry Database.
The estimation of fuel loads is based on two different approaches. The first makes use of an empirical allometry model to estimate the fuel loads of different biomass compartments of trees and herbaceous vegetation, using total AGB and LAI as input. Based on allometric equations, the biomass of stems, branches and leaves and the total woody biomass are estimated, whereby LAI serves as a proxy for the temporal dynamics of leaf and herbaceous biomass. Long-term changes in total AGB are estimated based on regional non-linear regressions with the spatial patterns of tree cover, maximum LAI and VOD as predictors. As an alternative, the use of novel products of AGB change, such as from BIOMASCAT, is explored. The allometric parameters are estimated from the Biomass And Allometry Database.
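A minimal sketch of such a compartment split, with entirely hypothetical coefficients standing in for the parameters that would be fitted to the Biomass And Allometry Database:

import numpy as np

def fuel_compartments(agb, lai, sla=12.0, f_stem=0.65):
    # agb: total above-ground biomass (kg m-2); lai: leaf area index (m2 m-2)
    # sla: specific leaf area (m2 kg-1) and f_stem: stem fraction of woody biomass,
    # both illustrative placeholders rather than fitted values
    leaf = lai / sla                     # leaf biomass inferred from LAI
    woody = np.maximum(agb - leaf, 0.0)  # remaining woody biomass
    return {"leaf": leaf, "stem": f_stem * woody, "branch": (1.0 - f_stem) * woody}

print(fuel_compartments(agb=12.0, lai=3.0))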
The second approach makes use of machine learning models to transfer the measurements from the North America Wildland Fuel Database to other regions. Land cover, LAI and AGB from the Earth observation datasets are used as predictors for the fuel loads of trees, shrubs, grass, fine and coarse woody debris, and duff. Spatial cross-validation is used to estimate and evaluate random forest regression models and to provide uncertainty estimates of the fuel loads. The approaches are developed and tested in four study regions, in Brazil, southern Africa, central Asia and northern Siberia, to cover a wide range of ecosystems. First results demonstrate the feasibility of estimating temporal changes in fuel loads by integrating the respective temporal and spatial information from the various Earth observation datasets.
For this work, we acknowledge the European Space Agency for funding the Sense4Fire (sense4fire.eu) project.
Nature-based carbon sequestration is one of the most straightforward ways to extract carbon dioxide from the atmosphere and store it.
Urban forests hold the promise of optimized carbon storage and temperature reduction in cities. Remote sensing imagery can identify tree location and size, classify trees by species and track tree health. Using multi- and hyperspectral overhead imagery, green vegetation can be separated from various land use types. Moreover, by further refining models with texture and contextual information, trees can be spatially separated from bushes and grass-covered surfaces. While spectral-based tree identification can achieve accuracies of about 90%, deep learning models trained on even noisily labeled data can further improve tree identification.
Once trees are identified in two-dimensional remote sensing images, allometric models allow tree height and tree growth to be estimated from climate data, topography and soil properties. The biomass of the trees is calculated per tree species using geometrical and phenological models, and the carbon stored in trees can be quantified at the individual tree level. Furthermore, the models allow identifying areas densely covered by trees and pinpointing bare land where further trees may be planted.
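As an illustration of tree-level carbon quantification, here is a sketch using a generic power-law allometry; the pantropical model of Chave et al. (2014) and a carbon fraction of 0.5 are common choices, not necessarily the models used in this work:

def tree_carbon_kg(dbh_cm, height_m, wood_density=0.6, carbon_fraction=0.5):
    # Pantropical allometry (Chave et al., 2014): AGB = 0.0673 * (rho * D^2 * H)^0.976
    agb_kg = 0.0673 * (wood_density * dbh_cm ** 2 * height_m) ** 0.976
    return carbon_fraction * agb_kg  # roughly half of dry biomass is carbon

print(tree_carbon_kg(dbh_cm=30.0, height_m=18.0))  # e.g. a single street tree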
Exploiting land surface temperature maps from satellite thermal measurements, e.g. from the Sentinel or Landsat missions, urban heat islands can be mapped at city scale. Urban heat islands vary with season and weather conditions; areas that are persistently warmer than the average city temperature background can be identified from time series of data. The correlation of local temperature, tree cover and land perviousness helps to identify local climate zones, and may also refine and re-evaluate the definition of Local Climate Zones (LCZ). We employ the PAIRS geospatial information platform to demonstrate a scalable solution for tree delineation, carbon sequestration and urban heat island identification for three global cities: Madrid, New York City and Dallas, TX.
Natural and anthropogenic disturbances act as strong drivers of tree mortality, shaping the structure, composition and biomass distribution of forests. Disturbance dynamics may change over time and vary in space, depending mainly on climate regimes and land use and land cover change. Although well defined from a mechanistic perspective, different disturbances are currently not well characterized, and few studies have formally quantified the link between the frequency, intensity and aggregation characterizing different disturbance regimes and biomass patterns and dynamics.
Here, we design a model-based experiment to investigate the links between disturbance regimes at the landscape scale and spatial features of biomass patterns. The effects on biomass of a wide range of disturbance regimes are simulated based on different values of μ (probability scale), α (clustering degree) and β (intensity slope), which respectively shape the extent, frequency and intensity of disturbance events. A simple dynamic carbon cycle model is used to simulate 200 years of plant biomass dynamics in response to circa 2,000 different disturbance regimes, corresponding to different combinations of μ, α and β. Each parameter combination yields a spatially explicit estimate of plant biomass, for which different synthesis statistics are computed (e.g. mean, median, standard deviation, quantiles, skewness). Based on a multi-output regression approach, we link these synthesis statistics back to the three disturbance parameters to evaluate the confidence with which disturbance regimes can be inferred from spatial distributions of biomass alone.
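A minimal sketch of this inversion, with synthetic summary statistics standing in for the simulated biomass fields (all relationships are toy placeholders):

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 1000
params = rng.random((n, 3))  # columns: mu, alpha, beta; one row per simulated landscape
stats = np.column_stack([    # toy summary statistics of each biomass field
    1.0 - 0.8 * params[:, 0] + rng.normal(0, 0.02, n),                 # mean biomass falls with mu
    0.5 * params[:, 1] + rng.normal(0, 0.02, n),                       # texture metric tracks alpha
    0.3 * params[:, 2] + 0.2 * params[:, 0] + rng.normal(0, 0.02, n),  # skewness mixes beta and mu
])

X_tr, X_te, y_tr, y_te = train_test_split(stats, params, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)  # natively multi-output
pred = rf.predict(X_te)
nse = 1 - ((y_te - pred) ** 2).sum(axis=0) / ((y_te - y_te.mean(axis=0)) ** 2).sum(axis=0)
print(nse)  # one Nash-Sutcliffe efficiency per disturbance parameter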
Our results show that all three parameters can be confidently reproduced using a reasonable set of statistical features of the biomass spatial distribution: the Nash-Sutcliffe efficiency (NSE) for the prediction of the three disturbance regime parameters exceeds 0.95. A feature importance analysis reveals that the distribution statistics dominate the prediction of μ and β, while features quantifying texture have a stronger connection with α. With the support of biomass observations, such as the global biomass datasets from the ESA DUE GlobBiomass project, disturbance regimes at the landscape level can be retrieved within this simulation framework. Despite the current assumptions on primary productivity, autocorrelation and similarity in post-disturbance dynamics, this study quantifies the association between biomass patterns and the underlying disturbance regimes. Given that current Earth observation datasets on biomass at high resolution have a very limited temporal range, if any, this approach could provide a unique perspective for deriving aspects of biomass dynamics from high-resolution imagery. Overall, a better understanding and quantification of disturbance regimes would improve our current understanding of controls and feedbacks at the biosphere-atmosphere interface and improve the representation of disturbance dynamics in current Earth system models.
Land-use and land-cover changes (LULCC) are a major contributor to anthropogenic emissions, making up about 10% of total anthropogenic CO2 emissions over the last decade and being the major source of emissions in certain countries. Despite their great importance, estimates of the net CO2 flux from LULCC (ELUC) have high relative uncertainties compared to other components of the global carbon cycle. One major source of uncertainty lies in the underlying LULCC forcing data, which are mostly generated through a combination of Earth observations and other statistical data streams. By implementing a new high-resolution LULCC dataset (HILDA+) in a bookkeeping model (BLUE), we are able to illustrate spatial and temporal uncertainties in ELUC estimates related to (1) LULCC reconstructions and (2) the spatial resolution of the LULCC forcing. Compared to estimates based on LUH2, the LULCC dataset most commonly used in global ELUC models, estimates based on HILDA+ show substantially lower ELUC fluxes and reveal large spatial and temporal differences in component fluxes (e.g., CO2 fluxes from deforestation). In general, the congruence is higher in the mid-latitudes than in tropical and subtropical regions. However, little agreement is reached on the trend of the last decade between ELUC estimates based on the two LULCC reconstructions. By comparing ELUC estimates from simulations with the same LULCC forcing at 0.01° and 0.25° resolution, we find that component fluxes tend to be larger at the coarser resolution, both in terms of sources and sinks. The reason for these differences is successive transitions: these are not adequately represented at coarser resolution, with the effect that, despite capturing the same extent of transition areas, less area overall remains pristine at the coarser resolution than at the finer resolution. This phenomenon has not been described in previous studies. To our knowledge, this is the first study of global ELUC estimates (1) at 0.01° resolution and (2) based on two independently derived, spatially explicit LULCC datasets. The large sensitivity of greenhouse gas fluxes to the land-use forcing highlights the high relevance of Earth observation for monitoring LULCC dynamics, in particular at high resolution. Integration with other data sources on LULCC reaching back into the pre-satellite era, which is a requirement for capturing the long timescales of carbon cycle dynamics, should also be a key priority in order to robustly quantify ELUC emissions.
The Land Surface Carbon Constellation study (https://lcc.inversion-lab.com), funded by ESA, aims to investigate the response of the terrestrial biosphere's net ecosystem exchange to climatic drivers. This is done by combining a process-based model with a wide range of in-situ and remotely sensed observations at local and regional scales. The project aims to demonstrate the synergistic exploitation of satellite observations from active and passive microwave sensors, together with optical data, for a better characterization of carbon and water cycling on land.
In order to support the development of the model and the data assimilation scheme on the local scale, field campaigns are being carried out at three well-instrumented sites: (1) Sodankylä, Finland, located in a boreal evergreen needleleaved forest biome; (2) Majadas de Tietar, Spain, located in a temperate savanna biome, and (3) Reusel, The Netherlands, located over agricultural land.
At each site, an extensive suite of instrumentation has been installed to measure soil, vegetation and atmospheric properties. Permanent measurements include, among others, meteorological data, sensors measuring soil moisture profiles and the water content of standing vegetation, and eddy covariance systems to measure carbon, water and energy fluxes. Reference instrumentation to measure, at the local scale, observables available from satellite remote sensing (microwave brightness temperature and backscatter, upwelling radiance) has been installed at the sites. These measurements are used to derive parameters such as vegetation optical depth (VOD) and solar-induced fluorescence (SIF) for local-scale model assimilation experiments. Additional campaign measurements are being carried out to quantify seasonal variations in, e.g., LAI, NDVI and above-ground biomass.
We present the main results of the first campaign season in 2021, describing instrumentation, data collection protocols, calibration and data quality control measures. Initial findings of interconnections between various physical processes and variables observed by remote sensing methods are presented. The Land Surface Carbon Constellation study is a collaborative project led by Lund University with participation of The Inversion Lab, CESBIO, University of Edinburgh, University of Reading, TU Delft, TU Wien, MPI-B, University of Valencia, WSL, FZ Jülich and FMI.
Accurate estimates of the net carbon flux from land use and land cover changes (fLULCC) are crucial for understanding the global carbon cycle and for supporting climate change mitigation targets. However, it is difficult to derive fLULCC from observations at larger spatial scales, because CO2 fluxes from land use co-occur with those caused by natural effects (such as the effects of CO2 and climate change on vegetation growth). To support and complement Earth observations of vegetation and biomass dynamics, models are thus used to separate land-use from natural drivers. Here we investigate, in unprecedented regional detail, a fundamental difference between the two most frequently used types of models, namely semi-empirical bookkeeping models and process-based dynamic global vegetation models (DGVMs), which relates to how synergistic terms of land-use and natural effects are treated.
The fLULCC estimates from these two model types are not directly comparable: Bookkeeping models, which are used e.g. for fLULCC estimation in the annual global carbon budget of the Global Carbon Project, rely on static, observation-based carbon densities, and flux estimates are based on response curves characterizing the amount of carbon uptake and removal following land use and land cover changes. In contrast, fLULCC estimated by DGVMs is based on a process-based representation of the vegetation dynamics forced by observed (transient) environmental changes. Such a transient DGVM approach is used for the uncertainty assessment of fLULCC in the Global Carbon Project’s budget.
However, the transient DGVM approach includes the so-called Loss of Additional Sink Capacity (LASC), which accounts for environmental impacts on the carbon stock densities of managed land compared to those of potential vegetation. By contrast, the LASC is not included in bookkeeping models. A comparison of the two model types is nevertheless possible, as DGVMs also enable fLULCC estimation under constant present-day environmental forcing, which is comparable to bookkeeping models using observed carbon densities. Additionally, DGVMs enable fLULCC estimation under constant pre-industrial environmental forcing, which can be used to quantify the LASC.
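Consistent with these definitions, the LASC can be written as the difference between the transient estimate and the estimate under fixed pre-industrial environmental forcing:

\[ \mathrm{LASC} = f\mathrm{LULCC}_{\mathrm{transient}} - f\mathrm{LULCC}_{\mathrm{pre\text{-}industrial}} \]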
To shed light on the performance of the different approaches, this study analyzes the three most common DGVM-derived fLULCC definitions (transient, constant pre-industrial and constant present-day environmental conditions). We quantify differences in the fLULCC estimates, as well as the corresponding climate- and CO2-induced components resulting from environmental flux changes, for 18 regions and using twelve different DGVMs. The global multi-model mean fLULCC of the transient simulations is 2.0±0.6 PgC yr-1 for 2009-2018, of which ~40% stems from the LASC (0.8±0.3 PgC yr-1). The transient fLULCC accumulated from 1850 onward reaches 189±56 PgC, with 40±15 PgC from the LASC.
We detect regional hotspots of high LASC values, particularly in the USA, China, Brazil, Equatorial Africa and Southeast Asia, which can predominantly be linked to massive deforestation for cropland. While the high LASC values in the temperate zone mainly stem from long accumulation periods, the high LASC values in the tropical zone result from mostly more recent deforestation of carbon-dense ecosystems. In contrast, distinct negative LASC estimates are observed in Europe (caused by early reforestation before the start of the simulated period) and, from 2000 onward, in Ukraine (due to recultivation of agricultural land abandoned after the collapse of the Soviet Union). Such negative LASC estimates indicate that fLULCC under transient DGVM simulations is lower than bookkeeping estimates in the respective regions.
Unraveling the strong spatio-temporal variability of the different DGVM-derived fLULCC estimates, this study shows the need for a harmonized attribution of model-derived fLULCC. To bridge the gap in fLULCC estimation between bookkeeping and DGVM approaches, we propose an approach that adopts a mean DGVM-ensemble LASC for a defined reference period. Such a harmonized approach would be spatio-temporally robust, enable a fair attribution of fLULCC, and provide the measures needed to independently validate policy reporting of fLULCC as well as to track progress towards the Global Stocktake.
The implementation of land management is widely included in national climate mitigation strategies as a negative emissions technology. The effectiveness of these land-based mitigation techniques in extracting atmospheric carbon is, however, highly uncertain. The H2020 LANDMARC (Land Use Based Mitigation for Resilient Climate Pathways) project monitors actual land mitigation sites to improve the understanding of their impact on the carbon cycle, and focuses on the development of accurate and cost-effective monitoring techniques. Here we aim to assess the ability of satellite-based solar-induced fluorescence (SIF) observations to quantify the impact of land cover changes on terrestrial gross primary production (GPP), the carbon fixated during photosynthesis.
We use SIF measurements from the European TROPOMI and GOME-2A sensors to monitor GPP dynamics following land cover change. We evaluate the impact of changed land cover on GPP for two distinct case studies, one with (1) an increasing trend in GPP (negative carbon emission) and one with (2) a decreasing trend in GPP (positive carbon emission), by examining the SIF time series over both cases. The positive carbon emission case concerns a massive wildfire in south-east Australia, in which 220 km2 of eucalypt forest burned down from January to February 2019. The negative emission case examines China's large-scale afforestation project, the Three-North Shelterbelt Program (TNSP), which started in the 1980s to combat desertification.
We analysed the TROPOMI SIF signal over the burned and surrounding unburned areas to elucidate the reduction in GPP following the destruction of vegetation in the positive carbon emission case. We detected a strong reduction in SIF (70%) immediately after the fire and smaller reductions in SIF (22%) over the winter period, June-July, when vegetation is mostly dormant. The reduction in the SIF signal was scaled to a loss in GPP via an empirical linear SIF-GPP relation: good agreement (R2 = 0.73) was found between TROPOMI SIF and GPP from a neighbouring flux site located in a similar ecosystem. Overall, we identified a GPP deficit of ~9.05 kg C m-2, or 2 TgC, for the first 10 months after the fire. This deficit is one to two orders of magnitude larger than the anomalies linked to intense summer droughts, indicating the significant long-term effects of local wildfires on the carbon cycle.
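A minimal sketch of this scaling under the stated linear assumption (arrays and values are placeholders, not the actual site data):

import numpy as np

sif = np.array([0.4, 0.8, 1.1, 1.5, 1.9])  # hypothetical satellite SIF, mW m-2 sr-1 nm-1
gpp = np.array([2.1, 4.0, 5.3, 7.6, 9.2])  # matching flux-site GPP, gC m-2 d-1

slope, intercept = np.polyfit(sif, gpp, 1)  # empirical linear SIF-GPP relation
r2 = np.corrcoef(sif, gpp)[0, 1] ** 2       # goodness of fit of the relation

sif_drop = 0.9               # hypothetical burned-minus-unburned SIF reduction
gpp_loss = slope * sif_drop  # scaled GPP loss per unit area and time
print(round(slope, 2), round(r2, 2), round(gpp_loss, 2))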
For the negative carbon emission case, we analyse the long time series of GOME-2A SIF (2007-2020) over the TNSP region. We use statistical data on local afforestation in synergy with the SIF observations and compare yearly and seasonal trends for different sub-regions in the area, in order to reveal the impact of the implemented measures on the regional carbon sink. Large-scale monitoring of different land management strategies and their success rates, especially in difficult dryland areas such as the TNSP region, is an important step in supporting policy makers in the design and upscaling of land mitigation techniques.
The FLUXCOM initiative (www.fluxcom.org) conducted an extensive intercomparison of machine learning models that integrate satellite and in-situ observations for assessing variations of terrestrial carbon fluxes globally. This intercomparison yielded a large ensemble of gridded data products that are used extensively by the scientific community for questions related to biosphere-atmosphere interactions. The results of the FLUXCOM initiative yielded insights into uncertainties and limiting factors related both to the overall approach and to specific choices and implementations, providing a roadmap for making progress in this field.
Here we report on the strategy and first results of FLUXCOM-X, the next generation of the FLUXCOM initiative. We focus on two distinct aspects to improve and accelerate progress of the FLUXCOM approach. Our first goal is to feed more information into the system. Specifically, we strive for the integration and synergistic exploitation of different Earth observation data streams (optical, thermal, microwave, fluorescence) and for an improved and extended basis of in-situ flux tower data. Our second objective is to develop the capacity for rapid experimental cycles by automating data and processing pipelines, ranging from the ingestion of newly acquired eddy covariance and satellite data to the evaluation of the global products. Together, both improvements will enable an efficient exploration of novel methodological and data opportunities, as well as fairly operational updates of the global carbon flux data products. Thus, FLUXCOM-X is not another static intercomparison but a path of experimental cycles with monitored performance that generates a diverse ensemble of products through scientific exploration.
Here we present first results of global gross primary productivity and net ecosystem exchange products at 0.05° spatial and hourly temporal resolution for the period 2001-2020. We assess the progress made and lessons learned so far, based on site-level cross-validation results and on cross-consistency checks of the global carbon flux products against previous FLUXCOM results and independent data streams. We conclude with an outlook on the synergistic integration of FLUXCOM with atmospheric inversion approaches to obtain a unified data-driven approach for monitoring the terrestrial carbon cycle from space.
Methane (CH4) is an important anthropogenic greenhouse gas with a global warming potential 28 times that of carbon dioxide on a 100-year time horizon. Global and regional greenhouse gas budgets can be estimated with various modelling tools, but there are still discrepancies in regional budgets and seasonality, depending on model setups such as the observations used as constraints. Recently, the number of atmospheric measurements from satellites has been increasing rapidly, and their retrieval quality is improving continuously. As these measurements have much higher spatial coverage than ground-based observations, their potential to better constrain the spatial distribution of fluxes is expected to be high. However, the availability of satellite data depends strongly on sunlight and clouds; seasonality may therefore not be constrained as well as with ground-based measurements in regions where high-precision continuous surface data are available, such as Europe.
In this study, we examine the potential of satellite data to constrain CH4 budgets, especially at northern high latitudes (NHL) and in Europe, using the CarbonTracker Europe-CH4 atmospheric inverse model. Fluxes are estimated by constraining the model with three sets of atmospheric CH4 observations: 1) ESA Sentinel-5 Precursor TROPOMI XCH4 retrievals from the SRON operational product, 2) TROPOMI XCH4 retrieved with the WFM-DOAS algorithm, and 3) ground-based observations of surface CH4 from global and regional networks, e.g. ICOS and NOAA. The global CH4 fluxes are estimated for 2018 and analysed by comparing the different setups with each other and with results from the GCP-CH4 multi-model intercomparison study.
The global total CH4 emissions are in good agreement regardless of the assimilated observations. However, the regional budgets, spatial distribution and regional seasonality show differences: NHL wetland CH4 emissions are decreased relative to the prior when satellite data are assimilated, especially in summer. This was consistent regardless of the retrieval product. However, when the surface data are assimilated, the NHL wetland emissions increase relative to the prior.
Photosynthesis is one of the most important mechanisms enabling life on Earth. It is the process by which sunlight is converted into chemical energy, synthesizing sugars from water taken up from the soil and carbon dioxide from the air. This mechanism is fundamental to life because it generates oxygen as a byproduct. Therefore, understanding and observing the global photosynthesis rate is crucial for a better grasp of our climate system and the Earth's carbon cycle.
One commonly used proxy for the photosynthetic activity of plants is solar-induced chlorophyll fluorescence (SIF), a subtle light emission signal around the red and near-infrared wavelengths of the electromagnetic spectrum. Although the dynamic relationship between SIF and the non-photochemical and photochemical quenching mechanisms is still an evolving research topic, SIF has been shown to serve as an in-vivo indicator of photosynthetic activity.
SIF can be measured at the top-of-canopy (TOC) level using tower measurements, or at the airborne and satellite level using remote sensing techniques. To retrieve SIF globally, satellite remote sensing with high-resolution spectrometers is required, since the SIF signal is relatively weak and therefore difficult to separate from the satellite-measured radiance. Several satellite missions currently provide a SIF product at discrete wavelengths or in narrow spectral intervals, such as the TROPOspheric Monitoring Instrument (TROPOMI) on board Sentinel-5 Precursor and the Orbiting Carbon Observatory-2 (OCO-2). In the near future, ESA's FLuorescence EXplorer (FLEX) mission plans to provide spectrally resolved fluorescence spectra as one of its photosynthesis-related mission products.
Current methods to retrieve SIF are statistically based and usually utilize solar Fraunhofer lines to disentangle the SIF signal from the atmospheric and surface contributions to the satellite-measured radiance. The solar Fraunhofer lines are "dark" absorption lines in the solar spectrum that are convenient for discerning the additive SIF signal from the surrounding reflected radiance. In general, any absorption feature, whether telluric or solar in origin, is an advantageous region for disentangling the SIF contribution.
In this work, we have focused on SIF retrieval at the solar lines to reduce, as far as feasible, possible interference with aerosol scattering effects, while keeping open the possibility of expanding the retrieval to the oxygen-A band in a second phase. The proposed retrieval is an adaptation of the Peak Height method, initially developed to exploit the oxygen bands at the TOC level. This methodology exploits the height and shape of the peaks in the apparent surface reflectance generated by the emission of the fluorescence signal. As a proof of concept, we have adapted the Peak Height method to the solar lines (around 750-760 nm) and assessed the performance of the proposed method with a simulated database. Additionally, we will present its potential application to TROPOMI scenes and compare the results with existing SIF products.
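A toy illustration of the principle, with a single Gaussian stand-in for a Fraunhofer line, a flat surface reflectance and no atmosphere (all numbers illustrative, not the actual retrieval):

import numpy as np

wl = np.linspace(750, 760, 1001)  # nm
irr = 1000.0 * (1 - 0.6 * np.exp(-0.5 * ((wl - 755) / 0.05) ** 2))  # irradiance with one dark line
rho, f_true = 0.4, 1.0            # surface reflectance; SIF radiance, mW m-2 sr-1 nm-1
rad = rho * irr / np.pi + f_true  # at-sensor radiance, atmosphere ignored

r_app = np.pi * rad / irr         # apparent reflectance: SIF "fills in" the line, creating a peak
core = np.abs(wl - 755) < 0.15
base = np.polyval(np.polyfit(wl[~core], r_app[~core], 2), wl)
peak_height = (r_app - base)[core].max()

# Invert the peak height back to SIF from the in-line and continuum irradiance levels
f_est = peak_height / (np.pi * (1 / irr[core].min() - 1 / irr[~core].mean()))
print(f_true, round(f_est, 3))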
We know that tropical peatlands are among the most carbon-dense ecosystems globally¹, but their distribution and total below-ground carbon stock remain highly uncertain, with recent estimates of the latter ranging from 105 (70–130) to 215 (152–288) Pg C²,³ (note the non-overlapping 95% confidence intervals). We also know that large areas of tropical peatland have been degraded and drained, with Indonesia being a cautionary tale: in 1997 alone, an estimated 0.81 to 2.57 Pg C, or 13–40% of global annual fossil fuel emissions, were released from Indonesian peatland fires⁴. Whilst 80% of South-East Asian peatlands are already cleared and drained⁵, the known peatland areas of the Amazon and Congo basins are believed to be largely intact³,⁶. Protecting and restoring tropical peatlands can make a significant contribution to limiting CO₂ emissions and global warming, but policy instruments such as REDD+ and wider Nationally Determined Contributions to the Paris Agreement⁷ must be informed by high-resolution maps of peatland distribution and carbon density.
Peru is known to host significant areas of peatland such as the Pastaza-Marañón Foreland Basin (PMFB) which has been estimated to contain 3.14 (0.44–8.15) Pg C (including above and below-ground components)⁸. However, visual examination of remote sensing imagery and published wetland maps⁹ suggest that there could be substantial peatlands in Peru whose distribution and carbon stocks remain unknown. Moreover, maps of even the best quantified peatlands remain highly uncertain, in large part due to a lack of understanding of peat thickness distribution. We also lack any quantitative assessment of land-use induced greenhouse gas emissions in Peruvian peatlands, despite varied and increasing threats to these ecosystems⁶. New legislation in Peru mandates the protection of peatlands for the purposes of climate change mitigation¹⁰, but will require maps of peatland distribution and their disturbance at nationally relevant scales.
In this talk, we present new maps of peat thickness (Fig. 1) and peat carbon distribution across lowland Peruvian Amazonia, amounting to a below-ground stock of 5.4 (2.6–10.6) Pg C. These results are driven by machine learning models which combine the largest database of peat observations ever collected in Peru, with remote sensing imagery including various Sentinel-2 bands and indices. We reveal highly variable peat thickness and substantial new peatland regions in basins such as the Napo, Putumayo and Ucayali. In turn, we apply our maps and national land-cover change data to show small but increasing areas of forest loss, and related CO₂ emissions from peat decomposition.
Our results may be used to inform the implementation of recent Peruvian legislation enacted to reduce GHG emissions¹⁰, and call for the protection of this substantial and relatively intact carbon store to prevent a similar scenario to South-East Asian peatlands.
References
1. Honorio Coronado, E. N., Hastie, A., et al. Intensive field sampling increases the known extent of carbon-rich Amazonian peatland pole forests. Environ. Res. Lett. 16, 74048 (2021).
2. Ribeiro, K. et al. Tropical peatlands and their contribution to the global carbon cycle and climate change. Glob. Change Biol. 27, 489–505 (2021).
3. Dargie, G. C. et al. Congo Basin peatlands: threats and conservation priorities. Mitig. Adapt. Strateg. Glob. Chang. 24, 669–686 (2019).
4. Page, S. E. et al. The amount of carbon released from peat and forest fires in Indonesia during 1997. Nature 420, 61–65 (2002).
5. Mishra, S. et al. Degradation of Southeast Asian tropical peatlands and integrated strategies for their better management and restoration. J. Appl. Ecol. 58, 1370–1387 (2021).
6. Roucoux, K. H. et al. Threats to intact tropical peatlands and opportunities for their conservation. Conserv. Biol. 31, 1283–1292 (2017).
7. Girardin, C.A.J., et al. Nature-based solutions can help cool the planet — if we act now. Nature 593, 191–194 (2021).
8. Draper, F. C. et al. The distribution and amount of carbon in the largest peatland complex in Amazonia. Environ. Res. Lett. 9, 124017 (2014).
9. Hess, L. L. et al. Wetlands of the Lowland Amazon Basin: Extent, Vegetative Cover, and Dual-season Inundated Area as Mapped with JERS-1 Synthetic Aperture Radar. Wetlands 35, 745–756 (2015).
10. MINAM. Decreto Supremo N° 006-2021-MINAM (2021).
The ecosystems of the dry tropics are in flux: the savannas, woodlands and dry forests that together cover a greater area of the globe than rainforests are both a source of carbon emissions due to deforestation and forest degradation, and also a sink due to the enhanced growth of trees. However, both of these processes are poorly understood, in terms of their magnitude and causes, and the net carbon balance and its future remain unclear. This gap in knowledge arises because we do not have a systematic network of observations of vegetation change in the dry tropics, and thus have not, until now, been able to use observations of how things are changing to understand the processes involved and to test key theories.
Satellite remote sensing, combined with ground measurements, offers the ideal way to overcome these challenges, as it can provide regular, consistent monitoring at relatively low cost. However, most ecosystems in the dry tropics, especially savannas, comprise a mixture of grass and trees, and many optical remote sensing approaches (akin to enhanced versions of the sensors on digital cameras) struggle to distinguish changes between the two. Long wavelength radar remote sensing avoids this problem as it is insensitive to the presence of leaves or grass, and also is not affected by clouds, smoke or the angle of the sun, all of which complicate optical remote sensing. Radar remote sensing is therefore ideal to monitor tree biomass in the dry tropics. We have successfully demonstrated that such data can be used to accurately map woody biomass change for all 5 million sq km of southern Africa.
In SECO we will create a network of over 600 field plots to understand how the vegetation of the dry tropics is changing, and complement this with radar remote sensing to quantify how the carbon cycle of the dry tropics has changed over the last 15 years. This will provide the first estimates of key carbon fluxes across all of the dry tropics, including the amount of carbon being released by forest degradation and deforestation and how much carbon is being taken up by the intact vegetation in the region. By understanding where these processes are happening, we will improve our knowledge of the processes involved.
We will use these new data to improve the way we model the carbon cycle of the dry tropics, and test key theories. The improved understanding, formalised into a model, will be used to examine how the dry tropics will respond to climate change, land use change and the effects of increasing atmospheric CO2. We will then be able to understand whether the vegetation of the dry tropics will mitigate or exacerbate climate change, and we will learn what we need to do to maintain the structure of the dry tropics and preserve its biodiversity.
Overall, SECO will allow us to understand how the vegetation of the dry tropics is changing, and the implications of this for the global carbon cycle, the ecology of savannas and dry forests, and efforts to reduce climate change. The data we create, and the analyses we conduct will be useful to other researchers developing methods to monitor vegetation from satellites, and also to those who model the response of different ecosystems to climate and other changes. Forest managers, ecologists and development practitioners can use the data to understand which parts of the world's savannas and dry forests are changing most, and how these changes might be managed to avoid negative impacts that threaten biodiversity and the livelihoods of the 1 billion, mostly poor, rural people who live in this region.
Increasing atmospheric CO2 concentration will have a direct impact on the carbon cycle through the stimulation of photosynthesis. Free Air CO2 Enrichment (FACE) experiments have been used to quantify this ‘fertilisation’ effect under CO2 concentrations anticipated for the middle and later decades of this century. There is increasing evidence that the increase in CO2 to date has enhanced productivity, but attribution to CO2 fertilisation remains a challenge. As the length of the satellite record extends, and new sensors and retrievals develop, satellite observations of the biosphere will be crucial for providing the data for model integration and improving our confidence in quantifying elevated CO2 impacts on the carbon cycle. An important emerging retrieval for studies of photosynthesis is solar induced fluorescence (SIF), which is emitted during photosynthesis and differs from reflectance-based metrics as it provides a measure of activity rather than capacity. Strong empirical relationships are observed between SIF and measures of photosynthesis, and SIF-focussed missions are currently in development for launch in the next few years. However, we still require detailed SIF data from the field to understand and interpret space-based SIF retrievals.
We have collected hyperspectral data from a UAV platform at a FACE experiment in an oak forest in the UK to develop an understanding of the signals associated with elevated CO2 that can be measured from remote sensing platforms. This experiment is unique as it is the only FACE experiment in a mature temperate forest; such ecosystems are responsible for a substantial component of global biosphere carbon sequestration. Using a dual-field-of-view spectrometer system mounted on the UAV, we are able to collect both full VIS-NIR reflectance at high resolution, and very high resolution reflectance in the red-edge region for SIF retrieval, from the forest canopy above each of the treatment arrays (30 m diameter). Early results have indicated that SIF yields (SIF per incoming photosynthetically active radiation) are higher under elevated CO2, which may be attributed to physiological and/or leaf area effects. In this presentation we will present the response of both SIF and other reflectance metrics to elevated CO2 from campaigns that span different times in the season and different seasons, as well as explore changes in spectral features associated with the increase in CO2. We will also place the treatment-level responses in context with the wider forest using hyperspectral data collected during an airborne campaign conducted on the same day as a UAV campaign, as well as data from the longer-term satellite record from Sentinel 2. We will discuss the potential for measuring the impacts of increasing CO2 on temperate forests from space.
Long-term global monitoring of terrestrial Gross Primary Production (GPP) is crucial for assessing ecosystem response to global climate change. In recent decades, great advances have been made in estimating GPP and many global GPP datasets have been published. These datasets are either based on observations from optical remote sensing, are upscaled from in situ measurements, or rely on process-based models. Although these approaches are well established within the scientific community, the resulting datasets nevertheless differ significantly.
Here, we introduce the new VODCA2GPP product (Wild et al., 2021), which utilizes microwave remote sensing estimates of Vegetation Optical Depth (VOD) to estimate GPP at global scale for the period 1988–2020. VODCA2GPP applies a previously developed carbon sink-driven approach (Teubner et al., 2019, 2021) to estimate GPP from the Vegetation Optical Depth Climate Archive (Moesinger et al., 2020), which merges VOD observations from multiple sensors into one long-running, coherent data record. VODCA2GPP was trained and evaluated against FLUXNET in situ observations of GPP and compared against largely independent state-of-the-art GPP datasets from MODIS, FLUXCOM GPP and the TRENDY-v7 process-based model ensemble.
The site-level evaluation with FLUXNET GPP indicates an overall robust performance of VODCA2GPP with only a small bias and good temporal agreement. The comparisons with MODIS, FLUXCOM and TRENDY show that VODCA2GPP exhibits very similar spatial patterns across all biomes but with a consistent positive bias. In terms of temporal dynamics, a high agreement was found for regions outside the humid tropics, with median correlations around 0.75. Concerning anomalies from the long-term climatology, VODCA2GPP correlates well with MODIS and TRENDY-v7 GPP (Pearson’s r: 0.53 and 0.61) but less well with FLUXCOM GPP (Pearson’s r: 0.29). A trend analysis for the period 1988-2019 did not exhibit a significant trend in VODCA2GPP at global scale but rather suggests regionally different long-term changes in GPP. For the shorter overlapping observation period (2003-2015) of VODCA2GPP, MODIS GPP, and the TRENDY-v7 ensemble, significant increases of global GPP were found. VODCA2GPP can complement existing GPP products and is a valuable dataset for the assessment of large-scale and long-term changes in GPP for global vegetation and carbon cycle studies. The VODCA2GPP dataset is freely accessible at TU Wien Research Data (https://doi.org/10.48436/1k7aj-bdz35; Wild et al., 2021).
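The anomaly comparison described above can be sketched as follows (a minimal sketch with assumed variable names, not the VODCA2GPP processing code): the mean seasonal cycle is removed from each monthly GPP series, and the remaining anomalies of two products are correlated.

```python
import numpy as np

def monthly_anomalies(gpp, months):
    """Remove the long-term monthly climatology from a monthly GPP series.

    gpp:    monthly GPP values (may contain NaN)
    months: calendar month numbers 1-12, same length as gpp
    """
    gpp = np.asarray(gpp, dtype=float)
    months = np.asarray(months)
    anom = np.empty_like(gpp)
    for m in range(1, 13):
        sel = months == m
        anom[sel] = gpp[sel] - np.nanmean(gpp[sel])  # subtract month mean
    return anom

# Pearson r between anomalies of two products (e.g. VODCA2GPP vs MODIS):
# valid = ~np.isnan(anom_a) & ~np.isnan(anom_b)
# r = np.corrcoef(anom_a[valid], anom_b[valid])[0, 1]
```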
Moesinger, L., Dorigo, W., de Jeu, R., van der Schalie, R., Scanlon, T., Teubner, I., and Forkel, M., 2020: The global long-term microwave Vegetation Optical Depth Climate Archive (VODCA). Earth Syst. Sci. Data, 12, 177–196, https://doi.org/10.5194/essd-12-177-2020.
Teubner, I. E., Forkel, M., Camps-Valls, G., Jung, M., Miralles, D. G., Tramontana, G., van der Schalie, R., Vreugdenhil, M., Mösinger, L. & Dorigo, W., 2019: A carbon sink-driven approach to estimate gross primary production from microwave satellite observations, Remote Sens. Environ., 229, 100–113, https://doi.org/10.1016/j.rse.2019.04.022.
Teubner, I. E., Forkel, M., Wild, B., Mösinger, L. & Dorigo, W., 2021: Impact of temperature and water availability on microwave-derived gross primary production. Biogeosciences, 18, 3285–3308, https://doi.org/10.5194/bg-18-3285-2021.
Wild, B., Teubner, I., Moesinger, L., Zotta, R., Forkel, M., van der Schalie, R., Sitch, S., Dorigo, W., 2021: VODCA2GPP – A new global, long-term (1988–2020) GPP dataset from microwave remote sensing. Earth Syst. Sci. Data Discuss. [preprint] https://doi.org/10.5194/essd-2021-209, in review, 2021.
Accurate quantification of gross primary productivity (GPP) is critical to understand the global carbon cycle, and how ecosystem primary productivity might respond to climate change. However, terrestrial GPP is viewed as the largest and most uncertain portion of the global carbon cycle. Its estimation at regional to global scale is still a challenge due to the high variability of GPP across space and time, and to the limited understanding of GPP drivers at all spatial and temporal scales.
The recent increase in availability of the complementary Copernicus Sentinel data (S-2, S-3, and S-5p) and products (e.g. OGVI-FAPAR, OTCI, SIF, etc.) at high spatial and temporal resolutions offers a new opportunity to quantify the dynamics of terrestrial ecosystem primary productivity with unprecedented detail. Therefore, the Sen4GPP project aims to develop algorithms that can synergistically exploit data from Copernicus Sentinel missions in order to better characterise GPP in space and time. A parallel objective is to determine the informational content brought by each Copernicus Sentinel mission on the GPP estimates, relative to their spatio-temporal resolutions and coverage and to their constraint on the biogeochemical processes controlling gross carbon uptake by terrestrial ecosystems.
Three different approaches are considered for the estimation of GPP in the Sen4GPP project: 1) Light Use Efficiency (LUE) models, based on the concept that ecosystem GPP is a function of the amount of photosynthetically active radiation (PAR) intercepted by a canopy, the fraction of that PAR actually absorbed by the canopy, and interacting environmental stress factors; 2) a SIF-based approach, as it has recently been demonstrated that SIF and GPP hold a strong linear relationship at the daily-to-weekly, ecosystem-scale sampling of satellite remote sensing data; and 3) a machine learning approach, the most data-adaptive method by design, which extracts the functional relationships from observations (in situ and EO) at site level. A sketch of the first approach is given below.
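As a hedged illustration of the LUE concept named in approach 1 (a generic formulation; the parameter values and ramp functions are placeholders, not the Sen4GPP calibration):

```python
import numpy as np

def gpp_lue(par, fapar, tmin, vpd,
            eps_max=1.0, tmin_ramp=(-8.0, 11.0), vpd_ramp=(650.0, 4000.0)):
    """Generic LUE-type GPP estimate: GPP = eps_max * f(Tmin) * f(VPD) * fAPAR * PAR.

    par:   daily PAR [MJ m-2 d-1]; fapar: fraction of absorbed PAR [0-1]
    tmin:  daily minimum temperature [deg C]; vpd: vapour pressure deficit [Pa]
    eps_max [gC MJ-1] and the ramp limits are placeholder values.
    Returns GPP in gC m-2 d-1.
    """
    # Linear ramp stress scalars, clipped to [0, 1]
    f_t = np.clip((tmin - tmin_ramp[0]) / (tmin_ramp[1] - tmin_ramp[0]), 0, 1)
    f_vpd = np.clip((vpd_ramp[1] - vpd) / (vpd_ramp[1] - vpd_ramp[0]), 0, 1)
    return eps_max * f_t * f_vpd * fapar * par
```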
In this contribution we will present the status of the project, and in particular the first results from the implementation of the different GPP estimation approaches.
The new generation of satellite missions from ESA has opened new opportunities to understand the complex dynamics of the Earth system. Specifically, the new red-edge bands from Sentinel-2 can improve gross primary production (GPP) prediction at the regional and global scale. In this contribution, we will present how the optical information and vegetation indices (VIs) retrieved from Sentinel-2 can be used to predict GPP. We compiled 2636 images for 58 eddy covariance sites (2015-2018) that cover a broad geographical (latitudes 34.3° to 67.8°) and biome range (croplands, deciduous broadleaf forests, evergreen needleleaf forests, grasslands, mixed forests, open shrublands, savannas, and wetlands). We computed several VIs, including red-edge vegetation indices such as the chlorophyll index red (CIR), other VIs such as the normalized difference vegetation index (NDVI) and the Near-Infrared Reflectance of vegetation (NIRv), as well as the novel kNDVI. We then compared the performance of each index in predicting the GPP derived from the eddy covariance towers, using linear regressions in a cross-validation scheme that avoids spatio-temporal auto-correlation. Furthermore, we explored how much the prediction of GPP improves using machine learning techniques that consider VIs and spectral bands. Finally, as the differing number of observations per vegetation type impacts the prediction of GPP, we explored how various dataset balancing techniques can improve the prediction (a high frequency of observations for a certain vegetation type can bias the model, underrepresenting other vegetation types). Using linear regressions based on NIRv, we achieved a predictive power of R² (10-fold) = 0.56 with RMSE (10-fold) = 2.75 μmol CO2 m−2 s−1. Using CIR and kNDVI, we achieved significantly higher predictive power, up to R² (10-fold) ≈ 0.6, with a lower RMSE (10-fold) ≈ 2.6 μmol CO2 m−2 s−1. Using spectral bands and VIs jointly in a machine learning prediction framework, we improved GPP prediction to R² (10-fold) = 0.71 and RMSE (10-fold) = 2.23 μmol CO2 m−2 s−1. We also found that balancing techniques improve the prediction of GPP and need to be considered for future upscaling exercises. The proposed approach can estimate GPP at a level of accuracy comparable to previous works which, however, required additional meteorological drivers with their associated uncertainty. The presented approach opens new possibilities to predict GPP at high spatial resolutions across the globe from Sentinel-2 data only.
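For concreteness, the three headline indices can be computed from red and NIR reflectance as in the sketch below (the band choice B4/B8 and the closed-form kNDVI = tanh(NDVI²) are assumptions based on the common published definitions):

```python
import numpy as np

def vegetation_indices(red, nir):
    """NDVI, NIRv and kNDVI from Sentinel-2 surface reflectance.

    red, nir: reflectance arrays, e.g. Sentinel-2 bands B4 and B8 (or B8A).
    The kNDVI closed form tanh(NDVI^2) follows Camps-Valls et al. (2021).
    """
    ndvi = (nir - red) / (nir + red)
    nirv = ndvi * nir              # Near-Infrared Reflectance of vegetation
    kndvi = np.tanh(ndvi ** 2)     # kernel NDVI, simple closed form
    return ndvi, nirv, kndvi
```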
Wildfires represent one of the major causes of ecosystem disturbance and ecological damage. Besides influencing atmospheric chemistry and air quality through emitted greenhouse gases and the presence of aerosol in the atmosphere, they change land surface properties, causing loss of vegetation and impacts on the forestry and local agricultural economies. Accurate knowledge of the location and extent of a burned area (BA) is important for damage assessment and for monitoring vegetation restoration.
The present availability of Sentinel-2 (S2) multispectral data every 5 days over the same target area represents a unique opportunity to systematically produce BA maps at medium-high spatial resolution (20 m, which is the resolution of the SWIR bands). Several investigations have demonstrated the suitability of S2 to detect BA. Continuous and systematic processing of S2 data potentially allows researchers to build a complete record of BAs, useful for deriving statistics about the impact of forest fires during the fire season. BA databases are available, for instance, through the European Forest Fire Information System (EFFIS), whose Rapid Damage Assessment (RDA) module maps BAs by analysing MODIS and VIIRS data at spatial resolutions coarser than that of S2 (although presently EFFIS-derived BAs are also visually verified using S2 images).
In this study, a BA record for the 2019-2021 fire seasons (June 1st - September 30th), derived from S2 and ancillary data, is presented. It was produced, for Italy, by taking advantage of a fully automatic processing chain based on the AUTOmatic Burned Areas Mapper (AUTOBAM) tool proposed in Pulvirenti et al. (2020). AUTOBAM is an automated processor conceived for near real-time (NRT) mapping of BA using S2 data. To generate the BA record, S2 data are complemented by ancillary data, namely MODIS-derived and VIIRS-derived active fire products, as well as by fire notifications. Italy was chosen because the AUTOBAM tool was originally designed to respond to a request by the Italian Department of Civil Protection (DCP) regarding a systematic mapping of BAs at medium-high spatial resolution. Moreover, notifications from the firefighting fleet belonging to the Joint Air Operating Centre (coordinated by DCP) and from the Unified Permanent Fire Protection Unit (provided by regional institutions) are available in NRT in Italy. Finally, burn perimeters derived from local surveys done by the Carabinieri Command of Units for Forestry, Environmental and Agri-food Protection are also available for validation purposes.
AUTOBAM uses level 2A (L2A) surface reflectance products in order to work with data corrected for atmospheric effects and to take advantage of the availability of a scene classification map, which is useful to mask clouds, snow, and water bodies. As soon as new L2A products are available through the Copernicus Open Access Hub, they are automatically downloaded and processed. The processing firstly computes three spectral indices, namely the Normalized Burn Ratio (NBR), the Normalized Burned Ratio 2 (NBR2), and the Mid-Infrared Burned Index (MIRBI). These indices are defined as:
\mathrm{NBR} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{SWIR}_L}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{SWIR}_L}} \quad (1)

\mathrm{NBR2} = \frac{\rho_{\mathrm{SWIR}_S} - \rho_{\mathrm{SWIR}_L}}{\rho_{\mathrm{SWIR}_S} + \rho_{\mathrm{SWIR}_L}} \quad (2)

\mathrm{MIRBI} = 10\,\rho_{\mathrm{SWIR}_L} - 9.8\,\rho_{\mathrm{SWIR}_S} + 2 \quad (3)
Then, AUTOBAM applies a change detection approach that compares, pixel by pixel, the values of the indices at the current time with the values derived from the most recent cloud-free S2 data. By default, the latter are acquired 5 days before the current ones (corresponding to the S2 revisit time), but cloud cover may lengthen the time between the acquisitions. Pixels covered by clouds are masked out. BA mapping is performed using image processing techniques such as clustering, automatic thresholding and region growing. Output maps are finally resampled to a common grid whose pixel size is 20 m.
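A minimal sketch of this core step is given below (the Sentinel-2 band mapping B8A/B11/B12 and the example threshold are assumptions; AUTOBAM derives its thresholds automatically and refines candidates by clustering and region growing):

```python
import numpy as np

def burn_indices(b8a, b11, b12):
    """Spectral indices (1)-(3) from Sentinel-2 L2A reflectance arrays.

    b8a: NIR; b11: short SWIR (SWIR_S); b12: long SWIR (SWIR_L).
    """
    nbr = (b8a - b12) / (b8a + b12)
    nbr2 = (b11 - b12) / (b11 + b12)
    mirbi = 10.0 * b12 - 9.8 * b11 + 2.0
    return nbr, nbr2, mirbi

# Change detection sketch: difference the index against the most recent
# cloud-free scene and flag candidate burned pixels.
# nbr_ref, _, _ = burn_indices(b8a_ref, b11_ref, b12_ref)   # reference scene
# nbr_cur, _, _ = burn_indices(b8a_cur, b11_cur, b12_cur)   # current scene
# d_nbr = nbr_ref - nbr_cur                                  # drop after fire
# candidates = (d_nbr > 0.2) & ~cloud_mask   # 0.2 is a placeholder threshold
```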
To generate a BA record, omission errors due to clouds or smoke are not a major problem, because a missed BA can be detected using one of the subsequent S2 acquisitions over the same area (AUTOBAM systematically processes all the S2 data whose cloud cover is less than 50%). Conversely, commission errors due to clouds not perfectly detected in the L2A data, or to changes not related to fires (e.g., due to agricultural activities like harvesting), represent a critical aspect. To deal with commission errors, each BA includes three quality flags related to 1) the presence of an active fire according to MODIS; 2) the presence of an active fire according to VIIRS; 3) a fire notification. As for points 1) and 2), MODIS and VIIRS active fire data are systematically acquired and resampled to the common grid mentioned before (nearest neighbour). A buffer zone, with buffering distance corresponding to half of the pixel size of the active fire data, is created around each active fire point. If the BA overlaps the buffer zone, the corresponding quality flag assumes a positive value (otherwise it is 0). The value of the flag depends on the time difference between the S2 acquisition from which the BA is detected and the MODIS/VIIRS acquisition from which the active fire was detected. A maximum difference of 30 days is admitted. A similar procedure is applied for the notifications; in this case a nearest neighbour approach is used to transform the coordinates of a reported fire into points of the common grid and then a buffer zone of 500 m is created to verify the overlap with the S2-derived BAs. Only BAs with at least one quality flag >0 are selected to build the BA record.
The processing chain described above was applied to all the S2 observations of Italy in the period June 1st - September 30th of the years 2019-2021. For 2019-2020, the AUTOBAM-derived BA record was compared to the burn perimeters derived from local surveys to verify their reliability (fire perimeters for 2021 are not yet available). For this purpose, the perimeters were also resampled to the common grid. The burn perimeters were required to overlap an AUTOBAM-derived BA; the size of the overlapping area was required to exceed 20% of both the AUTOBAM-derived BA and the area included in each perimeter. Burn perimeters < 1 ha were excluded. It was found that AUTOBAM was able to detect about 75% of the burn perimeters; 60% of the burn perimeters had at least one quality flag >0. This outcome indicates that the proposed method, based on the use of the AUTOBAM processor, has the potential to generate a BA record.
South America is home to some of the world’s most important ecosystems, such as the Amazon, Cerrado, and Chiquitania forests. At the same time, it is a region of massive land conversion for the sake of increased production of commodities consumed globally. Across South America, the expansion of commodity land uses has underpinned substantial economic development at the expense of natural land cover and associated ecosystem services. In this paper, we show that such human impact on the continent’s land surface, specifically land use conversion and natural land cover modification, expanded by 268 million hectares (Mha), or 60%, from 1985 to 2018. By 2018, 713 Mha, or 40%, of the South American landmass was impacted by human activity. This is equivalent to 21.6 soccer fields of natural land cover being impacted by human activity every minute for 34 years. Changes in land cover of this magnitude have important consequences to climate at regional and global scales by altering fluxes of energy, water, and greenhouse gas emissions. Since 1985, the area of natural tree cover decreased by 16%, and pasture, cropland, and plantation land uses increased by 23, 160, and 288%, respectively. Low-intensity, low-productivity pastureland replacing natural vegetation and the widespread phenomenon of cropland replacing pastureland are two important dynamics that reflect the overall intensification of land use across South America. Beyond intensive land uses, a substantial area of disturbed natural land cover, totaling 55 Mha, had no discernable land use, representing land that is degraded in terms of ecosystem function but not economically productive. This long-lasting transitional land category may be associated with land speculation or land-tenure establishment. Monitoring natural land cover from initial disturbance to its final land use outcome is necessary to better understand land use pathways and to fully account for associated greenhouse gas emissions. Results presented here illustrate the extent of ongoing human appropriation of natural ecosystems in South America, which intensifies threats to ecosystem-scale functions. Such data, associated with emissions factors, can facilitate national greenhouse gas accounting efforts.
Fires in the tropics are driven by climate and land-use change. In the Amazon, fires are linked to biomass burning following deforestation, while degradation fires are caused by extreme droughts. Earth system models predict an increase in the intensity of dry seasons in this region in the 21st century. Therefore, carbon emissions from drought-induced fires have the potential to counteract pledged reductions of deforestation in the next decades, yet they are not included in national carbon emission estimates. Further, air pollution caused by fires has been linked to seasonal upturns in respiratory diseases affecting the population in fire-prone areas of Brazil. Against the backdrop of the current COVID-19 pandemic, air pollution can potentially increase the risks of hospitalisations and mortality.
Improved assessments of fire emissions and their impact on air quality are therefore of high importance. Spatially specific estimations of fire emissions are made possible through a range of satellite products that are now available. We employ a remote sensing approach using observations of burned area and static biomass maps from the ESA CCI project to derive woody dry matter burned as a biome-specific function of unburned biomass, and combine these with existing estimates of grassland and crop residue fuel consumption. Dry matter burned is converted to emissions using a database of available emission factors. Based on this methodology we present initial estimates of dry matter burned and trace gas emissions for the entire Amazon basin and the Brazilian Cerrado at monthly intervals and compare our estimates to those of the global fire emissions database (GFED4). This allows us to identify areas of uncertainty in current emission estimates and present alternative workflows for generating improved regional products. These products will further be used to improve greenhouse gas budgets and to study effects on human health and ecosystem services.
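The bookkeeping behind such estimates follows the standard Seiler and Crutzen scheme: dry matter burned from burned area, available fuel and combustion completeness, then species emissions via emission factors. A minimal sketch (all numeric values are illustrative placeholders, not the products described above):

```python
def fire_emissions(burned_area_ha, fuel_load_t_ha, combustion_completeness,
                   emission_factor_g_kg):
    """Return trace gas emissions in tonnes for one burned patch.

    dry matter [t] = area [ha] * fuel load [t/ha] * combustion completeness
    emissions [t]  = dry matter [kg] * emission factor [g/kg] / 1e6
    """
    dry_matter_t = burned_area_ha * fuel_load_t_ha * combustion_completeness
    return dry_matter_t * 1000.0 * emission_factor_g_kg / 1e6

# Example: 100 ha of grassland, 5 t/ha fuel, 90% combusted, and a CO2
# emission factor of ~1680 g per kg dry matter burned -> ~756 t CO2.
co2_tonnes = fire_emissions(100, 5.0, 0.9, 1680.0)
```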
There is an urgent need to catalyze economic incentives towards the regeneration of the planet and to monitor changes in the ecological state of ecosystems in a reliable, approachable and scalable way.
Soil organic carbon (SOC) sequestration in regeneratively managed rangelands can provide much needed contributions to the global carbon drawdown. However, methods to track changes in SOC over time often rely on either (1) intensive soil sampling, which often proves cost-prohibitive for land stewards, or (2) biogeochemical models that need local calibration (not always available) and typically lack uncertainty estimates. In an attempt to overcome these limitations and create a cost-effective approach to SOC stock estimation, we designed an open source methodology which uses statistical models to uncover correlations between Sentinel-2 satellite imagery and ground truth data to estimate soil carbon at unsampled locations. Through this methodology, the calibration of an image becomes possible when a significant correlation is found between the spectral values of the image at the sampled locations and the SOC concentrations, within a few months around the sampling date. Then, SOC% maps are generated and converted into stocks, using bulk density measurements. Finally, the changes in time of the carbon stocks are estimated from the difference between stocks from consecutive sampling rounds. The final creditable carbon change is the result of the change in the SOC stocks minus the GHG emissions from the cattle for the crediting period. An uncertainty discount is also applied if uncertainty is higher than 20%. In addition to a quantification of the carbon stock changes in time, the methodology includes an estimation of several co-benefits (soil health, ecosystem health and animal welfare) that help expand the analysis beyond solely carbon.
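The SOC%-to-stock conversion mentioned above follows the standard per-layer formula; a minimal sketch with illustrative numbers (not the project's exact code):

```python
def soc_stock_t_ha(soc_percent, bulk_density_g_cm3, depth_cm):
    """Convert a SOC concentration to a stock (t C / ha) for one soil layer.

    stock [t C/ha] = SOC% / 100 * bulk density [g/cm3] * depth [cm] * 100
    """
    return soc_percent / 100.0 * bulk_density_g_cm3 * depth_cm * 100.0

# Example: 1.2 % SOC, bulk density 1.3 g/cm3, 0-30 cm layer -> ~46.8 t C/ha
stock = soc_stock_t_ha(1.2, 1.3, 30.0)
```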
The method was used to estimate the annual changes in the SOC stocks and co-benefits at three rangelands under prescribed grazing located in New South Wales, Australia. Correlations were explored through linear and power regressions, as well as machine learning algorithms (e.g. Random Forest regression). The results with the highest accuracy were used to issue carbon credits that were sold on the voluntary carbon markets, and the data were made public for others in the science community to test and explore other statistical or geospatial modeling techniques.
We conclude that there is potential to leverage satellite remote sensing technology to measure changes in carbon stock over time in combination with significantly fewer sample points for training and verification than required for conventional carbon stock mapping. Yet we recognize the need to assess the strengths and limitations of this nascent technology by testing it across a variety of different environmental conditions and at different spatial and temporal scales. We expect this methodology to be widely tested and upgraded by the scientific community. Our foremost goal is to inspire and guide efforts in the development of high quality methods that leverage the best of technology and scientific knowledge in service to reliable carbon accounting.
Intensifying wildfires in high-latitude forest and tundra ecosystems are a major source of greenhouse gas emissions, releasing carbon through direct combustion and long-term degradation of permafrost soils and peatlands. Several remotely sensed burned area and active fire products have been developed, yet these do not provide information about the ignitions, growth and size of individual fires. Such object-based fire data is urgently needed to disentangle different anthropogenic and bioclimatic drivers of fire ignition and spread. This knowledge is required to better understand contemporary arctic-boreal fire regimes and to constrain models that predict changes in future arctic-boreal fire regimes.
Here, we developed an object-based fire tracking system to map the evolution of arctic-boreal fires at a sub-daily scale. Our approach harnesses the improved spatial resolution of 375m Visible Infrared Imaging Radiometer Suite (VIIRS) active fire detections. The arctic-boreal fire atlas includes ignitions and daily perimeters of individual fires between 2012 and 2021, and may be complemented in the future with information on waterbodies, unburned islands, fuel types and fire severity within fire perimeters.
Development of Earth Observation tools with the capacity to verify extraction of CO2 from the atmosphere has high potential to help companies and societies develop carbon-free products, and to create incentives that make it possible for farmers to adopt new types of carbon farming practices more quickly.
The presented project has studied three large cereal-producing farms in southern Sweden, applying a long-term farm perspective. Combined measurements from Sentinel data of soil organic carbon (SOC) in topsoils and vegetation indices (NDVI) have been used to investigate how the relationship between increased biomass production and possible improvement of soil organic carbon appears at field and farm level.
Recent research has shown which changed farming practices have the potential to increase carbon storage in soils. This research indicates that long-term crop rotations with two to three years of ley, and the cultivation of peas and other nitrogen-fixing crops in between, are favourable for increasing carbon storage. In this paper we present two farms which have changed their practices after 2017 and one farm which uses conventional best practice.
Sentinel-2 data are used, and we found it possible to obtain 20-30 cloud-free images per year over the studied farms; only 1-2 images per year show bare soils. This means we have good data provisioning for the NDVI measurements, while the SOC estimates need to rely on very few temporal data points. Mean values of NDVI and SOC for each individual field are calculated and used to construct a soil organic carbon curve and a vegetation growth curve spanning 5 years.
Combining the indices produced from satellite images with profiles of differences in crop rotation for each field and physical soil samples, multivariate analysis is used to investigate the robustness of the relationship between SOC and NDVI.
Detailed results are presented and further developments are proposed.
Observations of upper atmospheric neutral mass density and wind are critical to understand the coupling mechanisms between Earth’s ionosphere, thermosphere, and magnetosphere. The ongoing Swarm DISC (data, innovation, and science cluster) project TOLEOS (thermosphere observations from low-Earth orbiting satellites) aims to provide accelerometer-derived neutral mass density and crosswind data from CHAMP, GRACE, and GRACE-FO satellite missions covering a time span of approximately 22 years. The project uses state-of-the-art models, calibration techniques, and processing standards to improve the accuracy of these data products and ensure inter-mission consistency. Here, we present preliminary results of the quality of the data in comparison to the high accuracy drag temperature model DTM2020 and physics-based TIE-GCM (thermosphere ionosphere electrodynamics general circulation model), and CTIPe (coupled thermosphere ionosphere plasmasphere electrodynamics) models. We present, for the first time, a comparison of GRACE and GRACE-FO neutral mass densities with ESA’s Swarm mission during a few time periods where the orbital planes of the satellites align with each other. The study also provides a comparison of these new neutral mass densities and neutral winds across multiple periods with vastly different solar and geomagnetic activities.
Topside Ionosphere Radio Observations from multiple Low Earth Orbiting (LEO) missions (TIRO) is a project in ESA’s Swarm Data, Innovation, and Science Cluster (DISC) framework. TIRO extends Swarm Total Electron Content (TEC) products with data from other LEO satellites and provides high-accuracy topside TEC from dual-frequency GPS receivers onboard the CHAMP (2000-2010), GRACE (2002-2017), and GRACE Follow-On (since 2018) missions. Special emphasis is placed on ensuring maximum consistency with the operationally derived data sets of the Swarm and GOCE missions to allow for direct comparison. Moreover, GRACE and GRACE-FO are equipped with a K-Band inter-satellite Ranging system (KBR), which in turn is used to derive an estimate of the in-situ electron density. With all the satellites considered, altitude regions from as low as 250 km (GOCE) up to nearly 500 km (GRACE-FO) are covered.
The additional data ensure continuous electron density and TEC observations from multiple LEO satellites spanning a period of almost two full solar cycles. Given the overlaps between the different satellite missions, the constellation aspect achieved by the multi-mission coordination for monitoring ionospheric phenomena can be exploited. We will present both climatological studies of TEC and electron density, and short-term variations that can only be accessed by constellations. By this, we will illustrate the consistency and sensitivity of the newly derived data set.
Among Space Weather effects, the degradation of air traffic communications and satellite-based navigation systems is the most notable. For this reason, it is of utmost importance to understand the nature and origin of the ionospheric irregularities that underlie the observed communication outages. Here we focus on polar cap patches (PCPs), which constitute a special class of ionospheric irregularities observed at very high latitudes in the F region. To this purpose, we use the so-called PCP flag, a Swarm L2 product that allows locating PCPs. We relate the presence of PCPs to the values of the first- and second-order scaling exponents estimated from Swarm A electron density fluctuations and to the values of the Rate Of change of electron Density Index (RODI).
The results of our analysis, covering a time interval of approximately 3.5 years since the 1st of July 2014, show that values of RODI and of the first- and second-order scaling exponents corresponding to measurements taken inside PCPs, are clearly different from those corresponding to measurements outside PCPs. Moreover, the values of the first- and second-order scaling exponents suggest the turbulent nature of PCPs.
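For reference, the scaling exponents used here can be estimated from structure functions of the electron density series. A minimal sketch (2 Hz sampling from the Swarm Langmuir probe is assumed; the lag set is a placeholder):

```python
import numpy as np

def scaling_exponents(ne, dt=0.5, lags=(1, 2, 4, 8, 16), orders=(1, 2)):
    """First- and second-order scaling exponents from structure functions.

    S_q(tau) = <|Ne(t + tau) - Ne(t)|^q>; the exponent gamma_q is the slope
    of log S_q versus log tau. dt = 0.5 s corresponds to 2 Hz sampling.
    (RODI, by contrast, is typically the standard deviation of dNe/dt
    computed in a sliding window, e.g. 10 s.)
    """
    exponents = {}
    for q in orders:
        s_q = [np.nanmean(np.abs(ne[k:] - ne[:-k]) ** q) for k in lags]
        tau = np.array(lags) * dt
        exponents[q] = np.polyfit(np.log(tau), np.log(s_q), 1)[0]
    return exponents
```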
This work is supported by Italian PNRA under contract PNRA18_00289-A “Space weather in Polar Ionosphere: the Role of Turbulence".
The Global Positioning System (GPS) Attitude, Positioning, and Profiling (GAP) instrument is one of eight components of the scientific instrument suite onboard the Swarm-E satellite (previously CASSIOPE/e-POP). The Swarm-E instrument suite was designed primarily to study the physical processes coupling the polar ionosphere to the solar wind and magnetosphere, and the ionospheric structure and dynamics associated with this coupling. The GAP instrument consists of three GPS antennas oriented towards spacecraft zenith and one antenna oriented in the anti-ram direction, along with associated GPS receivers. This configuration allows for both radio occultation and topside ionosphere measurements, which are collected at data rates of up to 100 Hz. The elliptical, polar orbit of Swarm-E and high data rate of GAP allow for unique radio occultation and topside ionosphere observations, which is particularly useful for polar regions where much of the ionospheric structure and dynamic behaviour associated with solar wind-magnetosphere-ionosphere-thermosphere (SW-M-I-T) coupling is not well observed or understood.
The presentation will discuss recent reprocessing of GAP data and ongoing research projects employing the GAP dataset. Eight years of GAP data (starting in September 2013) have been reprocessed, with calibrated line-of-sight total electron content (TEC) currently available on the e-POP data website (https://epop.phys.ucalgary.ca/data/). Higher level products for topside vertical TEC and electron density profiles will be available in the near future. Topside TEC measurements of GAP are currently used to observe the topside electron content in the polar regions, including the statistical study of high altitude (>1000 km) topside TEC enhancements. Concurrent observations of the Imaging and Rapid-scanning ion Mass spectrometer (IRM) of Swarm-E may provide insight into possible plasma upflow/downflow associated with these enhancements. Also ongoing are statistical studies of ionospheric plasma structures with spatial scales from hundreds of kilometers down to sub-kilometer. This includes analysis of topside irregularities using the zenith-oriented GAP receivers, as well as observation of the vertical structure of irregularities using the GAP occultation receiver. The climatology of observed irregularities, including links to solar wind and geomagnetic activity levels, will be discussed.
The electron temperature observations taken by the Swarm constellation often show spikes and/or time series characterized by fluctuations and very high values, well above the expected ionospheric background. Different “families” of such occurrences can be recognized: one family of spikes most likely constitutes an artifact due to a combination of instrumental and local environmental effects and it affects specific portions of orbits in particular conditions when the solar panels are illuminated by the Sun; another family of high temperature values is instead typical of high latitudes and nocturnal local times, often associated with very low values of the electron density. In this study, we aim at selecting and characterizing a number of events of this second family, looking also at other parameters measured by Swarm satellites at the same time, such as field-aligned currents density and local plasma velocity.
The Radio Receiver Instrument (RRI) on the Enhanced Polar Outflow Probe (e-POP; also known as Swarm-E) has been delivering high-quality and insightful measurements of natural and artificial radio emissions from low-Earth orbit since November 2013. RRI is a digital radio receiver which can operate between 10 Hz and 18 MHz, sampling at a rate of 62.5 kHz. To date, RRI has performed over a thousand measurements, the majority of which are divided between observations in the Very Low Frequency (3 – 30 kHz) and High Frequency (3 – 30 MHz) portions of the radio spectrum.
In this presentation, we will provide an update on RRI’s scientific activities. We will give a high-level overview of recently published results, measurement campaigns, and ongoing scientific efforts. In particular, we will discuss the methodology and outcomes of RRI’s HF eclipse observation campaign, which will take place in the weeks around the December 4, 2021, total solar eclipse in the southern hemisphere. In that campaign, RRI will target a ground-based HF transmitter located in Antarctica to study the effects of the eclipse on the coupled ionosphere-thermosphere system.
We will also discuss the progress on a multi-year RRI data analysis project to study HF scintillation at high latitudes in the Canadian sector. The majority of RRI’s HF operations have been organized experiments between RRI and the Super Dual Auroral Radar Network (SuperDARN) systems located at Saskatoon, Rankin Inlet, and Clyde River (all in Canada). The project goals are to use RRI data from the SuperDARN experiments to specify the nature of HF scintillation in the region; diagnose scintillation caused by ionospheric irregularities and distinguish it from scintillation resulting from HF radio propagation effects; identify the geophysical phenomena responsible for HF scintillation; and ascertain the relationship (if any) between HF scintillation and backscatter measured by the SuperDARN systems.
Characterising the ionospheric electron density (Ne) and temperature (Te) is fundamental to study the physical and dynamical properties of the ionospheric plasma. Indeed, in a collisional inhomogeneous plasma crossed by electric and magnetic fields, plasma constituents densities and temperatures significantly affect the plasma distribution function f(r,v) in the phase space (r,v). The Langmuir Probes on board the European Space Agency Swarm satellites, providing in-situ simultaneous observations of both Ne and Te at 2-Hz rate, offer the valuable opportunity to investigate some properties of the topside ionospheric plasma in a very detailed way, thanks to the wide dataset currently available covering different spatial, diurnal, seasonal, and solar activity conditions. In this study, Ne and Te observations collected by Swarm satellites in the period 2014 - 2021 are used to highlight the main statistical properties of their correlation. Pearson correlation coefficient values are calculated and binned as a function of the magnetic Quasi-Dipole latitude and Magnetic Local Time coordinates, for different geophysical conditions, and the corresponding results are shown as maps.
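The binning described above can be sketched as follows (bin widths and the minimum sample count are illustrative assumptions, not the study's settings):

```python
import numpy as np

def binned_ne_te_correlation(ne, te, qd_lat, mlt, lat_step=5.0, mlt_step=1.0):
    """Pearson correlation of Ne and Te binned in QD latitude and MLT."""
    lat_edges = np.arange(-90.0, 90.0 + lat_step, lat_step)
    mlt_edges = np.arange(0.0, 24.0 + mlt_step, mlt_step)
    r = np.full((lat_edges.size - 1, mlt_edges.size - 1), np.nan)
    for i in range(lat_edges.size - 1):
        for j in range(mlt_edges.size - 1):
            sel = ((qd_lat >= lat_edges[i]) & (qd_lat < lat_edges[i + 1]) &
                   (mlt >= mlt_edges[j]) & (mlt < mlt_edges[j + 1]))
            if sel.sum() > 10:  # require enough samples per bin
                r[i, j] = np.corrcoef(ne[sel], te[sel])[0, 1]
    return r  # map of Pearson r versus (QD latitude, MLT)
```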
The ionospheric irregularities, which are plasma density variations occurring on scale sizes ranging between a few meters and hundreds of kilometers, are one of the natural factors that affect electromagnetic signals propagating through the ionosphere. For this reason, they can contribute to the malfunctioning of Global Navigation Satellite Systems (GNSS) hindering their accuracy and reliability.
In the past, many studies related to plasma density irregularities were carried out using data recorded by ground-based instruments or instruments installed on board rockets and satellites. Recently, interesting results have been obtained by analyzing measurements from the European Space Agency Swarm constellation. Measurements of magnetic field and electron density, along the orbits of Swarm satellites, have been used to address the scaling properties of their fluctuations and to unveil some interesting features of ionospheric dynamics. These studies have demonstrated the existence of a class of plasma density irregularities characterized by both fluctuations and an energy spectrum supporting the role of turbulent processes at their origin. In addition, these studies also showed that this class is always associated with very high values of the Rate Of change of electron Density Index (RODI), which is a proxy of the fluctuations intensity characterizing the ionospheric medium. This implies that, among all the possible ionospheric irregularities, those due to turbulent processes seem to be always accompanied by plasma density variations stronger than those generated by other mechanisms.
Here, we use data recorded on board one of the three satellites of the Swarm constellation (namely, Swarm A) from 1st April 2014 to 31st March 2018 to assess the possible dependence of Global Positioning System (GPS) signal loss of lock on the presence of this specific kind of ionospheric irregularity, and thereby to shed some light on the origin of one of the largest Space Weather effects on GNSS. Using measurements recorded by the Swarm A Langmuir probes and GPS Precise Orbit Determination antennas, we study the scaling features of the electron density fluctuations through structure function analysis, simultaneously with the occurrence of loss of lock events. We find that plasma density irregularities characterized by turbulent features and extremely high values of RODI can lead to GPS loss of lock events. This result is extremely significant because it could pave the way for a possible prediction of such events, with a consequent mitigation of their adverse effects.
Ionospheric plasma dynamics at high latitudes can play a key role in understanding ionosphere-magnetosphere coupling processes. Whereas the statistical patterns of the main ionospheric current systems and magnetic field-aligned currents have been widely studied, other current systems have not yet been established in detail. This is the case for the pressure-gradient current, which can develop in the F region of the ionosphere. This current, arising from plasma pressure variations, is among the weaker ionospheric current systems. Indeed, due to the coupling between the geomagnetic field and the plasma pressure gradient, electrons and ions drift in opposite directions, perpendicular to the ambient magnetic field and the pressure gradient, generating an electric current whose intensity is of the order of a few nA/m2. This current is also called diamagnetic, because it produces a magnetic field oriented oppositely to the ambient magnetic field, causing its reduction inside the plasma. The magnetic reduction can be revealed in measurements made by low-Earth orbiting satellites when they pass through ionospheric plasma regions where rapid changes in density occur. However, identifying the diamagnetic current by its magnetic signature is not easy, due to the weak intensity of the generated magnetic perturbation, which is about 10,000 times smaller than the ambient geomagnetic field. That is why studies investigating this current are relatively recent, dating from when high-accuracy satellite magnetic field measurements became available. Due to its origin, it can be revealed at both low and high latitudes, and more generally in all those regions where the plasma pressure gradients are greatest. In the recent past, most studies have focused on low latitudes, in the equatorial belt, where this phenomenon has been extensively studied. Conversely, only a few papers have focused on high latitudes, where these currents, although weak, may pose additional challenges since they seem to appear preferentially at the same geographic locations.
Here, using magnetic field, plasma density and electron temperature measurements recorded on board the ESA Swarm constellation from April 2014 to March 2018, we reconstruct the flow pattern of the pressure-gradient current in the high-latitude ionosphere in both hemispheres, and investigate its dependence on geomagnetic activity and on seasonal and solar forcing drivers. The obtained results can be used to correct magnetic field measurements for the diamagnetic current effect and to improve modern magnetic field models, as well as to understand the impact of ionospheric irregularities on ionospheric dynamics at small scale sizes of a few tens of kilometers.
Joule heating in the thermosphere occurs when electric fields transformed into the local reference frame of the neutral gas are non-zero. This is also the condition for having electric currents according to the well-known Ohm's law for the ionosphere. A prominent cause of such current-driving electric fields is magnetosphere-ionosphere coupling at high latitudes. The atmospheric dynamo is also known to drive currents. For example, at mid-latitudes the Sq currents dominate in geomagnetically quiet periods. Sq is driven by tidal winds, and so mechanical energy is converted to electricity and ultimately to heat because the ionosphere is a dissipative medium. Gravity (buoyancy) waves also involve neutral motions and can constitute a dynamo. The electric currents arising from the dynamo in turn affect the neutral dynamics via Lorentz (j×B) forcing or, equivalently, ion drag.
To obtain a general description of the coupling between neutrals and the ionospheric plasma, we present an atmospheric dynamo equation in which the well-known Pedersen and Hall conductivities appear. The derivation is based on a paper by Parker (1996). A dynamo effect occurs when ∇×(u×B)≠0, where u is the neutral wind and B the magnetic field. Because the conductivity parallel to B is orders of magnitude higher than the Pedersen and Hall conductivities, the condition is approximately that if u×B is not constant along magnetic field lines, then dynamo electric fields drive currents. Since gravity waves are a result of non-electrodynamic forces, their u×B generally varies along magnetic field lines, and dynamo effects are produced when they propagate into the dynamo regions of the lower thermosphere.
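For reference, a standard form of the ionospheric Ohm's law invoked above, written with the electric field evaluated in the neutral-gas frame (notation assumed: b̂ the unit vector along B, ⊥/∥ relative to B), is:

```latex
\mathbf{J} \;=\; \sigma_P\,\mathbf{E}'_{\perp}
\;+\; \sigma_H\,\hat{\mathbf{b}}\times\mathbf{E}'_{\perp}
\;+\; \sigma_{\parallel}\,\mathbf{E}'_{\parallel},
\qquad
\mathbf{E}' \;=\; \mathbf{E} + \mathbf{u}\times\mathbf{B}.
```

The Hall term does no work, so the Joule heating rate is q_J = J·E' = σ_P|E'_⊥|² + σ_∥|E'_∥|², which reduces to σ_P|E'_⊥|² when the high parallel conductivity keeps E'_∥ negligible.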
We estimate that the tidal Sq dynamo globally dissipates roughly a power of 2 GW quasi-permanently. The electrodynamic dissipation by medium- and small-scale gravity waves propagating from the mesosphere into the lower thermosphere could also be a significant source of heat. Unlike the current systems coupling the magnetosphere and ionosphere, which are observed by satellites like Swarm, the currents of a gravity wave dynamo are confined to the lower thermosphere and can only be observed with a very low orbiting satellite or sounding rockets.
Small-scale ionospheric structures are known to cause rapid fluctuations of the phase and amplitude of trans-ionospheric radio signals. For example, they can significantly degrade the performance of Global Navigation Satellite System (GNSS) services, and under severe ionospheric conditions these services can be totally unavailable. It is therefore of practical importance to forecast the severity of ionospheric irregularities. In the project Forecasting Space Weather in the Arctic Region (FORSWAR), we develop a new advanced space weather forecasting model for satellite-based Positioning, Navigation and Timing (PNT) users in the Arctic, with a focus on the Greenland area. The new model is based on an optical flow image processing technique (Monte-Moreno et al., 2021), and it is able to predict space weather conditions in terms of the rate of change of total electron content index (ROTI) at horizons of 15 minutes to 6 hours. The outputs of the model are validated through various GNSS positioning models (e.g., the Single Point Positioning technique, Precise Point Positioning, and Real Time Kinematics) as well as the instantaneous ionospheric perturbation indices (the Gradient Ionosphere index and the Sudden Ionospheric Disturbance index). In addition, the results are also cross-compared with the in-situ observations from the Swarm satellites. The validation results suggest good performance of the model in predicting polar ionospheric irregularities. By incorporating real-time GNSS data, this model is suitable for the implementation of real-time space weather applications in the polar region, and it can contribute to increased resilience to adverse space weather effects for PNT users.
Reference:
Monte-Moreno, E., Hernandez-Pajares, M., Yang, H., Rigo, A. G., Jin, Y., Høeg, P., Miloch, W. J., Wielgosz, P., Jarmołowski, W., Paziewski, J., Milanowska, B., Hoque, M., & Orus-Perez, R. (2021). Method for Forecasting Ionospheric Electron Content Fluctuations Based on the Optical Flow Algorithm. IEEE Transactions on Geoscience and Remote Sensing. https://doi.org/10.1109/TGRS.2021.3126888
The ionosphere is a highly dynamical system that shows a complex behaviour due to its nonlinear coupling with the solar wind-magnetosphere system from above and with the lower atmosphere from below. Such complexity of the ionospheric plasma manifests itself over a widely varying range of spatial and temporal scales. We investigate how the different scales of the in-situ electron density recorded at the altitudes of the Swarm constellation behave under various conditions of the geospace, with the goal of determining whether the topside ionosphere reacts to an external perturbation as a whole or by activating particular modes.
In this regard, the present study aims at quantifying the spatio-temporal variability in the topside ionosphere by leveraging the Fast Iterative Filtering (FIF) technique. FIF can provide a very fine time-frequency representation, as it decomposes any nonstationary, nonlinear signal, like those provided by the Langmuir probes onboard Swarm, into oscillating modes, called intrinsic mode components or functions (IMCs or IMFs), each characterized by its specific frequency.
The instantaneous time-frequency representation is provided through the so-called “IMFogram”, which illustrates the time development of the multi-scale processes. These IMFograms, similarly to spectrograms, have the potential to show in greater detail the scale sizes which intensify during the various phases of geomagnetic storms, as reported for the 2015 St. Patrick’s Day storm. A further scope of the study is to illustrate how the analysis based on FIF and IMFograms provides better performance, through improved scale resolution, than similar studies conducted via the Fourier and discrete wavelet transforms.
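A rough sketch of how a time-frequency map can be assembled from precomputed IMCs is given below (illustrative only: here instantaneous amplitude and frequency come from the Hilbert transform, whereas the published IMFogram construction uses its own, Hilbert-free estimates; array shapes and bin counts are assumptions):

```python
import numpy as np
from scipy.signal import hilbert

def time_frequency_map(imfs, fs, n_freq_bins=64):
    """Accumulate mode amplitudes into a (frequency, time) map.

    imfs: array (n_modes, n_samples) from a FIF/EMD-type decomposition.
    fs:   sampling frequency in Hz (e.g. 2 Hz for Swarm Langmuir probes).
    """
    n_modes, n = imfs.shape
    analytic = hilbert(imfs, axis=1)
    amp = np.abs(analytic)                                   # envelope
    phase = np.unwrap(np.angle(analytic), axis=1)
    freq = np.abs(np.diff(phase, axis=1)) * fs / (2 * np.pi)  # Hz, len n-1
    f_edges = np.linspace(0.0, fs / 2.0, n_freq_bins + 1)
    tf = np.zeros((n_freq_bins, n - 1))
    cols = np.arange(n - 1)
    for k in range(n_modes):
        idx = np.clip(np.digitize(freq[k], f_edges) - 1, 0, n_freq_bins - 1)
        tf[idx, cols] += amp[k, 1:]   # add each mode's amplitude to its bin
    return tf, f_edges
```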
With this work, we also aim to support the development of advanced models of ionospheric plasma variability based on Swarm datasets.
This work is performed in the framework of the Swarm Variability of Ionospheric Plasma (Swarm-VIP) project, funded by ESA in the “Swarm+4D-Ionosphere” framework (ESA Contract No. 4000130562/20/I-DT).
The ionosphere is a dynamical system exhibiting nonlinear couplings with the other “spheres” of the geospace environment. This nonlinearity also manifests through the non-trivial, scale-dependent time delays in the cause-effect chain characterizing the Solar Wind-Magnetosphere-Ionosphere coupling.
The present study uses the Intrinsic Mode Cross Correlation (IMXC), a novel scale-wise signal lag measurement. The method’s performance is evaluated first on known artificial signals and then applied to ionospheric data, including in-situ electron density from the Swarm constellation. The IMXC relies on the nonlinear, nonstationary signal decomposition provided by the novel Multivariate Fast Iterative Filtering (MvFIF) technique, which identifies the common scales embedded in the signals. The lags are then obtained scale-wise, enabling the identification of the lag dependence on the spatio-temporal scales involved for the artificial dataset (even in the presence of high levels of noise) and their estimation in real-life signals. The lags obtained can separate the scales on which coupling inherently occurs, according to physical reasoning, from scales related only to internal fluctuations. This can pave the way to future uses of the technique in contexts in which the causation chain is hidden in a complex, multiscale coupling of the investigated features.
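A minimal sketch of the scale-wise lag idea follows: given two signals already decomposed into matched scale components (here by any matched decomposition standing in for MvFIF, whose implementation is not reproduced), the lag at each scale is taken from the cross-correlation maximum. Function names and the 2 Hz sampling assumption are illustrative.

```python
import numpy as np

def scale_wise_lags(imcs_a, imcs_b, dt=0.5):
    """Estimate a time lag per scale from matched mode pairs.

    imcs_a, imcs_b : lists of 1-D arrays, the k-th entries holding the
                     same scale extracted from signals A and B
                     (e.g. by MvFIF; any matched decomposition works here)
    dt             : sampling interval [s] (0.5 s assumed)
    """
    lags = []
    for a, b in zip(imcs_a, imcs_b):
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        xcorr = np.correlate(a, b, mode="full")
        # signed shift in samples, following np.correlate's convention
        shift = np.argmax(xcorr) - (len(b) - 1)
        lags.append(shift * dt)
    return lags
```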
As the first real-life scenario assuming a cause-effect relationship, we use the closely separated measurements of the European Space Agency’s Swarm Alpha (A) and Charlie (C) satellites, whose identical Langmuir probe instruments sample the ionospheric plasma density in the topside ionosphere with a latitudinal orbital separation corresponding to a lag of about 8.8 s between the two satellites. Examples of additional applications to ionospheric science are also reported to demonstrate the usability of the technique in the Space Weather context.
This work is performed within the Swarm Variability of Ionospheric Plasma (Swarm-VIP) project, funded by ESA in the “Swarm+4D-Ionosphere” framework (ESA Contract No. 4000130562/20/I-DT).
As an enhancement of the repertoire of operational services for the ESA Space Safety Programme, we are developing a novel forecasting model called SODA (Satellite Orbit DecAy) for the Expert Service Centre of Ionospheric Weather. The service development is carried out in a joint project between the University of Graz and the Graz University of Technology and deals with the prediction of thermospheric variations and their subsequent effects on low Earth orbiting satellites (LEOs).
Geomagnetic storms occur rather consistently in accordance with the 11-year solar cycle and are capable of triggering atmospheric disturbances that subsequently influence the trajectories of Earth-orbiting satellites. The strongest disturbances of the space environment are primarily caused by coronal mass ejections (CMEs). To obtain information about the magnitude of the response of the Earth’s upper atmosphere to such solar events, we calculate thermospheric densities based on scientific data such as kinematic orbit information and accelerometer measurements. Depending on the degree of the density variation during a CME, it is possible to estimate the resulting satellite orbit decay. The key element of SODA is a forecasting model that predicts the expected impact of solar events on satellite missions such as Swarm or GRACE-FO. Even though these missions orbit at the upper boundary of the Earth’s thermosphere, severe CMEs may trigger orbit decays on the order of several tens of metres. For LEO satellites at lower altitudes, the effect of a single event may even exceed the 100 m level. The forecasting tool is based on a joint analysis and evaluation of solar wind plasma and magnetic field measurements at L1 from the ACE and DSCOVR satellites as well as thermospheric neutral mass densities. By taking into consideration the varying propagation speeds of CMEs and the response time of the thermosphere, the lead time for the start of the atmospheric perturbation will be up to several hours. In this contribution we present the latest scientific developments within SODA and show the current status of the online presentation of the envisaged forecasting tool.
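For orientation on the magnitudes involved, the decay of a near-circular orbit under drag is often approximated per revolution by the textbook relation Δa ≈ -2π C_D (A/m) ρ a². The sketch below applies this relation; the drag coefficient, area-to-mass ratio and storm-time density are placeholder assumptions, not SODA values.

```python
import math

R_E = 6371e3   # mean Earth radius [m]

def decay_per_rev(alt_km, rho, cd=2.2, area_to_mass=0.004):
    """Textbook per-revolution decay of a circular orbit under drag.

    alt_km       : orbit altitude [km]
    rho          : thermospheric density [kg/m^3] (storm-time value)
    cd           : drag coefficient (2.2 is a common assumption)
    area_to_mass : cross-section / mass [m^2/kg] (placeholder)
    """
    a = R_E + alt_km * 1e3
    return -2.0 * math.pi * cd * area_to_mass * rho * a**2   # [m per revolution]

# Illustrative storm-time example at ~450 km (density value is assumed)
da = decay_per_rev(450.0, rho=1e-12)
print(f"decay per revolution: {da:.2f} m")   # tens of metres over a storm day
```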
Auroral particle precipitation potentially plays a main role in ionospheric plasma structuring. The impact of particle precipitation on plasma structuring is investigated using multi-point measurements from scintillation receivers and all-sky imagers on Svalbard. This provides the unique possibility of studying auroral dynamics in both their spatial and temporal evolution.
We consider three substorm events to investigate how auroral forms impact trans-ionospheric radio waves. Elevated phase scintillation indices are observed to correspond best to the spatial and temporal evolution of auroral forms when both are projected to the estimated altitude of the green emissions (150 km). This suggests that plasma structuring in the ionospheric E-region is an important driver of phase scintillations.
We demonstrate that the plasma structuring impacting GNSS signals is largest at the edges of auroral forms. Studying an arc in detail, only the poleward edges are associated with elevated phase scintillation indices, whereas for an auroral spiral and band the structuring is attributed to all boundaries. A time delay (1-2 min) is observed between the temporal evolution of the aurora (e.g., commencement and fading of auroral activity) and elevated phase scintillation index measurements. This may be due to the intense influx of particles, which increases the plasma density and causes recombination to carry on longer, possibly leading to a memory effect. The irregularities and instabilities causing the elevated phase scintillation indices, especially in the E-region, may be due to, e.g., field-aligned currents, the Kelvin-Helmholtz instability or the Farley-Buneman instability. The auroral fine structure and forms may be controlled by kinetic instabilities, such as Alfvén waves and acoustic waves. The nature of the effects is studied using the ionospheric-free linear combination to understand whether the effect is refractive or diffractive. This study can contribute to the development of models of ionospheric plasma irregularities and related space weather effects in the polar regions.
The ESA’s Swarm satellites (launched in November 2013) are equipped with accelerometers and Langmuir probes, which provide the opportunity to observe thermosphere and ionosphere disturbances simultaneously. This unique feature is explored here through a novel ensemble Kalman filter (EnKF)-based calibration and data assimilation (C/DA) technique to tune empirical or physics-based models and improve their now-casting and forecasting skills. The advantage of C/DA is that not only updates models states, but also it calibrates its key parameters, where the latter can be applied to estimate the global and multi-level thermospheric and ionospheric variables. Therefore, the spatial coverage of these estimates is not limited to the satellites’ ground-track coverage. In this study, the C/DA technique is applied on the NRLMSISE-00, which is an empirical model of thermosphere, using the thermospheric neutral density (TND) estimates derived from Swarm satellites and the re-calibrated model is called C/DA-NRLMSISE-00. Then, to find the coupling (or ion-neutral interactions) between thermosphere and ionosphere system, the coupled physics-based model of TIE-GCM is run by replacing the thermospheric constituents such as O2, O1, He and neutral temperature from C/DA-NRLMSISE-00 in the primary history files of TIE-GCM. Then, Swarm-derived Electron densities are used as assimilated observations into TIE-GCM to make the use of directly observed ionosphere variables. In order to find the impact of purposed method on forecasting thermosphere-ionosphere variables, it is essential to validate whether the TIE-GCM after data assimilation, is named here as ‘TIE-GCM-DA’, can improve the thermosphere-ionosphere parameters that were not used in the C/DA of NRLMSISE-00 and DA of TIE-GCM procedures. Thus, here, the TND estimates from TIE-GCM-DA are comapred against GRACE and GRACE-FO measurements, and the estimates of electron density and total electron content are evaluated against independent radio occultation and GNSS measurements. The numerical results indicate that indeed the C/DA is effective for short-term global forecasting and can be explored in operational studies.
The polar ionosphere is littered with plasma density structures on scales from hundreds of kilometres down to several metres. It is believed that this structuring is primarily driven by energy input from the magnetosphere as a result of the large-scale magnetosphere/solar wind coupling. The study of small-scale (sub-kilometre) plasma density structures in the ionosphere is important because they can severely impact the quality of trans-ionospheric radio waves such as those used in global navigation satellite systems (GNSS). Here we present results from the multi-needle Langmuir probe (m-NLP) system. Typically, Langmuir probes operate by sweeping through a range of bias voltages in order to derive the plasma density, a process that takes time and hence limits the temporal resolution to a few Hz. The m-NLP, by contrast, operates with fixed bias voltages, such that the plasma density can be sampled at several kHz, providing a spatial resolution finer than the ion gyroradius at orbital speeds. In particular, we present results on sub-kilometre plasma density structuring in the polar cusp region and its relation to GNSS signal scintillations. We study the connection through case studies, present statistics, and employ models based on the idea of the ionosphere as a phase screen. We show that the in-situ plasma density measurements can be related to scintillation measurements on the ground.
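To illustrate why fixed-bias needles remove the need for a voltage sweep: in the cylindrical-probe orbital-motion-limited (OML) regime, the square of the collected electron current grows linearly with bias voltage, so two or more needles at different fixed potentials give the slope dI²/dV, and hence the density, at every sample. The sketch below uses one commonly quoted form of this relation; the probe geometry and input arrays are illustrative assumptions.

```python
import numpy as np

E = 1.602e-19    # elementary charge [C]
ME = 9.109e-31   # electron mass [kg]

def mnlp_density(currents, biases, probe_area):
    """Electron density from fixed-bias needle probes (OML regime).

    currents   : (n_probes, n_samples) electron currents [A]
    biases     : (n_probes,) fixed bias voltages [V]
    probe_area : needle collecting area [m^2] (geometry assumption)

    Uses n_e = (pi / A) * sqrt(m_e * d(I^2)/dV / (2 e^3)),
    one common OML-based form for cylindrical probes.
    """
    # Linear fit of I^2 against V at every sample -> slope dI^2/dV
    slopes = np.polyfit(biases, currents**2, 1)[0]
    return (np.pi / probe_area) * np.sqrt(ME * slopes / (2.0 * E**3))
```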
The Swarm satellite mission is actively used for various studies of the ionosphere, focusing on aspects such as electric and magnetic fields and plasma temperature, structuring and irregularities. We use a global product based on Swarm satellite measurements that characterizes ionospheric irregularities and fluctuations. IPIR (the Ionospheric Plasma IRregularities product) provides characteristics of plasma density structures in the ionosphere and of plasma irregularities in terms of their amplitudes, gradients and spatial scales, and assigns them to geomagnetic regions. Ionospheric irregularities and fluctuations often increase the errors in position, velocity and time determination based on Global Navigation Satellite Systems (GNSS), whose signals pass through the ionosphere. IPIR therefore also provides an indication, in the form of a numerical index, of their severity for the integrity of trans-ionospheric radio signals and hence for the accuracy of GNSS precise positioning.
In this study, we compare two datasets: one from the Swarm satellites (with 1-second resolution) and one from ground-based scintillation receivers (with 1-minute resolution). First, we need to find the time intervals when the Swarm satellites pass over the field of view of a ground-based GPS receiver; to calculate these passes, a geometry with an elevation angle of 30° above the receiver was used. Second, to compare the characteristics of electron density fluctuations from Swarm with ground-based scintillation data, we performed an azimuthal selection of the GNSS data according to the Swarm satellite pass: only those GNSS satellites are taken into account that are near the position of the Swarm satellite (azimuth ±10°). We provide validations of the IPIR product against the ground-based measurements, focusing on GPS TEC and scintillation data in low- and high-latitude regions in different longitudinal sectors. We calculate the median, mean, maximum and standard deviation of the parameter values of both datasets for each conjunction point. We observe a weak trend of stronger scintillations with an increasing IPIR index, where the IPIR index represents a product of amplitudes and temporal variations in plasma densities.
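A minimal sketch of the geometric selection step follows: given receiver and satellite positions in ECEF coordinates, it computes elevation and azimuth in the receiver's local frame and keeps epochs above the 30° elevation cone and within the ±10° azimuth band. The coordinate inputs and helper names are assumptions.

```python
import numpy as np

def elev_azim(rx_ecef, sat_ecef, lat, lon):
    """Elevation and azimuth [deg] of a satellite in the receiver's local
    east-north-up (ENU) frame; receiver lat/lon in radians."""
    up = np.array([np.cos(lat) * np.cos(lon),
                   np.cos(lat) * np.sin(lon),
                   np.sin(lat)])
    east = np.array([-np.sin(lon), np.cos(lon), 0.0])
    north = np.cross(up, east)
    los = sat_ecef - rx_ecef
    los = los / np.linalg.norm(los)
    e, n, u = los @ east, los @ north, los @ up
    return np.degrees(np.arcsin(u)), np.degrees(np.arctan2(e, n)) % 360.0

def in_conjunction(el_swarm, az_swarm, az_gnss, el_min=30.0, daz=10.0):
    """Keep epochs with Swarm above 30 deg elevation and GNSS raypaths
    within +/-10 deg of the Swarm azimuth (criteria from the study)."""
    daz_wrapped = (az_gnss - az_swarm + 180.0) % 360.0 - 180.0
    return (el_swarm >= el_min) & (np.abs(daz_wrapped) <= daz)
```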
In this presentation, European MUF(3000) nowcasting and forecasting products in the high frequency communications (HF COM) domain developed by INGV are presented. These take the form of maps over Europe of MUF(3000) and of its ratio with respect to a proper background. The maps have different extents and spatial resolutions and are designed to immediately detect regions of post-storm depression for Space Weather (SW) applications.
The nowcasting products are based on real-time maps updated every quarter of an hour and covering the European sector with extent 12°W-45°E, 32°N-72°N. The mapping procedure makes use of the available real-time ionosonde measurements at different locations, and of the ordinary kriging technique for spatial interpolation, to upgrade IRI-CCIR-based background maps on a regular grid with fine spatial resolution. The forecasting product consists of real-time maps updated every hour and covering a geographic area extending 20°N-80°N, 40°W-100°E. The mapping procedure makes use of both historical and real-time available hourly foF2 observations, forecasted 3-hour ap indices from NOAA (National Oceanic and Atmospheric Administration, USA) as driving input parameter, and effective monthly ionospheric T indices from BOM (Bureau of Meteorology, Australia) to specify the background level of the available real-time ionosonde measurements at different locations. Local prediction models have been created for each European ionospheric station, and the results are extended over the whole geographic area by applying a multiquadric technique.
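As an illustration of the interpolation step, the snippet below shows ordinary kriging of sparse station deviations onto a regular grid with the pykrige package. The station coordinates, deviation values and variogram model are placeholder assumptions, and the actual INGV processing chain may differ.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical station deviations from the background MUF(3000) map
lon = np.array([-8.0, 2.2, 12.5, 25.0, 40.1])
lat = np.array([40.6, 48.8, 41.8, 60.2, 56.9])
dev = np.array([-0.12, -0.05, 0.02, -0.20, -0.08])   # relative deviation

# Grid matching the nowcast product extent (12W-45E, 32N-72N)
grid_lon = np.arange(-12.0, 45.01, 0.5)
grid_lat = np.arange(32.0, 72.01, 0.5)

ok = OrdinaryKriging(lon, lat, dev, variogram_model="spherical")
dev_grid, variance = ok.execute("grid", grid_lon, grid_lat)
# background_map * (1 + dev_grid) would then give the upgraded nowcast map
```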
Several tests were conducted comparing model predictions with actual observations to evaluate the performance of the methods during Space Weather events relevant to users. The results obtained are summarized and briefly discussed here.
The HF products are part of the INGV contribution to the SWESNET (Space Weather Service Network) project initiated by ESA, and have been provided operationally to ICAO since November 2019 in the frame of the PECASUS consortium activities for the mitigation of SW effects on civil aviation.
Space geodetic techniques operating in the radio frequency range, such as Very Long Baseline Interferometry (VLBI) and Global Navigation Satellite Systems (GNSS), are sensitive to the Total Electron Content (TEC) of the ionosphere, and their precision depends on the quality of the estimated ionospheric values. For accurate positioning, GNSS requires good prediction of the TEC values; inaccurate estimation of the TEC in VLBI likewise degrades the accuracy of the estimated geodetic parameters, such as ground-based antenna coordinates and Earth Orientation Parameters. A number of global TEC models based on GNSS observations are designed to describe the global conditions of the ionosphere in terms of TEC. We have conducted a comparative study of two selected global TEC maps against results from observations of the VLBI Global Observing System (VGOS). The VGOS network has been established recently and is continuously growing. The differential TEC (dTEC) estimated from VGOS data has high precision, with a formal error of about 0.01-0.2 TECU. It can be used to evaluate the global TEC maps, as well as serving as an additional data source for the further improvement of TEC map models.
The precision of the dTEC estimated with VGOS is considerably improved compared to traditional geodetic dual-band VLBI observations. We have compared the VGOS ionosphere product with dTEC calculated using global ionosphere TEC maps. For the analysis, we selected two global TEC models, the CODE GIM and the Neustrelitz TEC Model Global (NTCM-GL). The comparison was performed for the VGOS observations made in 2019-2020. We found good agreement between VGOS dTEC and the dTEC obtained from the global TEC maps; however, an offset between the two datasets was detected. The comparison also reveals weaknesses of the global TEC models in some locations, such as remote islands, where the number and distribution of ground-based GNSS antennas are limited. The VGOS data can thus be considered an additional information source and used for the further improvement of global TEC models.
Ground-based indices, such as Dst, ap and AE, have been used for decades to describe the interplay of the terrestrial magnetosphere with the solar wind and to provide quantifiable indications of the general state of geomagnetic activity. These indices have traditionally been derived from ground-based observations from magnetometer stations all around the Earth. In the last 7 years, though, the highly successful Swarm satellite mission has provided the scientific community with an abundance of high-quality magnetic measurements at Low Earth Orbit (LEO), which can be used to produce space-based counterparts of these indices, such as the Swarm-Dst, Swarm-ap and Swarm-AE indices. In this work, we present the first results from this endeavour, with comparisons against traditionally used parameters, and discuss the possible usefulness of these Swarm-based products for more accurate monitoring of magnetospheric dynamics and thus a better diagnosis of space weather conditions.
Lightning whistler trains consisting of more than twenty individual lightning whistlers were recorded at the Kannuslehto ground station in Finland (67.74N, 26.27E; L = 5.5) on 7 January 2017 from 7:35 to 8:35 UT. Shorter lightning whistler trains appeared from 5:44 to 6:27 UT. Using the World Wide Lightning Location Network (WWLLN) data, we have identified the causative lightning strokes for the observed whistler trains and found them to occur during a winter thunderstorm accompanying the arrival of cyclone Axel at the Norwegian coast. The corresponding very low frequency (VLF) sferics were recorded at the Kannuslehto station in Finland and also at the LSBB (Laboratoire Souterrain à Bas Bruit) receiving station in Southern France.
Lightning whistlers were trapped in field-aligned density ducts, and each whistler bounced for 2-4 minutes during the interval from 7:35 to 8:35 UT, when the energy of the causative lightning strokes was on average 168 kJ. The whistler trains observed from 5:44 to 6:37 UT were shorter, lasting for 30-90 s, and were triggered by weaker strokes with an average energy of 39 kJ. We use the whistler inversion method to obtain plasmaspheric electron densities and McIlwain’s L parameter from the measured whistler data. We have found that the duct was composed of many paths spread from L=3.4 to 4.4, corresponding to a latitudinal range of 60°-65°N. Strong lightning strokes occurred between 62.6°N and 63.4°N, well within the latitudinal range of the duct. We conclude that observations of such long whistler echo trains are only possible when a long-lasting duct is formed and, at the same time, a thunderstorm below the ionospheric end of the duct produces very energetic lightning. These strokes then deliver enough energy to the magnetosphere to keep the whistlers bouncing in the duct for a long time.
ESA's SMOS mission was originally conceived to use L-band interferometry to map the salinity of the oceans and the moisture of the soil. However, not only the Earth but also the Sun appears in the wide field of view of its 69 receivers operating at 1.4 GHz, making it one of the main sources of noise in the image. Here we show how, with the proper data processing, it is possible to use the solar noise affecting SMOS observations to monitor the Sun for geoeffective coronal mass ejections and for solar radio bursts that could affect systems based on L-band radio signals.
We have found that SMOS detects different types of solar signals, including the progress of the 11-year activity cycle, the thermal emission from solar flares, and solar radio bursts. Furthermore, we note that SMOS detects radio bursts only during flares associated with CMEs, and that the size of the 1.4 GHz radio bursts correlates well with the speed, angular width and kinetic energy of these CMEs. Together with the low-resolution solar images that SMOS is able to compute, this makes it possible to perform an early assessment of both the importance and the direction of the associated CMEs.
Moreover, systems based on radio frequencies are known to be affected by the kind of solar radio bursts that SMOS can detect. Yet despite the importance of nowcasting these radio bursts as a source of radio interference, near-real-time observations are still not easily available. The situation is not much better for post-event analyses, as solar radio observations usually do not include polarization. SMOS can be of use here as well, as it has been operating with full polarization since 2010 and provides data in near-real time. It can therefore monitor interference affecting navigation satellites (GPS, Galileo, GLONASS...), L-band air traffic control radars and radio communications.
The data in this study come from the SWADO (Space Weather for Arctic Defence Operations) network, consisting of seven GISTM (GNSS Ionospheric Scintillation and TEC Monitor) stations, which can utilize Galileo, GPS, GLONASS and BeiDou. The stations are distributed along the coast of Greenland in Thule, Upernavik, Kangerlussuaq, Qaqortoq, Kulusuk, Scoresbysund and Station Nord. This creates a chain of receivers along the west coast of Greenland that follows one geomagnetic longitude. The stations on the east coast are placed to increase the data coverage above Greenland and to lie on geomagnetic latitudes corresponding to stations on the west coast. Thanks to this design, the SWADO network can be used to investigate the evolution of ionospheric GNSS scintillation events in time and space.
The primary type of scintillation in the Arctic is phase scintillation, and the σ_ϕ index is therefore used in this study. However, this index is based on GNSS raw data with a sampling frequency of 50-100 Hz. To increase the spatial data coverage, the ROTI (Rate of TEC Index) was also considered, since ROTI can be based on GNSS data with a 1 Hz sampling frequency. This gives the possibility to include selected geodetic GNSS receivers from GNET. ROTI indices based on 1 Hz data cannot capture the same small-scale variations as the σ_ϕ index based on 50-100 Hz data, but they provide additional information for spatial and temporal interpolation.
This study can provide key information for the mapping and short-term prediction of ionospheric GNSS scintillation events. This is crucial for users of GNSS positioning and navigation in the Arctic, where scintillation poses a significant threat since it can degrade the signal considerably, even to a degree where GNSS positioning is not possible. In the Arctic, the satellite geometry already poses a challenge due to the high latitudes, which makes GNSS users more vulnerable to a loss of satellite signals on account of scintillation.
The SWADO network was established in the fall of 2021, and the study is therefore representative of a period of increasing solar activity as we move towards solar maximum. Mapping and short-term prediction of GNSS disturbances are becoming more relevant, and providing integrity information for Arctic GNSS users will become essential in the coming years.
Heliophysics, the science of understanding the Sun and its interaction with the Earth and the solar system, has a large and active international community, with significant expertise and heritage in the European Space Agency and Europe. Several ESA directorates have activities directly connected with this topic, including ongoing and/or planned missions and instrumentation, together comprising an ESA Heliophysics observatory: the Directorate of Science with Cluster, Solar Orbiter, SMILE and the Heliophysics archive; the Directorate of Earth Observation with Swarm and other Earth Explorer missions (including the EE10 candidate Daedalus); the Directorate of Operations with the L5 mission, the Distributed Space Weather Sensor System (D3S) and the Space Weather Service Network; the Directorate of Human and Robotic Exploration with many ISS and LOP-Gateway payloads; and the Directorate of Technology, Engineering & Quality with expertise in developing instrumentation and models for measuring and simulating environments throughout the heliosphere. The ESA Heliophysics Working Group was formed to optimize interactions and to act as a focus for discussion inside ESA of the scientific interests of the Heliophysics community, including the European ground-based community and data archiving activities.
This paper will provide a brief introduction to the newly formed ESA Heliophysics Working Group and describe some of its planned activities (including work on the LOP-Gateway), highlighting the benefits by using the continuing successful collaboration between Swarm and Cluster as a leitmotif.
The ionosphere is a highly complex plasma containing electron density structures with a wide range of spatial scale sizes. Large-scale structures with horizontal extents of tens to hundreds of km exhibit variation with time of day, season, solar cycle, geomagnetic activity, solar wind conditions, and location. Whilst the processes driving these structures are well understood, the relative importance of these driving processes is a fundamental, unanswered question. These large-scale structures can also cause smaller-scale irregularities that arise due to instability processes and which can disrupt trans-ionospheric radio signals, including those used by Global Navigation Satellite Systems (GNSS). Ionospheric effects pose a substantial threat to the integrity, availability and accuracy of GNSS services. Strategies to predict the occurrence of plasma structures are therefore urgently needed.
Swarm is ESA's first constellation mission for Earth Observation (EO). It initially consisted of three identical satellites (Swarm A, Swarm B and Swarm C), which were launched into Low Earth Orbit (LEO) in 2013. The configuration of the Swarm satellites, their near-polar orbits and the data products developed enable studies of the spatial variability of the ionosphere at multiple scale sizes. The technique of Generalised Linear Modelling is used to identify the dominant driving processes of large-scale structures in the ionosphere at low, middle, auroral and polar latitudes. The statistical relationships between the ionospheric structures and the driving processes are determined in each region, and the variations between regions are discussed, with a particular focus on the European sector.
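For readers unfamiliar with the approach, a generalised linear model relates a response (here, some electron density structure metric) to candidate drivers through a link function, and the fitted coefficients indicate which drivers dominate. The sketch below fits such a model with statsmodels; the driver names, the Gamma/log-link choice and the synthetic data are assumptions for illustration, not the Swarm-VIP configuration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical training table: one row per Swarm segment in a region
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "kp": rng.uniform(0, 6, 500),             # geomagnetic activity
    "solar_flux": rng.uniform(70, 180, 500),  # F10.7 proxy
    "mlt": rng.uniform(0, 24, 500),           # magnetic local time
})
# Synthetic positive response standing in for a density-structure metric
mu = np.exp(0.3 * df.kp + 0.01 * df.solar_flux)
df["ne_var"] = mu * rng.gamma(shape=2.0, scale=0.5, size=500)

X = sm.add_constant(df[["kp", "solar_flux", "mlt"]])
glm = sm.GLM(df["ne_var"], X,
             family=sm.families.Gamma(link=sm.families.links.Log()))
result = glm.fit()
print(result.summary())   # coefficients rank the candidate drivers
```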
This work is within the framework of the Swarm Variability of Ionospheric Plasma (Swarm-VIP) project, funded by ESA in the “Swarm+4D-Ionosphere” framework (ESA Contract No. 4000130562/20/I-DT).
The aurora can be used as a direct way to observe particles precipitating into the ionosphere.
The main drivers behind this particle precipitation are geomagnetic substorms, which can be divided into three phases: growth, expansion and recovery.
Energy is stored by coupling between the solar wind, interplanetary magnetic field and magnetosphere.
This energy is subsequently released in the Dungey cycle after which the magnetosphere returns to normal conditions.
Two easily observable characteristics of the aurora are its shape and latitude; in addition, a measurable disturbance of the Earth's magnetic field occurs during a substorm.
For several decades, all-sky imagers have been deployed in Scandinavia, North America and Antarctica, taking images of the night sky every few seconds.
At the moment, several million images are taken each year, and due to this large number only a fraction can be manually analysed.
Using transfer learning, we build a classifier based on a two-step process in which a pretrained neural network feature extractor transforms the images into machine-readable numerical feature vectors.
These features are later used for classification and have been shown to contain essential physical information embedded in the images.
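A minimal sketch of such a two-step feature extractor follows, using a pretrained ResNet from torchvision with its classification head removed; the backbone choice and image preprocessing are assumptions, as the abstract does not name the network used.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Pretrained backbone with the final classification layer removed
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()          # output: 512-d feature vector
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    """Map a list of PIL all-sky images to feature vectors that a
    downstream classifier or clustering step can consume."""
    batch = torch.stack([preprocess(im) for im in pil_images])
    return backbone(batch)           # shape: (n_images, 512)
```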
Classification and clustering allow us to perform a large-scale statistical analysis of the development of the aurora over several years.
Combining images with the corresponding measurements of the interplanetary magnetic field and the locally measured disturbance of the Earth's magnetic field, we are able to query for certain conditions in a dataset spanning several hundred thousand images taken in the last decade.
We are able to present a statistical analysis of how the aurora behaves depending on certain space weather conditions and, with this knowledge, open up new possibilities for the research and prediction of space weather.
The electron density controls all ionospheric effects on propagating radio signals. Ionospheric imaging is a helpful technique for radio systems applications and for understanding ionospheric electron density distributions. Total electron content (TEC) estimates from global navigation satellite systems (GNSS) have been used extensively to study the characteristics of equatorial plasma bubbles (EPBs). As they propagate across different altitudes of the ionosphere, GNSS signals allow a three-dimensional representation of the ionosphere using tomographic reconstruction techniques. Despite the progress made in recent years by, for instance, improving ionospheric tomographic techniques or applying constrained methods, the incomplete geometrical coverage and the limited viewing angle of GNSS signals remain relevant challenges in the tomographic reconstruction of ionospheric electron density irregularities. In this study, we propose a method to identify the geolocation of scintillation-inducing irregularities by producing quasi-tomographic images of the ionosphere obtained from various ionospheric GPS indices. The high-sample-rate GAP-O (GPS Attitude, Positioning, and Profiling - Occultation) receiver onboard the CASSIOPE/Swarm E satellite is mainly used for radio occultation measurements, and its antenna normally points in the horizontal direction. During several campaigns, when flying in the equatorial region during post-sunset hours, we re-oriented Swarm E for short periods to direct the GAP-O receiver antenna vertically. Using a new quasi-tomographic technique, we reconstructed maps of EPBs and equatorial ionospheric irregularities. The elliptical orbit of the satellite enables sampling at different altitudes. Our TEC maps detect the Appleton anomaly, which serves as a validation of the technique, which we then generalize to map regions of intense, small-scale irregularities. In addition, according to our horizontal reconstructed maps, in-situ irregularities detected by the IRM (Ion Mass Spectrometer) instrument onboard CASSIOPE/Swarm E were observed primarily when the satellite was passing close to the edge of large-scale plasma depletions, which were also associated with a large standard deviation of the rate of change of TEC (ROTI) extending to both sides in the zonal (east-west) direction.
In this study we investigate the variations of the hourly observations at the Ionospheric Observatory of Rome (41.82°N, 12.51°E) during the minima of activity of the last solar cycles. In particular, we consider the values of the critical frequency foF2 manually scaled from the ionograms recorded by the AIS-INGV ionosonde during the years 2007-2009 (between solar cycles 23 and 24) and 2018-2020 (between cycles 24 and 25). Each hourly deviation of foF2 greater than ±15% with respect to a background level defined by 27-day running median values is considered anomalous, defining positive and negative anomalies depending on the sign of the corresponding variation. The dependence of these strong variations on geomagnetic activity has been carefully investigated on the basis of the ap geomagnetic index values within the previous 24 hours, according to the NOAA scales (from G0 to G5), with an additional class defined for ap≤7, considered representative of actually quiet conditions. Moreover, the occurrence time of the anomalies has been investigated to discriminate those originating during daytime from those originating during nighttime hours. The top level of geomagnetic activity reached in all years was G2, except for 2018, when the G3 level was reached. A comparable number of negative and positive ionospheric foF2 anomalies was found during the two solar minima, with the total number of negative anomalies smaller than that of positive ones, as expected under low solar activity conditions. Other main findings of this work are the small number of daytime negative foF2 anomalies and the confirmation of the existence of two types of positive F2-layer disturbances, characterised by different morphologies and different underlying physical processes. A detailed analysis of some specific cases allows the definition of possible scenarios for the explanation of the mechanisms behind the generation of the foF2 anomalies.
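The anomaly definition above translates directly into a few lines of code: flag hours where foF2 deviates by more than ±15% from a 27-day running median. The sketch below is a minimal pandas rendering of that rule; computing the median separately per hour of day is our assumed reading of the usual practice, and the data layout is an assumption.

```python
import pandas as pd

def flag_fof2_anomalies(fof2: pd.Series, threshold=0.15, window_days=27):
    """Flag foF2 anomalies against a 27-day running median background.

    fof2 : hourly series with a DatetimeIndex [MHz]
    Returns +1 (positive), -1 (negative) or 0 (quiet) for each hour.
    The median is computed separately for each hour of the day so the
    background follows the diurnal cycle (assumed implementation detail).
    """
    background = (
        fof2.groupby(fof2.index.hour)
            .transform(lambda s: s.rolling(window_days, center=True,
                                           min_periods=window_days // 2)
                                  .median())
    )
    rel_dev = (fof2 - background) / background
    return rel_dev.gt(threshold).astype(int) - rel_dev.lt(-threshold).astype(int)
```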
Solar, auroral and radiation belt electrons enter the atmosphere at polar regions, leading to ionization and affecting its chemistry. Climate models with interactive chemistry in the upper atmosphere, such as WACCM-X or EDITh, usually parametrize this ionization and calculate the related changes in chemistry based on satellite particle measurements. Precise measurements of the particle and energy influx into the upper atmosphere are difficult because they vary substantially in location and time. Widely used particle data are derived from the POES and GOES satellite measurements, which provide electron and proton spectra. These satellites provide in-situ measurements of the particle populations at the satellite altitude but require interpolation and modelling to infer the actual input into the upper atmosphere.
Here we use the electron energy and flux data products from the Special Sensor Ultraviolet Spectrographic Imager (SSUSI) instruments on board the Defense Meteorological Satellite Program (DMSP) satellites. This formation of currently three operating satellites observes both auroral zones in the far UV (115-180 nm) with a 3000 km wide swath and 10 x 10 km (nadir) pixel resolution during each orbit. From the N2 LBH emissions, the precipitating electron energies and fluxes are inferred in the range from 2 keV to 20 keV. We use these observed electron energies and fluxes to calculate auroral ionization rates in the lower thermosphere (≈90-150 km), which have been validated against ground-based electron density measurements from EISCAT. We present an empirical model of these ionization rates, derived for the entire satellite operating time and sorted according to magnetic local time and geomagnetic latitude and longitude. The model is based on geomagnetic and solar flux indices, and a sophisticated noise model is used to account for residual noise correlations. The model is particularly targeted for use in climate models that include the upper atmosphere, such as the aforementioned WACCM-X or EDITh. Further applications include deriving conductances in the auroral region, as well as modelling and forecasting E-region disturbances related to Space Weather.
The use of the descriptive letter “G” in ionogram interpretation is reserved for the condition in which the ionospheric F1-layer critical frequency foF1 exceeds that of the F2 layer (foF2), the latter typically being the layer with maximum electron concentration.
The ionospheric G-condition events observed with the Millstone Hill Incoherent Scatter Radar (ISR) on September 11, 12 and 13, 2005; June 13, 2005; and July 15, 2012 are studied. A set of the main aeronomic parameters responsible for the formation of the daytime mid-latitude F layer is used, retrieved from ionospheric observations with an earlier developed method (Perrone and Mikhailov, JGR, 2018, doi:10.1029/2018JA025762).
The method retrieves thermospheric parameters, namely the atomic oxygen concentration ([O]), molecular nitrogen ([N2]), molecular oxygen ([O2]), the exospheric temperature (Tex), the total solar EUV flux, and the vertical plasma drift (W), from ionospheric observations.
Retrieving thermospheric parameters from ionospheric observations requires observed noontime foF2 and plasma frequencies at 180 km height (f180) for 10, 11, 12, 13 and 14 LT; both may be taken from Millstone Hill Digisonde observations. The method is designed to work with routine ground-based ionosonde observations and cannot be applied during G-conditions, when the F2-layer maximum is not seen. Therefore, the method was modified to work with the whole Ne(h) profiles available from ISR observations. In addition to the five f180 values, we now use the observed Ne at the upper boundary (normally 450-500 km) and a couple of points on the Ne(h) profile controlling its shape.
CHAllenging Minisatellite Payload (CHAMP)/STAR and Gravity field and steady-state Ocean Circulation Explorer (GOCE) neutral gas density observations were included in the retrieval process. It was found that G-condition days were distinguished by an enhanced exospheric temperature and an approximately two-fold decrease of the column atomic oxygen abundance in comparison to quiet reference days, the molecular nitrogen column abundance remaining practically unchanged. The inferred upward plasma drift corresponds to a strong (~90 m/s) equatorward thermospheric wind, presumably related to strong auroral heating on G-condition days (Perrone et al., Remote Sens., 2021, https://doi.org/10.3390/rs13173440).
The European community is increasingly involved in building a common database to share knowledge in the space weather domain. The Space Weather Service Network (SWESNET) develops, manages and distributes high-quality scientific observations, results and models of interest for space weather applications. In the frame of this project, we developed a local heliospheric data centre to host two scientific tools and to generate the related scientific data products. The Coronal Mass Ejection (CME) propagation prediction tool makes use of coronagraph and in-situ data from L1 to forecast the CME evolution. The magnetic effectiveness tool makes use of in-situ data (both from L1 and from planetary missions) to compute the magnetic helicity and forecast how the probes are magnetically connected to the solar corona. These data products will then be made available to the SWESNET community. The local heliospheric data centre, developed at ALTEC, provides extensive datasets and the possibility of designing, implementing and validating algorithms dedicated to space weather forecasting purposes. The data management applied at the data centre is designed to deal with different data products, data formats and availability, taking into account the real-time constraints that are essential for providing forecasting services.
The Krishna basin is the fifth largest river basin in India, shared between four states, the largest of which is Karnataka. Since the states have full authority over water resources within their boundaries, good cooperation between these four states over the water resources of the Krishna river basin is essential for good governance. In the basin, the southwest monsoon provides most of the rainfall in the period June to October (90% of the yearly rainfall). Agricultural areas cover about 76% of the total surface of the basin. With a growing population (currently more than 66 million), growing demand for food production and intense water resources development, the basin is under severe environmental pressure. The Water Accounting Plus (WA+) framework developed by IHE Delft and its partners, FAO and IWMI, is applied to analyse the water resources conditions of three sub-basins of the Krishna located in Karnataka state: Middle Krishna (K2), Ghatprabha (K3) and Malprabha (K4). The irrigated areas in the three sub-basins cover about 41% of the geographical area. The analysed period covers the hydrological years from 2010-2011 to 2017-2018, and results are provided as monthly and yearly spatial maps, water accounting sheets and indicators (at monthly and yearly scale). Inputs for the study are Remote Sensing (RS) global open-access datasets and in-situ measurements provided by the Advanced Centre for Integrated Water Resources Management, Government of Karnataka, for validation purposes. This paper describes the Remote Sensing data analysis and data selection and the methodology used for the study, presents the results, and provides recommendations for water resources management in the basin, including irrigation water use. Several RS datasets are available that estimate precipitation (P) and evapotranspiration (ET). In this study, the best datasets are selected based on: (a) inter-comparison of products, (b) validation using in-situ measurements, (c) yearly water balance assessment and comparison with in-situ discharge measurements, and (d) availability of data in recent years. The CHIRPS dataset was selected for precipitation and SSEBop for actual ET estimates. The RS-ET data show a less pronounced month-to-month and seasonal variability than precipitation, with higher ET values in the monsoon season, when water and energy are abundant, and lower values in the winter months. Reservoirs have the highest total ET (up to 1,500 mm/yr), followed by other water bodies and irrigated areas. A large portion of the three sub-basins is covered by fallow land, which shows extremely low ET values (100-200 mm/yr). These low values seem unrealistic for this climatic zone, where rainfall reaches up to 600 mm/yr. The upstream mountainous areas of the three sub-basins generate most of the runoff (up to about 1,000 mm/yr), while the agricultural areas and the water bodies are net consumers (up to 1,000 mm/yr).
The Krishna basin is an interstate river system that flows through the states of Maharashtra (26% of the area), Karnataka (44%) and Andhra Pradesh (30%). Most of the Krishna basin, about 76%, is covered by agricultural area. Irrigated areas have expanded rapidly in the past 50 years, causing a significant decrease in discharge to the sea. The basin is facing growing challenges in satisfying the growing water demands, and conflicts are arising because of competing demands. A detailed water productivity assessment was carried out in one of the irrigation schemes located in the sub-basins, namely the Narayanpur Left Bank Canal (NLBC) command area. The NLBC network comprises an irrigated area of 451,703 ha according to official statistics. The study was carried out for the Kharif seasons (July till December/January) of the three years 2017 to 2019. The main crops cultivated include sugarcane, cotton, paddy, sorghum, beans and maize, among others. Analyses were also conducted with the aim of identifying the underlying causes behind specific spatial trends of yield and water productivity and reporting the findings. Specifically, the following steps were undertaken: 1. analysis of the variability of biomass, yield and water productivity (WP) in the NLBC, exploring possible correlations with rainfall and land use; 2. analysis of irrigation performance in terms of water availability to the distributary canal service areas, by computing uniformity, water deficit and its impact on crop yield; 3. analysis of the variability of ETa, Relative Water Deficit (RWD) and biomass in the NLBC, exploring possible correlations with distance from canals. Ground data were collected using the Open Data Kit (ODK). The field data collection was carried out in the NLBC scheme in December 2019 and January 2020. All Landsat 8 data acquired between 1 January 2017 and 31 December 2019 were processed to estimate ETa and AGBP. The entire NLBC is covered by 3 Landsat tiles, and a total of 156 Landsat 8 scenes were processed. Although the target season of the study is Kharif, from July to December, we processed all images from 1 January 2017 to 31 December 2019 to provide continuity in the temporal moving window over months/seasons for the gap-filling step that comes at a later stage. All spectral bands, including the thermal bands, were processed in preparation for applying the SEBAL algorithm. The Landsat data preprocessing was performed at a spatial resolution of 30 m, resulting in a total of 7.5 million pixels for each map covering the entire NLBC. A total of 312 maps, each with 7.5 million pixels (2.3 billion pixel-dates), were processed to develop seasonal ETa and biomass maps. We used Landsat Collection 1 Level-1 data belonging to the Tier 1 (T1) inventory. For topography and elevation, 30 m data from NASA's Shuttle Radar Topography Mission (SRTM), acquired from the USGS EROS Data Center, were used. All Landsat data were downloaded from Google Cloud public storage, with the data acquisition automated using the gsutil open-source Python library. The latest available land use maps for the two crop years 2017-18 and 2018-19 were obtained from the National Remote Sensing Centre (NRSC), India. The NRSC map has 60 m spatial resolution and classifies 17 land use types for the 2017-18 cropping year; it was used to inform the crop type mapping process in the project. For crop type mapping for the year 2019, Sentinel 2A/B multi-spectral and Sentinel 1 Synthetic Aperture Radar (SAR) data available from July 2019 to January 2020 were used.
Due to the extensive cloud coverage during the period, we could only use atmospherically corrected Sentinel 2 data from November 2020 and January 2021 for the crop type mapping. For the Sentinel 1 SAR data, a median filter was applied per month to create a monthly time series from July 2019 to January 2020. The entire crop type mapping was implemented in Google Earth Engine (GEE) using Machine Learning (ML): the Random Forest (RF) algorithm was applied to the time series of Sentinel 1 and cloud-free Sentinel 2A/B scenes available on the GEE platform. Fieldwork was carried out in December 2019 and January 2020 to collect sample points representing the different crop types in the study area. In this study, remote sensing techniques were used to map the extent of different crop types in Kharif 2019, to estimate and analyse the yield and water productivity of these crops in Kharif 2019, and to understand the water use dynamics in the NLBC for the three Kharif seasons from 2017 to 2019. Based on the newly developed and validated high-resolution crop type map of Kharif 2019, cotton was the major crop in the NLBC, with an estimated area of 196,278 ha (43% of the total NLBC scheme area). Paddy and red gram were also extensively cultivated in the NLBC, occupying 173,264 ha (38%) and 82,600 ha (18%), respectively, in the study season. In the 2018 Kharif season, there was an increase of around 74% in fallow area compared to the 2017 Kharif season (2017: 98,907 ha; 2018: 171,796 ha). Around 55% of the total NLBC command area was cropped in multiple seasons in 2017, while this dropped to 27% in 2018.
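To illustrate the classification step, the sketch below trains a Random Forest on per-pixel time-series features (e.g. monthly Sentinel-1 backscatter medians stacked with Sentinel-2 band values) against field sample labels, using scikit-learn rather than the GEE implementation the study used; the feature layout and file names are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical sample table: rows = field sample points, columns = stacked
# monthly S1 VV/VH medians and cloud-free S2 band reflectances
X = np.load("features_s1_s2_timeseries.npy")   # (n_samples, n_features)
y = np.load("crop_labels.npy")                 # e.g. cotton, paddy, red gram

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=42)

rf = RandomForestClassifier(n_estimators=200, random_state=42)
rf.fit(X_tr, y_tr)
print(classification_report(y_te, rf.predict(X_te)))
# The fitted model is then applied per pixel to produce the crop type map.
```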
Several studies have demonstrated that the interferometric phase, when properly calibrated, is related to the soil moisture change in the time span between the two SAR acquisitions. The phase calibration requires the mitigation of the atmospheric phase screen and the removal of topographic effects, and it assumes no deformation. Recently, a new approach to estimating the soil moisture change, based on the concepts of bi-coherence and phase triplets, was proposed (de Zan et al., 2014). The closure phase is the phase resulting from the cyclic product of three interferograms. The main advantage of using the closure phase instead of the interferometric phase is that closure phases are immune to all simple propagative effects such as target displacement, atmospheric propagation delays and topographic effects.
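In practice, the closure phase of an image triplet can be computed directly from three coregistered single-look complex (SLC) images: form the three interferograms, multilook them, and take the phase of their cyclic product. The sketch below does this with numpy; the array names and the 16x16 multilook window (matching the processing described later) are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def multilook(c, win=16):
    """Local complex average (multilooking) over a win x win window."""
    return uniform_filter(c.real, win) + 1j * uniform_filter(c.imag, win)

def closure_phase(slc1, slc2, slc3, win=16):
    """Closure phase of the cyclic interferogram product 12 * 23 * conj(13).

    slc1..slc3 : coregistered complex SLC images of the same scene.
    Propagative terms (motion, atmosphere, topography) cancel in the
    cyclic product, isolating, e.g., moisture-driven decorrelation phase.
    """
    i12 = multilook(slc1 * np.conj(slc2), win)
    i23 = multilook(slc2 * np.conj(slc3), win)
    i13 = multilook(slc1 * np.conj(slc3), win)
    return np.angle(i12 * i23 * np.conj(i13))
```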
In this work we investigate the relation between the closure phases of interferometric C-band SAR images and the time-varying soil moisture. In particular, we combine three interferograms obtained from three SAR images of the same area acquired at different times to derive maps of bi-coherence and phase triplets. A scattering model is used to estimate the time series of soil moisture from the sequence of phase triplet images.
The study area is located on a farm approximately 20 km east of Lisbon, Portugal, close to the Tagus River estuary. A set of five soil moisture sensors was deployed and set to record soil moisture on an hourly basis, providing a 5-month-long time series of in-situ measurements between 3 January and 15 May 2019. The in-situ measurements were transformed to phases using the scattering model. To simplify the experimental conditions, we selected a flat area, to avoid artefacts due to topography, and a bare soil parcel in its agricultural fallow period, to avoid volumetric surface scattering and changes in roughness, given the well-known high sensitivity of C-band to vegetation growth and to surface roughness changes caused by ploughing, tilling or harrowing.
Two sets of interferograms were computed from 19 C-band Sentinel-1 A/B images acquired between 8 January 2019 and 5 May 2019 in Interferometric Wide swath (IW) Single Look Complex (SLC) mode, in vertical-vertical (VV) polarization only, using the ascending (track 45) and descending (track 52) passes. Each time series of SAR images was interferometrically processed, combining six images for each reference image with temporal baselines from a minimum of six days to a maximum of 30 days. The interferograms were processed according to the following processing chain: a) interferometric stack generation with all possible combinations of the images; b) coregistration using precise orbits and an external DEM; c) interferogram computation; d) Earth curvature and topographic effect removal; e) coherence estimation using a window of 3 pixels in azimuth and 3 in range; f) interferogram and coherence terrain correction and geocoding to the WGS84 UTM coordinate reference system. The interferograms were multi-looked with a 16x16 pixel window. For each pass, a system of equations comprising all computed interferograms was solved to estimate the closure (or decorrelation) phases; for each pixel, the system has 100 equations and 70 decorrelation phases. The decorrelation phases of the six-day temporal baseline interferograms were compared with the model phases derived from the in-situ soil moisture changes.
The results show a linear correlation between the in-situ-derived model phases and the closure phases. The correlation coefficients were R2=0.76 and R2=0.86 for the ascending and descending passes, respectively. However, a scale effect of the closure phases was found when compared with the model phases: the scale is about 10% for both passes, meaning that the estimated closure phases underestimate the soil moisture changes. The high correlation between the closure phases and the model phases indicates that it is possible to derive soil moisture variations using C-band Sentinel-1 A/B data, as long as a scattering model is provided. The most promising result is the clear linear correlation found between the modelled (derived from in-situ measurements) and observed phase triplets, although a saturation effect was found that hinders their use in the case of very dry soils.
Since the second half of the nineteenth century, robust growth in the world economy, including both the industrial and agricultural sectors, has led to aggressive production and utilization of agriculture-based chemicals, which have often had calamitous effects on the environment. Injudicious use of weedkillers/herbicides adds organic pollutants to agricultural soils, with cascading future repercussions.
Weeds, defined as undesirable plant species, grow alongside the crops and consume nutrients intended only for crop consumption. This inhibits crop growth and harbours threats in terms of viral crop diseases and problematic insects. Weeds are most damaging to crop yield when they hold some advantage over the crop. Mapping and removal of weeds in the early stages of weed emergence is not only less complicated but also less time- and cost-consuming than in later crop stages, where there is high spectral overlap and heterogeneity. Therefore, there is a need for a mechanism that facilitates the identification of crops and weeds in early stages, so that, irrespective of temporal change, crops and/or weeds can be segmented and classified across all phenological stages.
Various publications deal with the discrimination of crops and weeds in digital images based on spectral, geometric, texture and height parameters. Although some commercial solutions target the early stages of weed growth, there is a need for less computationally expensive solutions that can facilitate and augment segmentation. This creates a cascading requirement for tools capable of periodically monitoring crops for precise weed-removal interventions.
If the spraying of weedicides could be limited to just the weed-affected areas, the ecological harm in terms of soil contamination and over-spraying onto crop regions could be reduced. This can be achieved when the weed information in aerial images is supported by geolocation information at millimetre-level accuracy. Published assessments have shown that crop and weed segmentation using machine learning and deep learning algorithms such as ResNet, DenseNet, U-Net, DeepCluster, DeepLab and VGGNet has limitations in handling spectral overlap and in performance in heterogeneous environments.
The current study is focused on facilitating crop segmentation across the early stages of cauliflower in a heterogeneous environment with varying lighting conditions, different soil moisture levels, occlusion and infestation by weeds. The sensors in use are a DJI Phantom IV Pro for capturing true-colour images and a Micasense RedEdge-M for capturing multi-spectral images of the cauliflower crop field at different time intervals. The study area is a cauliflower field located at the Department of Agriculture, University of Naples Federico II, Portici, Italy. The classes of interest in the imagery are weeds and cauliflower leaves. The analysis of these images was done by making use of several band ratios, such as the Normalized Difference Vegetation Index (NDVI) and the Modified Soil Adjusted Vegetation Index (MSAVI2), some image super-resolution techniques, and morphological operations for the optimization of segmentation outputs. The preliminary results have crop segmentation masks as the end output. The subsequent step would be automatic annotation: current methods require annotating images manually, which is time-consuming and tedious. The aim is therefore to provide a tool that improves the quality of the image in terms of level of detail and crop characteristics in the presence of weeds.
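As a simple rendering of the index-plus-morphology approach described above, the sketch below builds a vegetation mask from NDVI, cleans it with morphological opening, and drops small components. The threshold value, band layout and size cut-off are assumptions; separating the crop mask from the weed mask would require further features.

```python
import numpy as np
from scipy import ndimage

def vegetation_mask(red, nir, ndvi_thresh=0.4, min_pixels=50):
    """Segment vegetation from a multispectral frame via NDVI.

    red, nir    : 2-D reflectance arrays from the multispectral camera
    ndvi_thresh : NDVI cut-off separating vegetation from soil (assumed)
    min_pixels  : drop connected components smaller than this
    """
    ndvi = (nir - red) / (nir + red + 1e-9)
    mask = ndvi > ndvi_thresh
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))  # despeckle
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.nonzero(sizes >= min_pixels)[0] + 1)
    return keep   # boolean mask of vegetation (crop + weeds)
```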
There is also a need to understand the spectral overlap between weeds and cauliflower leaves in order to quantify the correlation between the two. The final goal is to improve the recognition of weeds and to ameliorate the classification of the underlying weed species. Once the crops are segmented in the early stages of weed growth, these results can serve as a prerequisite for training and annotation, and for mapping weed-affected areas for the mechanized spraying of herbicides by drones and/or rovers. Not only will this provide huge relief from an ecological perspective, but it will also be beneficial for any project from an economic point of view.
Agriculture represents a cornerstone of alpine economies that is endangered in a warming and drier climate. To support a more resilient agricultural sector in the future, various systems exist to better manage water resources by quantifying irrigation water requirements. These systems usually estimate irrigation water requirements either as the difference between effective rainfall and evapotranspiration, or using water-balance hydrologic models; efficiency parameters are used to take into account water losses in distributing water to the crops. An open issue in such methods, which often rely on static crop coefficients, is how to dynamically keep track of crop evolution through time, e.g., in the form of crop characteristics like mowing and wetting-drying cycles.
In this work, we discuss a method developed for quantifying irrigation water requirements in the Valle d'Aosta region with EO data. The method is based on a first-guess estimate of evapotranspiration from the Penman-Monteith equation and an EO-based crop coefficient to convert the first guess into actual, crop-specific evapotranspiration. The EO-based crop coefficient is derived from a Random Forest model combining site elevation, day of the year, and Sentinel-2 NDVI. Through EO data, this approach can better track dynamic crop characteristics and thus provide more responsive estimates of evapotranspiration than static crop coefficients.
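A minimal sketch of this idea, assuming synthetic training data: a Random Forest regression of the crop coefficient Kc on site elevation, day of year and NDVI, whose prediction then scales a Penman-Monteith reference evapotranspiration. Feature ranges, hyper-parameters and numbers are placeholders, not the project's configuration.

```python
# Hedged sketch: Random Forest crop coefficient from (elevation, DOY, NDVI),
# trained on synthetic data for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(300, 2500, n),   # site elevation [m]
    rng.integers(1, 366, n),     # day of year
    rng.uniform(0.1, 0.9, n),    # Sentinel-2 NDVI
])
kc = 0.2 + 0.9 * X[:, 2] + rng.normal(0, 0.05, n)   # synthetic Kc target

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, kc)

et0 = 4.2                                            # reference ET [mm/day], illustrative
eta = model.predict([[1200, 180, 0.65]])[0] * et0    # crop-specific actual ET estimate
print(eta)
```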
This method was compared with a traditional approach in which the crop coefficient is estimated from literature data, and showed significant deviations in the estimated annual irrigation water requirement. Results from this EO-based approach will also be compared with estimates of irrigation water requirements from a spatially distributed hydrologic model that includes irrigation (Irri-Continuum). The method is currently being converted into a prototype for potential large-scale deployment as a real-time, operational tool.
Detecting irrigated areas, volumes and timing is crucial for managing water resources efficiently and monitoring agricultural practices, and remote sensing of soil moisture is undoubtedly a relevant tool for this purpose. Currently, most studies focus on the scale of the agricultural plot to optimise agricultural yields and use high-resolution satellite measurements such as Sentinel-1 and Sentinel-2. At the same time, a continental- or global-scale estimate of the water volumes used for irrigation is important for monitoring groundwater resources and forecasting their evolution over several years. In this case, satellites such as SMOS or SMAP, characterised by a lower spatial resolution, are particularly effective tools for characterising irrigated surfaces, volumes and dates.
In this study, SMOS and SMAP soil moisture measurements are used in a methodology adapted from the PrISM approach developed in Pellarin et al. (2009, 2020) and Román-Cascón et al. (2017). The PrISM (Precipitation Inferred from Soil Moisture) methodology uses a simple precipitation/soil moisture model to derive the temporal evolution of surface soil moisture from a satellite precipitation product. Assimilating SMOS/SMAP soil moisture measurements generates soil moisture maps (every 3 h) over Africa, the Arabian Peninsula and the Middle East. The resulting maps are, by construction, unable to represent flooding or irrigation events. Thus, comparing SMOS/SMAP satellite measurements with the simulated soil moisture maps allows the automatic detection of areas and time periods of flooding or irrigation.
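The detection principle can be sketched as follows: because the precipitation-driven model cannot produce irrigation or flooding peaks, persistent positive residuals of observed (SMOS/SMAP) minus simulated soil moisture flag such events. The threshold and minimum run length below are illustrative choices, not the values used in PrISM.

```python
# Hedged sketch of residual-based irrigation/flooding detection.
import numpy as np

def detect_events(sm_obs, sm_sim, thresh=0.05, min_run=3):
    """Flag time steps where observations exceed the rain-only simulation.

    sm_obs, sm_sim : 1-D arrays of volumetric soil moisture [m3/m3]
    thresh         : residual above which the model underestimates moisture
    min_run        : minimum consecutive flagged steps to keep an event
    """
    flag = (sm_obs - sm_sim) > thresh
    out = np.zeros_like(flag)
    start = None
    for i, f in enumerate(np.append(flag, False)):   # sentinel ends open runs
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start >= min_run:
                out[start:i] = True                  # keep sufficiently long runs
            start = None
    return out
```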
This methodology has recently been tested over the whole of Africa, the Arabian Peninsula and the Middle East at spatial resolutions of 0.25° and 0.10° using SMOS and SMAP measurements. It unambiguously identifies areas and periods of intense irrigation (Morocco, the Algerian and Tunisian coasts, the Khartoum region, Iraq, South Africa), as well as very small irrigated areas in central Algeria, Egypt, Saudi Arabia and the southern part of South Africa (see attached figure). Of course, the method also highlights areas of natural flooding such as the Inner Niger Delta in Mali, Lake Chad, the Okavango Delta (Botswana) and Etosha (Namibia).
For the detected areas and periods, it is then possible to estimate irrigation volumes and dates by introducing water into the model so as to reproduce the temporal variations of the SMOS and SMAP measurements. In this presentation, we will show the evolution of irrigated areas and volumes over the period 2010-2021 for Africa, the Arabian Peninsula and the Middle East.
In cotton, an optimal balance between vegetative and reproductive growth leads to high yields and water-use efficiency. An abundance of water and nutrients results in heavy vegetative growth that promotes boll rot and fruit abscission, making a cotton crop difficult to harvest. Estimating vegetation variables such as the crop coefficient (Kc), Leaf Area Index (LAI), and crop height using satellite remote sensing can improve irrigation management and the application of growth inhibitors to regulate vegetative growth and optimize yield. Optical and Synthetic Aperture Radar (SAR) satellite imagery is a useful data source since it provides synoptic coverage at fixed time intervals and captures in-field spatial variability better than point measurements. Since clouds limit optical observations at times, the combination with SAR can provide information during cloudy periods. This study utilized optical imagery acquired by Sentinel-2 and SAR imagery acquired by Sentinel-1 over cotton fields in Israel. The Sentinel-2-based vegetation indices best suited for cotton monitoring were identified, and the most robust Sentinel-2 models for Kc, LAI, and height estimation achieved R2=0.879, RMSE=0.0645 (MERIS Terrestrial Chlorophyll Index (MTCI)); R2=0.9535, RMSE=0.8 (MTCI); and R2=0.8883, RMSE=10 cm (Enhanced Vegetation Index (EVI)), respectively. Additionally, a model based on the output of the SNAP biophysical processor LAI estimation algorithm was superior to the empirical models of the best-performing vegetation indices (R2=0.9717, RMSE=0.6). The most robust Sentinel-1 models were obtained by applying an innovative local incidence angle normalization method, with R2=0.7913, RMSE=0.0925; R2=0.6699, RMSE=2.3; and R2=0.6586, RMSE=18 cm for Kc, LAI, and height estimation, respectively. This work paves the way for future studies to design decision support systems for better irrigation management in cotton, even at the sub-plot level, by monitoring the heterogeneous development of the crop from space and adapting the irrigation accordingly to reach the target development at different stages of the season.
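As an illustration of the empirical-model step reported above, the sketch below regresses a vegetation index (here MTCI = (B6 - B5)/(B5 - B4) from Sentinel-2 red-edge bands) against ground-truth Kc and reports R2/RMSE; the synthetic data and the linear form are assumptions for demonstration, not the study's fitted models.

```python
# Hedged sketch: fit and score an empirical VI-to-Kc model on synthetic data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
mtci = rng.uniform(1.0, 5.0, 200)                    # MTCI samples (synthetic)
kc = 0.15 + 0.22 * mtci + rng.normal(0, 0.05, 200)   # synthetic Kc "truth"

model = LinearRegression().fit(mtci[:, None], kc)
pred = model.predict(mtci[:, None])
print("R2 =", r2_score(kc, pred),
      "RMSE =", np.sqrt(mean_squared_error(kc, pred)))
```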
The Mediterranean region (MR) includes the largest semi-enclosed sea on Earth and is an area of both exceptional biodiversity value and intense and increasing human activities. The MR has a unique character, lying in a transition zone between the temperate, cold mid-latitudes and the tropics, subject to several large-scale atmospheric oscillations/teleconnection patterns. This results in high temporal climate variability, with periods of excess water and widespread floods followed by long drought episodes and heat waves, making the region highly vulnerable to hydrological extremes. Therefore, resolving the water cycle over the MR is central to protecting people and guaranteeing water and food security.
Previous efforts to resolve the water cycle in the MR have mainly used model outputs or reanalysis and in situ data networks. In this context, the European Space Agency (ESA) has supported significant scientific efforts to advance the way we can observe and characterise the Mediterranean water cycle from satellites through the Watchful, Irrigation+, and WACMOS-Med projects. For instance, WACMOS-Med considered several novel techniques to estimate the different components of the Mediterranean water cycle from satellite observations while minimising the residual errors. WACMOS-Med provided a rational assessment of the limitations of current satellite technology in characterising the different components of the water cycle in a consistent and accurate manner. However, limitations associated with spatial and temporal resolution, accuracy, uncertainty definition and inter-product consistency hinder the practical use of the products for operational applications in several domains (e.g., agriculture, water resource management, hydro-climatic extremes and geo-hazards) over the MR.
Here we present a new ESA project, “4DMED-Hydrology”, which aims at developing an advanced, high-resolution, and consistent reconstruction of the Mediterranean terrestrial water cycle by using the latest developments of Earth Observation (EO) data, such as those derived from the ESA-Copernicus missions. In particular, by exploiting previous ESA initiatives, 4DMED-Hydrology intends 1) to show how this EO capacity can help to describe the interactions between complex hydrological processes and anthropogenic pressure (often difficult to model) in synergy with model-based approaches; and 2) to exploit synergies among EO data to maximize the retrieval of information on the different water cycle components (i.e., precipitation, soil moisture, evaporation, runoff, river discharge) to provide an accurate representation of our environment and advanced fit-for-purpose decision support systems in a changing climate for a more resilient society.
We organize the project in four consecutive steps: 1) developing high-resolution (1 km, daily, 2015-2021) EO-based datasets of the different components of the water cycle by capitalizing on Sentinel missions’ capabilities and previous ESA projects; 2) merging these datasets to obtain land water budget closure and provide a consistent, high-quality merged dataset; 3) addressing major knowledge gaps in water cycle science, enhancing our fundamental understanding of the complex processes governing the role of the MR in the Earth and climate system; 4) transferring novel science results into solutions for society via four user-oriented case studies focusing on flood and landslide hazard, drought and water resources management, involving operational agencies, public institutions and economic operators in the MR. 4DMED-Hydrology will focus on four test areas, namely the Po River basin in Italy, the Ebro River basin in Spain, the Hérault River basin in France and the Medjerda River basin in Tunisia, which are representative of the climates, topographic complexity, land use, human activities, and hydrometeorological hazards of the MR. The developed products will then be extended to the entire region. The resulting EO-based products (i.e., experimental datasets, EO products) will be distributed in an Open Science catalogue hosted and operated by ESA.
Soil moisture (SM) is a critical variable for understanding the climate-soil-vegetation system. SM data serve different disciplines depending on the available spatial scale: climatological and meteorological studies employ SM data at a global, coarse scale and hydrological studies employ SM data at catchment level, while administrative and agricultural applications need SM data at field and sub-field scale (tens to hundreds of meters).
We propose a new approach to obtain a very high spatio-temporal resolution surface soil moisture (SSM) product (20 m, every 3 days) from remotely sensed satellite data only. The method employs a modified version of the DISPATCH (Disaggregation based on Physical And Theoretical scale Change) algorithm to disaggregate the SMAP SSM product to a 20 m spatial resolution through the use of sharpened Sentinel-3 Land Surface Temperature (LST) data. The 20 m resolution was reached thanks to high-resolution LST maps obtained by sharpening the Sentinel-3 1 km daily LST with Sentinel-2 20 m reflectance bands, which overcame the absence in the Sentinel constellation of a fine-resolution thermal sensor.
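A strongly simplified sketch of the disaggregation idea: a soil-evaporative-efficiency (SEE) proxy computed from the high-resolution LST redistributes the coarse SMAP soil moisture within each low-resolution pixel. The actual DISPATCH algorithm uses a physically based SEE/soil-moisture relationship; the linear, mean-preserving redistribution below is an assumption for illustration only.

```python
# Hedged sketch of LST-driven soil moisture disaggregation (mean-preserving).
import numpy as np

def disaggregate(sm_coarse, lst_hr):
    """sm_coarse : SMAP SSM of one coarse pixel [m3/m3]
    lst_hr    : 2-D array of sharpened 20 m LST within that pixel [K]"""
    see = (lst_hr.max() - lst_hr) / (lst_hr.max() - lst_hr.min() + 1e-6)
    return sm_coarse * see / see.mean()   # cooler (wetter) pixels get more moisture

lst = np.array([[305.0, 310.0],
                [300.0, 315.0]])
print(disaggregate(0.20, lst))
```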
First, the proposed high-resolution SSM product was validated against available in-situ data from two different fields; second, it was compared with two coarser 1 km SSM products developed with the same disaggregation technique but using LST from Sentinel-3 and MODIS. The correlation against in-situ data showed a general improvement in terms of Pearson's correlation coefficient (R) for the proposed high-resolution product with respect to the two 1 km products.
The improvement was especially noticeable during the summer season, when field-specific irrigation practices were better captured at high resolution: consistently higher SSM values could be observed during the warmest months for irrigated fields compared to rainfed fields. The capability of the product to recognize irrigated fields was studied further by comparing the distribution of SSM in the new high-resolution product against the information contained in the Catalan Geographic Information System for Agricultural Parcels (SIGPAC), which records the presence of irrigation for each field.
Additionally, a sub-field scale analysis was performed using all the in-situ sensors installed at the two available locations. The validation of SSM at sub-field scale showed an improvement in the correlation with in-situ data with respect to the lower-resolution products.
The lack of consistent observations of the actual use of water resources in agriculture is hindering the full implementation of the Water Framework Directive (WFD).
Although statistical data sources can give a picture at the national scale, they are often not exhaustive at regional and local scales. The Italian Ministry of Agriculture has adopted specific actions (Decree 31/07/2015) for monitoring irrigated areas and volumes on a regular basis to improve compliance with the WFD. For this purpose, Earth Observation data from the ESA Sentinel-2 satellites represent a very valuable source of information to fill the gap between research and application in the assessment of water use in agriculture.
This study illustrates the procedures developed in the context of the INCIPIT project for assessing irrigated areas and the corresponding volumes. The objective of the INCIPIT project (PRIN MIUR 2017) is to develop a methodological framework for supporting and planning irrigation water use at different spatial scales, and under different hydraulic and meteorological conditions, in six Italian regions (Apulia, Campania, Emilia-Romagna, Lombardy, Sardinia and Sicily).
These procedures exploit the full spectral range of Sentinel-2 data, from visible to shortwave infrared, as well as the temporal domain, thanks to the high revisit frequency of the two satellites, to monitor the development status of crops.
In detail, different machine learning algorithms have been tested (Support Vector Machines, single decision trees (DTs), Random Forests, Boosted DTs, Artificial Neural Networks, and k-Nearest Neighbours) for mapping irrigated and non-irrigated areas from dense temporal series of vegetation indices [1] and from the surface water status derived from the SWIR bands with the OPTRAM model [2].
The assessment of the irrigation volumes has been carried out using the IRRISAT© methodology, based on the Penman-Monteith equation, adapted with canopy parameters, namely crop height, Leaf Area Index, and surface albedo [3], also derived from Sentinel-2 data. Results will be presented from case studies for the 2019 and 2020 irrigation seasons in two Irrigation and Land Reclamation Consortia located in the Campania and Sardinia regions, where the accuracy of the proposed procedures has been assessed with ground-truth data on actually irrigated areas and measured irrigation volumes at field and district scales.
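The volume-estimation principle can be illustrated as follows: crop evapotranspiration as reference ET times a canopy-adjusted crop coefficient, with the net requirement summed over the season and the irrigated area. All values are illustrative; this is not the IRRISAT© implementation.

```python
# Hedged sketch: seasonal net irrigation volume from Kc * ET0.
import numpy as np

et0 = np.array([3.8, 4.5, 5.1, 5.6, 4.9])        # reference ET [mm/day]
kc = np.array([0.4, 0.7, 1.05, 1.1, 0.9])        # Sentinel-2-derived crop coefficient
eff_rain = np.array([0.0, 2.0, 0.0, 0.0, 1.0])   # effective rainfall [mm/day]

etc = kc * et0                                    # crop ET [mm/day]
net_mm = np.clip(etc - eff_rain, 0, None).sum()   # net irrigation requirement [mm]
area_ha = 12.0                                    # irrigated area
volume_m3 = net_mm * 10 * area_ha                 # 1 mm over 1 ha = 10 m3
print(volume_m3)
```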
[1] Falanga Bolognesi, S., Pasolli, E., Belfiore, O. R., De Michele, C., & D’Urso, G. (2020). Harmonized Landsat 8 and Sentinel-2 time series data to detect irrigated areas: An application in Southern Italy. Remote Sensing, 12(8), 1275.
[2] Sadeghi, M., Babaeian, E., Tuller, M., & Jones, S. B. (2017). The optical trapezoid model: A novel approach to remote sensing of soil moisture applied to Sentinel-2 and Landsat-8 observations. Remote Sensing of Environment, 198, 52-68.
[3] Vuolo, F., D’Urso, G., De Michele, C., Bianchi, B., Cutting, M. 2015. Satellite-based Irrigation Advisory Services: A common tool for different experiences from Europe to Australia. Agricultural Water Management, 147, 82–95.
Under the expected population growth and climate projections, a large part of the world's population is going to face increasing water scarcity and food insecurity. The higher food demand resulting from a growing global population requires an increase in food production, which can be met either by expanding the area under cultivation or by intensifying the use of already existing agricultural land. At the global scale, these options are assumed to have the potential to fulfil growing food needs. However, regions with unsuitable food production conditions (e.g. unsuitable climate, soil, and relief) might have to increase their food production beyond a sustainable level or rely on food imports to ensure food security for their population.
Iran is a prominent example of such conditions: the country has faced rapid population growth under unfavorable political conditions, which promoted the paradigm of self-sufficiency in the production of the main staples, such as wheat and rice. This development has been accompanied by decades-long embargo policies against the country preventing extensive food imports. Thus, the country has significantly increased local food production over the past 30+ years, although large parts of Iran are unsuited, or of limited suitability, for agriculture. This has led to a high and ever-increasing water demand and a very unsustainable use of renewable water sources. At present, Iran uses more than 80% of its total renewable freshwater resources, while 40% is considered the limit for ensuring environmental sustainability.
Major parts of Iran experience very limited water availability. Most of the country is under arid (65%) or semi-arid (25%) climate conditions, and 75% of the total precipitation falls during the winter season, when it is not needed by the agricultural sector. Mesgaran et al. (2017) rate almost 80% of Iran's land as (very) poorly suited or unsuited for cropping. For thousands of years people have had to cope with this situation, and the Persians were once known for their advanced and sustainable water management adapted to local conditions, e.g. building subsurface qanats to efficiently transfer water from the mountains to the adjacent plains and valleys.
However, in the last few decades rapid socioeconomic development and a climatic shift towards drier conditions have changed this situation completely. Iran's population has grown rapidly from approx. 20 million in 1960 to more than 80 million today, with 70% living in urban areas (27% in the 1950s), creating high pressure on regionally available water resources. From the 1960s on, Iran started a major modernization project, replacing traditional sustainable irrigation techniques (e.g. qanats) with electric pumps for groundwater exploitation. At the same time, hundreds of dams were built, with more to come, and large water transfer projects across major drainage divides were put into place to meet the growing water demand of a steadily increasing population.
As a result, the agricultural sector is responsible for approx. 90% of today's annual water consumption in Iran and thus drives the country's unsustainable water use. Approximately 50% of the water used for agriculture comes from tapping underground aquifers, making Iran one of the top groundwater miners in the world and resulting in a severe decline of groundwater levels throughout the country.
In our research we analyze the interrelations between vegetation growth, land cover dynamics, and natural water availability in Iran with the goal of evaluating and quantifying vegetation growth related to agricultural land use at different scales from country-wide to regional levels. For this purpose, we use globally available EO products (Sentinel-2, MODIS, GRACE(-FO), ESA annual CCI land cover, Copernicus Global Land Service Land Cover 100m) and global scale reanalysis climate model data (ERA5-Land). This work has been carried out in the frame of the SaWaM project (Seasonal water resources management in semi-arid regions: Transfer of regionalized global information to practice) within the GRoW initiative (Global Resource Water, bmbf-grow.de/en) funded by the German Ministry for Education and Research (BMBF).
Vegetation growth, its temporal dynamics and trends are analyzed with satellite remote sensing data of different scales and periods, using MODIS time series for long-term analyses at national scale and Sentinel-2 time series for regional analyses of higher spatiotemporal detail covering the last 5 years. In this context we have explored and developed multiple approaches aimed at differentiating between irrigated and rainfed agriculture based on vegetation growth dynamics and meteorological water availability derived from ERA5-Land reanalysis data (i.e. total precipitation, potential evaporation, temperature, and a derived aridity index).
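A minimal sketch of the aridity-based differentiation, assuming ERA5-Land monthly means in an xarray Dataset with variables "tp" (total precipitation) and "pev" (potential evaporation); the variable names, file name and the 0.2 threshold are assumptions for illustration.

```python
# Hedged sketch: aridity index P/PET from ERA5-Land to flag vegetation growth
# that likely depends on irrigation rather than rainfall.
import xarray as xr

ds = xr.open_dataset("era5_land_monthly.nc")   # hypothetical file
p = ds["tp"].sum("time")                       # accumulated precipitation
pet = (-ds["pev"]).sum("time")                 # ERA5 potential evaporation is negative downward
aridity = p / pet
likely_irrigated = aridity < 0.2               # observed greenness here would need extra water
```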
Methodological developments have been accompanied by field work, in collaboration with our local Iranian partner, the Khuzestan Water and Power Authority (KWPA), conducted within the Karun Basin, which hosts Iran's longest river and the largest by discharge. So far, five dams have been built on the main Karun river alone to generate hydroelectric power and provide flood control. Thanks to this collaboration, we have been able to validate our remote sensing based results and evaluate the applicability of existing global data products at a regional scale, focusing on selected areas within the Karun Basin.
At national scale we analyzed the spatiotemporal development of agricultural areas based on remotely sensed vegetation growth dynamics, hydrological water storage dynamics (GRACE(-FO)) and meteorological conditions (ERA5-Land). Despite increasing hydrometeorological water scarcity, Iran experienced an agricultural expansion of approx. 27,000 km² (9%) between 1992 and 2019 and an intensification of cultivation within existing agricultural areas, indicated by significant positive vegetation trends within 28% of the existing croplands (i.e. approx. 48,000 km²).
This agricultural intensification is particularly evident in the largely cultivated relatively wetter northwestern basins of Iran under mainly semi-arid conditions, where more than 95% of the observed significant agricultural vegetation trends are positive. Besides these wetter and thus more suitable areas for agricultural use, positive vegetation trends are also evident in the central and southeastern parts of Iran under (hyper-)arid conditions, where limits in natural surface water availability and high evapotranspiration rates hinder or prevent natural vegetation growth unless intense irrigation is put into place. Overall, the results show a substantial agricultural expansion and intensification during the last two decades despite decreasing hydrometeorological water availability and a cultivation of (hyper-)arid land despite its natural unsuitability for vegetation growth.
Besides this main tendency towards intensified agriculture, degrading agriculture (i.e. cropland with negative vegetation trends) could also be observed. In total, 6% of all agricultural areas in Iran are characterized by a significant negative vegetation trend. Moreover, we analyzed the vegetation trends against aridity and irrigation intensity, the latter represented by a proxy defined as the probability that the observed vegetation growth requires additional non-meteorological water supply. These results reveal an increasing share of negative agricultural vegetation trends towards more arid conditions and higher irrigation intensities.
Among intensively irrigated regions, the share of areas with significant negative vegetation trends amounts to approx. 50% and 70% under arid and hyper-arid conditions, respectively. These findings suggest that in these dry regions unsustainable water use has reached a level at which cultivation itself becomes unsustainable, eventually resulting in reduced agricultural intensity or even abandoned fields (e.g. in the region around the city of Isfahan). Our results also show that in the central basins of Iran, for up to 30% of the agricultural area, vegetation growth dynamics are highly positively correlated with the decreasing total water storage (TWS), indicating that reduced water availability (i.e. reduced groundwater storage due to irrigation) results in decreasing cultivation intensity as a long-term consequence.
The results demonstrate the potential of satellite remote sensing time-series analysis over large areas, combined with meteorological parameters, for assessing the sustainability of agricultural land use relative to available water resources under (semi-)arid climatic conditions across the whole of Iran. They offer unprecedented spatiotemporal detail, enabling subsequent analyses at different spatial and temporal scales as well as their continuation into the future, relying increasingly on higher-resolution Sentinel-2 data, whose global coverage enables the transfer of the developed approach to other (semi-)arid regions worldwide.
Soil moisture is one of the most relevant geophysical variables, playing an important role in a wide range of applications such as climatology, agricultural practice and drought monitoring. In 2010 it was recognised as an Essential Climate Variable (ECV) by the Global Climate Observing System (GCOS), and its importance has been stressed in several scientific projects and missions. Over the years, a major effort has been dedicated to estimating the water content of the soil from airborne and satellite SAR (Synthetic Aperture Radar) data, which ensure high spatial resolution in any sunlight condition and almost any weather condition. A contribution to this effort also derives from the increasing availability of data collected at different frequency bands and polarizations (i.e., dual-pol and full-pol data).

The objective of this work is to compare the performance offered by data acquired at C-band and L-band in order to evaluate their sensitivity to soil moisture in the context of a retrieval process. The analysis is carried out on a time series of dual-pol Sentinel-1A (C-band) and full-pol SAOCOM-1A (L-band) acquisitions over an agricultural area located near the city of Monte Buey in the Córdoba Province, Argentina. This area belongs to the core validation sites of the SMAP (Soil Moisture Active Passive) mission and is a validation site for the SAOCOM soil moisture products generated by the Argentinian Space Agency (CONAE). During the 2019-2020 season, a field campaign was conducted by CONAE in this region simultaneously with the SAOCOM-1A acquisitions, providing important in situ measurements, such as soil moisture content and vegetation-related variables (plant height, growth stage), for several regions of interest (i.e., crop fields).

In this work, SLC (Single Look Complex) quad-pol data collected between October 2019 and February 2020 during descending SAOCOM-1A overpasses were considered, with a revisit time of sixteen days and a resolution of 10 m x 6 m in ground range and azimuth, respectively. A time series of Sentinel-1A data was also considered, with a revisit time of twelve days; in this case, GRD (Ground Range Detected) data with a resolution of 10 m x 10 m in ground range and azimuth were used. Since the Sentinel-1A acquisition days differ from those of the SAOCOM mission, the study relied on soil moisture data recorded at stations of a permanent network in addition to the in-situ data collected during the 2019-2020 field campaign. The temporal evolution of the backscattering coefficient sigma-nought at different polarizations was extracted from the calibrated and geocoded data for the different agricultural fields and compared to soil moisture variations. The sigma-nought trend over corn fields has also been interpreted with the aid of the electromagnetic model developed at Tor Vergata University, a fully polarimetric model that allows separating the contributions of the different scattering sources in the vegetation canopy. A comparison of simulated and measured polarimetric signatures will be presented. In addition, the evolution of the backscattering coefficient was related to other parameters such as the Radar Vegetation Index and NDVI variations from the Sentinel-2 satellite.
The study provides insight into the possible synergy of a long-term stack of C-band and L-band radar data for soil moisture monitoring, made possible by the systematic acquisitions of SAOCOM-1A and Sentinel-1 over a site well equipped with ground-truth data. This is of great importance in view of the future possibilities offered by the launch of the NASA-ISRO NISAR mission (L-band) and especially the ESA ROSE-L mission, which will operate in a synchronous manner with Sentinel-1.
Water abstractions for irrigation are estimated to account for 70% of total water abstractions at the global scale, and the sector has a significant impact on the water cycle. These impacts are rarely taken into account in hydrological modeling studies, and the models that do incorporate irrigation processes typically require maps of irrigated areas as input. However, the spatial and temporal mapping of irrigated areas is still incomplete in many places. A number of global maps exist, but these products have a coarse resolution and represent a snapshot in time, while irrigated area extent is dynamic across space and time depending on factors such as water availability and weather. For many applications in water and agriculture, such as basin-scale modeling, dynamic irrigation maps covering the required area are needed for better estimation of abstracted water.
The main objective of this study is to develop an approach to estimate the dynamic irrigated area in a hydrological year using a remote sensing driven, pixel-based soil water balance model (PixSWaB, Seyoum et al., in preparation). The model has low input data requirements, which are satisfied by remote sensing and global datasets, a limited number of calibration parameters, and can be run at any chosen spatial resolution. It computes green evapotranspiration (ETgreen) by tracking the amount of water available for ET in the soil from precipitation; blue evapotranspiration (ETblue) is then obtained as the difference between remotely sensed actual ET and ETgreen. While irrigated areas can be provided as input to the model to adjust a consumed fraction and estimate water supply, the model can also be run without this information and produce maps of ETblue. The irrigated area is then derived for any region of interest by applying an unsupervised Dynamic Time Warping (DTW) approach to the time series of monthly ETblue over a hydrological year. As remotely sensed ET is the main driver in the computation of the irrigated areas, the model is run here at 3 different resolutions: at 100 m in the Litani River Basin in Lebanon with WaPOR Level 2 ET data, at 250 m in the Urmia Lake Basin in Iran with WaPOR Level 1 ET data, and at 1 km in the Krishna River Basin in India with SSEBop ET data. The advantage of this method over traditional irrigation mapping using multispectral data is that it does not require local training samples and it takes into consideration the varying levels of water depletion between irrigated and non-irrigated areas, which may not appear when considering only spectral signatures.
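The unsupervised step can be sketched with the textbook O(nm) DTW recursion: distances between monthly ETblue time series separate pixels with an irrigation-like dry-season signal from rainfed pixels. This simple implementation and the example series are illustrative, not the project's code.

```python
# Hedged sketch: DTW distance between monthly ETblue series.
import numpy as np

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# An irrigated pixel's ETblue peaks in the dry season; a rainfed pixel stays near zero.
irrigated = np.array([0, 0, 5, 30, 60, 80, 70, 40, 10, 0, 0, 0], float)
rainfed = np.array([0, 0, 2, 4, 3, 2, 1, 1, 0, 0, 0, 0], float)
print(dtw(irrigated, rainfed))   # large distance -> different water-use regimes
```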
Our results are validated using high-resolution irrigated area maps derived from Sentinel-1 data. These maps were produced by applying supervised DTW to a time series of backscatter signals, comparing the pixel-wise temporal signals to a set of known temporal patterns obtained from irrigated pixels (training samples) in the region. In addition, the outputs are compared to available maps, including the Global Irrigated Area Maps (GIAM) released by the International Water Management Institute (IWMI), and to national statistical estimates and maps where available.
While irrigation of agricultural cropland, around 1,560 million hectares worldwide consuming ~70% of total freshwater, is increasingly subject to optimized irrigation management, it is largely overlooked that pasture and grass areas cover almost twice that area worldwide, around 3,200 million hectares. Although only small portions of these areas are irrigated relative to arable land, some of them, especially in urban settings, consume large amounts of fresh water, and their water consumption has so far been largely ignored.
This is particularly glaring on the extensive turf areas of commercial airports, where sufficient irrigation plays a vital role. In addition to dry spells induced by climate change, these areas are exposed to high thermal stress from hot engine exhaust. If the surfaces become too dry, whirled-up dust poses a high erosion risk and can cause engine damage to aircraft. Wet areas, on the other hand, attract birds, and bird strikes pose considerable safety risks to air traffic.
It is therefore necessary to strike a balance between a sufficient, but not excessive, water supply in irrigation management and to develop suitable methods for doing so. This typical optimization problem of increasing efficiency also targets resource conservation, both in the consumption of the drinking water used and in minimizing the replacement of plots of an expensive special turf, developed for airports, that are destroyed by drought.
In the ESA-funded TIMM (Turf Irrigation Moisture Monitoring) project, Spatial Business Integration, Germany, has developed an irrigation map that is currently being tested at one of Europe's largest commercial airports, Frankfurt Airport. Soil moisture maps are derived from SAR data, calibrated with permanently installed terrestrial soil moisture instruments, and used together with short-term weather forecast data to create an irrigation map valid until the next satellite overpass. Beyond the roll-out to other commercial airports, the methods developed here can be transferred to urban green areas such as parks or golf courses.
Irrigated agriculture accounts for more than 80% of Saudi Arabia's water demand, consuming more than 20 billion cubic meters of non-renewable groundwater resources each year. The depletion of aquifers threatens water security in many other regions of the world, yet abstraction is often not managed or even monitored.
Earth observation through satellite platforms coupled with models enables mapping of irrigated areas and their associated water use at large scales. For example, object-based image analysis of maximum annual NDVI allows delineating thousands of center-pivot fields (and other orchard or plantation fields) on a semi-automated basis. At the same time, combining this information with weather data and crop water use models enables the estimation of groundwater use from individual fields to regional scales.
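The object-based idea can be illustrated (not the operational workflow) by thresholding the annual maximum NDVI and keeping compact, near-circular connected components as candidate center-pivot fields; the thresholds below are assumptions.

```python
# Hedged sketch: near-circular vegetated objects as center-pivot candidates.
import numpy as np
from skimage import measure

def candidate_pivots(ndvi_max, veg_thresh=0.35, min_area_px=200, min_circ=0.7):
    mask = ndvi_max > veg_thresh
    labels = measure.label(mask)
    keep = np.zeros_like(mask)
    for r in measure.regionprops(labels):
        # circularity = 4*pi*area / perimeter^2 (1.0 for a perfect circle)
        circ = 4 * np.pi * r.area / (r.perimeter ** 2 + 1e-6)
        if r.area >= min_area_px and circ >= min_circ:
            keep[labels == r.label] = True
    return keep
```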
While these efforts have demonstrated application potential at regional to national scales, adapting existing frameworks into an operational and fully automated product is a substantial technical challenge. It requires acquiring and processing terabytes of satellite imagery and numerical weather prediction data, and managing all intermediate and final outputs as well as storage and computing resources. Water management agencies often lack the technical capability and/or expertise to operate such research-level processing frameworks.
Fortunately, interest from the private sector in planetary-scale geospatial data analysis has resulted in the creation of cloud-based platforms dedicated to Earth observation research. One such platform is the Google Earth Engine (GEE), which hosts petabytes of geospatial data, including the collections of images from the Landsat (USGS/NASA) and Sentinel (ESA/Copernicus) missions. More notably, GEE also offers a dedicated massive computing infrastructure to explore and analyze these datasets directly with free access to both the data and computing resources for academic research.
To take full advantage of these resources, it is important to understand the available processing paradigms and their best fit for different mapping applications. For example, some machine learning methods (including classification and clustering) can be processed directly on GEE, while others such as deep-learning still require some “on-premises” resources and therefore data retrieval as well.
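A minimal Earth Engine Python API sketch of the server-side paradigm described here: filter a Sentinel-2 collection, map an NDVI computation over it, and reduce to an annual maximum, all without downloading imagery. The region and dates are arbitrary examples.

```python
# Hedged sketch of server-side processing with the GEE Python API.
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([44.0, 24.0, 46.0, 26.0])   # example AOI

def add_ndvi(img):
    return img.addBands(img.normalizedDifference(["B8", "B4"]).rename("NDVI"))

s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
      .filterBounds(region)
      .filterDate("2021-01-01", "2021-12-31")
      .map(add_ndvi))

ndvi_max = s2.select("NDVI").max()   # annual maximum NDVI, computed in the cloud
```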
In this work, we explored the potential of adapting two important components of an irrigated water use monitoring framework for direct use in GEE. First, we explored several machine learning approaches for automated mapping of center-pivot fields at national scale. Second, we adapted two crop water-use models (PT-JPL and TSEB) to GEE, with direct use of Landsat-8 and Sentinel-2 imagery and meteorological data from the European Centre for Medium-Range Weather Forecasts (ECMWF). Importantly, model development was adapted so that the same software can be used both with on-premises data and resources and directly within GEE. Preliminary tests at national scale, fully automated and run directly on GEE, show the potential of mapping tens of thousands of center-pivot fields, including previously unmapped remote and/or isolated irrigated fields. Retrieving water use estimates for individual fields is possible entirely within GEE, without the need to download a single satellite image.
These components will form part of a cloud-based operational product with the goal of providing water management agencies with a tool to map irrigated agriculture and estimate its water use.
Here, maps of primary production in the coastal waters of the East Sea were generated using sea surface chlorophyll-a concentration (CHL), photosynthetically available radiation (PAR) and euphotic depth derived from GOCI, together with sea surface temperature (SST) from foreign satellites, as input parameters, and a sensitivity analysis was carried out for each parameter. On 25 July 2013, when extensive cold waters appeared, and on 13 August 2013, when a large harmful algal bloom was present in the study area, high productivities were found, with averages of 1,012 and 1,945 mg C m-2 d-1, respectively. On 25 August 2013, when the cold waters and the red tide had retreated, the average was 778 mg C m-2 d-1, similar to the results of previous analyses. The sensitivity analysis showed that PAR did not significantly affect the primary production results, whereas euphotic depth and CHL showed above-average sensitivity. In particular, SST had a large influence on the results, implying that an error in SST could lead to a large error in primary production. This study showed that GOCI data are suitable for primary production studies; since GOCI can acquire images eight times a day, it can be more accurate than foreign polar-orbiting satellites, improving the accuracy of the input parameters and consequently enabling highly accurate estimates of primary production.
The increase of atmospheric CO₂ levels by about 10% since the beginning of the 21st century, and its impact on the Earth's climate and the biosphere, represent a major concern. A compilation of in-situ data over the global coastal ocean indicates that the world's coastal shelves absorb about 17% of the oceanic CO₂ influx, although these areas represent only 7% of the oceanic surface area (Borges, 2005; Cai, 2011; Laruelle et al., 2010). However, large uncertainties in carbon fluxes over the coastal margins exist due to the undersampling of the coastal ocean in both space and time. Satellite remote sensing, in conjunction with in-situ data, allows the collection of various physical and biological parameters at regional and global scales, at temporal resolutions not accessible from other in-situ observation methods.
The main objective of the CO2COAST project (ANR funding) is to estimate the surface-ocean CO₂ partial pressure, pCO₂w, the CO₂ flux, and the associated uncertainties from satellite remote sensing over global coastal waters at high spatial resolution (1 km × 1 km). Based on these estimates, the respective contributions of estuaries vs. continental shelves to the CO₂ fluxes will be evaluated over the global coastal waters. The global coastal database used for this purpose consists of 11.36 × 10⁶ in situ data points of pCO₂w (SOCAT database), for which a satellite match-up database of 580 × 10³ points (from 1997 to 2020) has been built (first at 4 km, and later at 1 km). This match-up database gathers in-situ pCO₂w and satellite measurements of remote sensing reflectance, Rrs, chlorophyll concentration, Chl, absorption by colored dissolved organic matter, acdom, sea surface salinity, SSS, and temperature, SST. Such a multidimensional dataset requires “intelligent investigation” using machine learning (ML) methods to exploit complex spatial and temporal structures, find patterns, and efficiently fuse heterogeneous sources of information.
In a preliminary study, an ML algorithm was applied to test two approaches, global and class-based regression, to estimate pCO₂w over global coastal waters. The results favoured the class-based approach, highlighting that the relationships between physical and biogeochemical factors explaining the variability of pCO₂w differ from one class to another, indicating regional dependencies. In that study, the input parameters were Rrs, SST, SSS, and the coordinates (Lat, Lon). In a second stage, however, it is essential to test other configurations involving different input parameters (time, Chl, acdom, etc.) to evaluate their contribution to accurately estimating pCO₂w. For this, another ML algorithm is applied: 2S-SOM (Yala et al., 2019; El Hourany et al., 2021), a variant of the Self-Organizing Maps algorithm (SOM, Kohonen, 1993). Through unsupervised learning and neural-network classification, the dataset is finely clustered while the weights of each parameter in each cluster are evaluated; these weights are assigned automatically by minimizing the intra-class variance of each cluster. The application of such an ML algorithm will allow a better understanding of the drivers of pCO₂w variability and its estimation from biotic and abiotic parameters such as Rrs, SST, SSS, acdom and Chl.
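A hedged sketch of SOM-based clustering of the match-up variables using the MiniSom package; note that the 2S-SOM variant additionally learns per-cluster feature weights, which MiniSom does not implement. The data, scaling and map size are assumptions.

```python
# Hedged sketch: cluster standardized match-up samples on a self-organizing map.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(2)
# columns: Rrs feature, SST, SSS, acdom, Chl (synthetic, standardized)
X = rng.normal(size=(1000, 5))

som = MiniSom(8, 8, X.shape[1], sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(X, 5000)

# assign each sample to its best-matching unit (cluster on the 8x8 map)
bmus = np.array([som.winner(x) for x in X])
```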
The methodology under development is instrumental in building a coherent and robust satellite database of coastal pCO₂w at high spatio-temporal resolution (daily, 1-4 km, 1997-2021).
The Particulate Organic Carbon (POC) pool plays a fundamental role in exporting carbon from the surface to the deep ocean through a series of biogeochemical processes known as the ocean biological carbon pump. However, to capture the dynamics of the POC pool and its role in the biologically-mediated part of the ocean carbon cycle, it is essential to capture consistent long-term time series data with adequate spatial resolution suitable for climate studies. The Ocean Colour Climate Change Initiative (OC-CCI) version 5 products (chlorophyll-a concentration, remote-sensing reflectances, and inherent optical properties) provide over two decades of consistent, error characterized, multi-sensor merged satellite data (1997-2020). Here we evaluate eight candidate POC algorithms applied to the OC-CCI data. The tested algorithms included those that had shown relatively good performance in earlier inter-comparison studies, as well as new algorithms that have emerged since then. All candidate algorithms were carefully validated using statistical metrics suggested by the original developers and the largest collection of in situ and satellite match-up data that we have been able to assemble. The algorithm that performed best was then tuned using our compiled match-up dataset to estimate the global POC pool in the mixed layer with high confidence from satellite observations. The relationship between POC and phytoplankton carbon and chlorophyll-a was analysed to further assess the performance of the POC algorithms. The new satellite-derived POC products were then used for time series analysis to investigate trends in the global POC pool.
Phytoplankton in the sunlit layer of the ocean form the base of the marine food web, fueling fisheries, and also regulate key biogeochemical processes such as the export of carbon to the deep ocean. Phytoplankton community structure varies across ocean biomes, and different phytoplankton groups drive the marine ecosystem and biogeochemical processes differently. Because of this, variations in phytoplankton composition influence the entire ocean environment, specifically ocean energy transfer, deep-ocean carbon export and water quality (and thereby human health, e.g. when certain species cause harmful algal blooms). As one of the algorithms deriving phytoplankton composition from spaceborne data within the framework of the EU Copernicus Marine Service (CMEMS), the OLCI-PFT algorithm was developed using multi-spectral satellite data collocated with an extensive in-situ PFT data set based on HPLC pigments, together with sea surface temperature data (Xi et al. 2020, 2021; https://marine.copernicus.eu/). Using multi-sensor merged products and Sentinel-3 OLCI data, the algorithm provides global chlorophyll-a data with per-pixel uncertainty for diatoms, haptophytes, dinoflagellates, chlorophytes and prokaryotic phytoplankton, spanning the period from 2002 until today. Due to the different lifespans and radiometric characteristics of the ocean color sensors, it is crucial to evaluate the CMEMS PFT products to provide quality-assured data for consistent long-term monitoring of phytoplankton community structure. In this study, using in-situ phytoplankton data (HPLC pigments) and hyperspectral optical data collected during expeditions in the trans-Atlantic region, we aim to 1) validate the CMEMS PFT products and investigate the continuity of the PFT data derived from different satellites, and 2) deliver two decades of consistent PFT products for time series analysis with PFT uncertainty accounted for. For the latter, we expect to determine the variation of surface phytoplankton community structure across different biogeochemical provinces.
References
Xi, H., Losa, S.N., Mangin, A., Garnesson, P., Bretagnon, M., Demaria, J., Soppa, M.A., d’Andon, O.H.F., Bracher, A., 2021. Global chlorophyll a concentrations of phytoplankton functional types with detailed uncertainty assessment using multi-sensor ocean color and sea surface temperature satellite products. Journal of Geophysical Research-Oceans, doi: 10.1029/2020JC017127
Xi, H., Losa, S.N., Mangin, A., Soppa, M.A., Garnesson, P., Demaria, J., Liu, Y., d’Andon, O.H.F., Bracher, A., 2020. A global retrieval algorithm of phytoplankton functional types: Towards the applications to CMEMS GlobColour merged products and OLCI data, Remote Sensing of Environment, doi:10.1016/j.rse.2020.111704
Primary production by marine phytoplankton is one of the largest fluxes of carbon on our planet. In the past few decades, considerable progress has been made in estimating global primary production at high spatial and temporal scales by combining in situ measurements of photosynthesis-irradiance (P-I) parameters with remote-sensing observations of phytoplankton biomass. One of the major challenges in this approach lies in the assignment of appropriate values for the model parameters that define the photosynthetic response of phytoplankton to the light field. In the present study, a global database of in situ measurements of P-I parameters and a 23-year record of climate-quality satellite observations were used to assess global primary production and its variability with seasons and locations as well as between years. In addition, the sensitivity of the computed primary production to potential changes in the photosynthetic response of phytoplankton cells under changing environmental conditions was investigated. Global annual primary production varied from 48.7 to 52.5 Gt C/yr over the period 1998-2020. Inter-annual changes in global primary production did not follow a linear trend, and regional differences in the magnitude and direction of change were observed. Trends in primary production followed directly from changes in chlorophyll-a and were related to changes in the physico-chemical conditions of the water column due to inter-annual and multi-decadal climate oscillations. Moreover, the sensitivity analysis, in which P-I parameters were adjusted by ±1 standard deviation, showed the importance of accurately assigning photosynthetic parameters in global and regional calculations of primary production. The light-saturation parameters of the P-I curve showed strong relationships with environmental variables such as temperature and had a practically one-to-one relationship with the magnitude of change in primary production. In the future, such empirical relationships could potentially be used for a more dynamic assignment of photosynthetic rates in the estimation of global primary production. Relationships between the initial slope of the P-I curve and environmental co-variables were more elusive.
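The P-I formalism referenced above can be made concrete with the common exponential saturation form P(I) = Pm(1 - exp(-alpha I / Pm)), i.e. a Platt-type curve without photoinhibition; the parameter values below are illustrative, not values from the database.

```python
# Hedged sketch of a photosynthesis-irradiance (P-I) curve.
import numpy as np

def pi_curve(irradiance, pm, alpha):
    """Photosynthetic rate vs irradiance.
    pm    : light-saturated assimilation number [mg C (mg Chl)^-1 h^-1]
    alpha : initial slope of the P-I curve
    """
    return pm * (1.0 - np.exp(-alpha * irradiance / pm))

I = np.linspace(0, 1500, 6)            # umol photons m^-2 s^-1
print(pi_curve(I, pm=4.0, alpha=0.05))
```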
Phytoplankton are responsible for releasing half of the world's oxygen and for removing large amounts of carbon dioxide from surface waters. Despite many studies on the topic conducted in the past decade, we are still far from a good understanding of the ongoing rapid changes in the Arctic Ocean and how they will affect phytoplankton and the whole ecosystem, mainly because the scientific community cannot keep up with the pace of these changes. An example is the spread in Net Primary Production (NPP) modeling estimates, which differ by a factor of two globally and by a factor of fifty when only the Arctic is considered. There is a difference between Net Primary Production (NPP) and Net Community Production (NCP): NPP reflects the growth rates of phytoplankton, while NCP is NPP minus heterotrophic respiration, thus accounting for the heterotrophs as well. We are studying the relationship between NCP, NPP and various environmental factors. Our hypotheses are: 1) we can obtain more accurate NCP and NPP estimates using regionally developed algorithms based on optical in-situ data, and 2) the variability of phytoplankton is closely linked to water stratification in Atlantic Waters and to dissolved organic matter influenced by river runoff in the East Greenland Current. We use the in-situ data to validate the Greenland Sea parameterization of a global satellite primary production model, modernise the input empirical parametrisations, and analyse the factors influencing primary production patterns in the region. The in-situ data come from the Institute of Oceanology of the Polish Academy of Sciences expeditions to the Greenland Sea every year since 2013, together with the Norwegian Polar Institute Fram Strait expedition in 2021, representing a large bio-optical dataset, parts of which are as yet unpublished. In addition, we use satellite GlobColour chlorophyll, photosynthetically active radiation and AVHRR sea surface temperature. The resulting regional NCP and NPP models can further be used for system modelling of the dynamics of the Arctic Ocean ecosystem and as one component of ecosystem-based management of the region.
The dramatically changing climate in the Arctic is altering the hydrology and biogeochemistry of rivers. Conversely, river water can be a powerful indicator of the impact of climate change, since river biogeochemistry and discharge integrate upstream terrestrial and aquatic environmental processes over a defined watershed. In Arctic catchments, permafrost is warming and thawing, releasing large amounts of organic carbon that was previously frozen and thus inactive in the carbon cycle. Long-term global Climate Change Initiative (CCI) remote sensing datasets are a powerful tool to observe changes across terrestrial, coastal and marine environments at high frequency and over long time series.
In this study, we show the potential of CCI Ocean Color data to retrieve the optical properties of the dissolved organic matter (CDOM) and relate them to riverine organic carbon fluxes on a pan-Arctic scale for the last 23 years. Further, we relate riverine discharge and terrestrial CCI and ERA5 Essential Climate Variables (ECVs) to environmental processes that drive the seasonal and interannual variability as well as long-term trends.
Arctic river water is optically dominated by coloured dissolved organic matter, which allows rather simple band-ratio retrievals to perform better than more complex retrieval algorithms (e.g. semi-analytical and neural-network approaches). Here, we use the ratio between 665 nm and 512 nm and calibrate it with an extensive in situ dataset.
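A hedged sketch of the calibration step: a power-law fit between the Rrs(665)/Rrs(512) ratio and in situ DOC, then application to satellite ratios. The coefficients below come from synthetic data and are not the study's calibration.

```python
# Hedged sketch: power-law band-ratio retrieval of DOC, fit in log space.
import numpy as np

def fit_power_law(ratio, doc):
    # log-linearize doc = a * ratio**b
    b, log_a = np.polyfit(np.log(ratio), np.log(doc), 1)
    return np.exp(log_a), b

rng = np.random.default_rng(3)
ratio = rng.uniform(0.2, 2.0, 100)                           # Rrs(665)/Rrs(512)
doc = 6.0 * ratio ** 0.8 * np.exp(rng.normal(0, 0.1, 100))   # synthetic DOC [mg/L]

a, b = fit_power_law(ratio, doc)
doc_from_satellite = a * 1.3 ** b                            # apply to a satellite ratio
print(a, b, doc_from_satellite)
```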
The results show that extracting the CCI Ocean Color band ratio in the fluvial-marine transition zones, in combination with known relationships between the optical properties of organic matter and its concentration, provides excellent estimates of dissolved organic carbon (DOC) when compared to in situ data. The seasonal and interannual variability in DOC export by Arctic rivers is dominantly driven by large-scale precipitation anomalies within the river catchments. CCI Permafrost ECVs such as permafrost extent, active layer thickness and ground temperature show alarming thaw trends for 1997 to 2018, whose influence on the long-term export of OC from land to the Arctic Ocean is so far unexplored.
The continuing degradation of permafrost in the catchments, in conjunction with projected increases in river discharge, will have an impact on the global carbon cycle, aquatic ecosystems and the global climate that is currently difficult to assess due to the lack of time series and process studies. Long-term global ECVs constitute a cornerstone of such assessments and will help quantify the impacts of thawing terrestrial permafrost on aquatic ecosystems and the Arctic Ocean in particular.
Culture studies have repeatedly demonstrated that the parameters describing the photosynthetic response of marine phytoplankton can vary widely with growth conditions (light, nutrients and temperature) and between species. Yet several remote sensing estimates of marine primary production either assign a single set of parameters for a given region/season or use global empirical relationships (e.g. the maximum photosynthetic rate as a function of sea-surface temperature). Our inability to develop a more mechanistic approach to parameter assignment is due to an uneven distribution of experimental observations in space and time and a lack of information on phytoplankton community structure and/or environmental conditions at the time the experiments were made.
One of the aims of the ESA BICEP project is to expand existing global datasets of the photosynthesis-irradiance (PE) parameters. This data mining effort has dramatically improved both the spatial and the temporal coverage of these parameters, which are critical for converting maps of surface chlorophyll into estimates of water-column primary production. We have used the >10,000 experimental measurements and metadata assembled as part of the BICEP project to explore how changes in environmental forcing and the taxonomic structure of phytoplankton communities are related to variability in the PE parameters. Here we focus on ‘regions of interest’ that cover the four ocean biomes defined by Longhurst. These ocean biomes (Coastal, Polar, Trades and Westerlies) represent the primary unit of biogeographic division of the global ocean and provide a useful way of examining differences in variability caused by large-scale changes in environmental forcing. Our dataset reveals biome-specific differences in the relationship between taxonomic composition and phytoplankton photophysiology. By combining flow cytometric counts and HPLC pigment data in the Trades biome, we show how variation in photoacclimatory status (intracellular pigment concentration and relative concentration of photoprotective pigments) is strongly related to photosynthetic performance. The patterns of variability observed in this study can be used to improve the assignment of PE parameters for satellite-based studies of ocean primary production.
The relevance of the ocean in the global uptake of carbon dioxide is well established; despite its importance, the carbon cycle is not fully understood because of its complexity, and it is not clear how climate change will affect the cycle itself and its efficiency in absorbing atmospheric carbon.
Few studies have been carried out in the Mediterranean Sea, despite its importance as a “laboratory basin” and climate change indicator.
The aim of this study is to present an observational platform in the central Mediterranean, the Lampedusa Oceanographic Observatory, which has started to provide a large dataset of parameters relevant to the investigation of carbon exchange between atmosphere and ocean. The available dataset will be used to verify and constrain satellite estimates of the carbon dioxide partial pressure (pCO2) and of CO2 fluxes in the central Mediterranean.
The Lampedusa Oceanographic Observatory (OO) is located in the open sea in the southern sector of the central Mediterranean. The buoy, set up in 2015, is moored at 35.49°N, 12.47°E, about 3.3 mi south-west of the island of Lampedusa, in an oligotrophic area of the Mediterranean.
The closest continental region is Africa, with the Tunisian coast more than 100 km west of the buoy. The buoy is equipped with many above water and immersed sensors for characterization of the radiation regime, meteorology, and oceanic properties.
Starting from October 2021, the following measurements are operational at 5 m depth: CO2 partial pressure; pH, chlorophyll, CDOM and backscatter; temperature, salinity and dissolved oxygen; and downwelling photosynthetic radiation. These measurements complement additional observations of multi-band downwelling and upwelling radiation, temperature, and salinity at various depths down to 43 m. Additional measurements (meteorological parameters and downwelling solar, infrared, and photosynthetic radiation) are carried out above water (see, e.g., di Sarra et al., 2019; Marullo et al., 2021).
The Oceanographic Observatory complements the Atmospheric Observatory (AO, 35.52°N, 12.63°E; Ciardini et al., 2016), set up on the island of Lampedusa in 1997 and dedicated to climate studies. The distance between the two observatories is about 15 km. A wide set of additional climate-related parameters (including aerosol optical depth and chemical composition, cloud properties, deposition, meteorology, greenhouse gases, and radiation) is monitored at the AO.
The first step of the analysis is dedicated to the comparison between in situ and available satellite datasets (mainly temperature, salinity, photosynthetic radiation, and chlorophyll). Data from other sites belonging to the Integrated Carbon Observation System (ICOS) research infrastructure in the Mediterranean will also be used, with the aim of characterizing spatial and temporal variations in the in situ-satellite correlations.
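A minimal sketch of the kind of match-up analysis this first step involves, assuming hypothetical buoy and satellite SST series and an arbitrary ±3 h pairing window:

```python
import numpy as np
import pandas as pd

# Toy match-up: pair each in situ sample with the satellite value closest in
# time (here within +/- 3 h, an arbitrary window) and compute summary stats.
insitu = pd.DataFrame({"time": pd.to_datetime(["2021-10-05 10:00", "2021-10-06 09:30"]),
                       "sst": [22.4, 22.1]})
sat = pd.DataFrame({"time": pd.to_datetime(["2021-10-05 09:45", "2021-10-06 10:15"]),
                    "sst": [22.6, 21.9]})

m = pd.merge_asof(insitu.sort_values("time"), sat.sort_values("time"),
                  on="time", tolerance=pd.Timedelta("3h"),
                  direction="nearest", suffixes=("_situ", "_sat")).dropna()
bias = (m.sst_sat - m.sst_situ).mean()
rmse = np.sqrt(((m.sst_sat - m.sst_situ) ** 2).mean())
print(f"n={len(m)}  bias={bias:.2f}  rmse={rmse:.2f}")
```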
The active tectonics of the Tell Atlas of Algeria, with thrust earthquakes (Mw >= 6), produce significant surface deformation that results from the oblique convergence of the African and Eurasian plates. The aim of our study is to highlight the deformation visible at the surface, in terms of coastal uplift and shortening, using the InSAR technique and time-series analysis.
We study more than 70 km of the active Zemmouri zone affected by the earthquake of 21 May 2003 (Mw 6.8), whose mainshock epicenter was located at latitude 36.83°N and longitude 3.65°E at 10 km depth; the coseismic uplift reached 0.75 m in some places [Bagdi et al., 2021]. The coseismic uplift of the 2003 Zemmouri earthquake has been well described in previous studies, averaging 0.55 m along the coastal zone according to combined geodetic [Meghraoui et al., 2004] and InSAR [Belabbes et al., 2009] measurements, while the postseismic deformation was documented from 2003 to 2010 with Envisat images [Çetin et al., 2015], reaching around 3.5 mm/yr of LOS displacement using the SBAS technique.
Our postseismic study focuses on measuring surface displacement with the PSInSAR technique using Sentinel satellites (A/B) from 2016 to 2021. Both horizontal and vertical (subsidence) displacements associated with the postseismic deformation are presented, showing surface velocities ranging from -5 to +5 mm/yr that can be correlated with the postseismic tectonic activity affecting the Tell Atlas.
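For illustration, combining ascending and descending LOS velocities into vertical and east-west components reduces to a small linear system. The geometry below (equal incidence angles, north component neglected, opposite-sign east projection for the two passes) is a textbook simplification, not the actual processing used in this study.

```python
import numpy as np

# Decompose ascending + descending LOS velocities (mm/yr) into vertical and
# east-west components at one pixel. Incidence angles are illustrative.
inc_asc, inc_dsc = np.deg2rad(39.0), np.deg2rad(39.0)

# Design matrix rows: [up, east] projection for each geometry. For a
# right-looking SAR, east projects with opposite sign on asc vs dsc passes.
G = np.array([[np.cos(inc_asc), -np.sin(inc_asc)],
              [np.cos(inc_dsc),  np.sin(inc_dsc)]])

v_los = np.array([2.1, -1.4])  # asc, dsc LOS velocities (mm/yr)
v_up, v_east = np.linalg.solve(G, v_los)
print(f"v_up = {v_up:.2f} mm/yr, v_east = {v_east:.2f} mm/yr")
```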
Aquatic plastic litter is a global problem with several dimensions. The ESA Eyes on Plastic activity therefore provides a service solution that combines multiple technical components into a joint mapping and monitoring solution for plastic in the aquatic environment. By flying high with satellites and diving deep with Remotely Operated underwater Vehicles (ROVs), plastic litter can be monitored from all angles and at all scales. Based on a thorough requirements analysis with key players in the field, the best methods for the satellite- and camera-based operational analytics were set up and their feasibility tested. Users consistently mention several key challenges, such as effective and continuous mapping and monitoring of plastic hotspots and the need for a globally applicable solution that provides detailed information. The focus is on rivers, as they play a major role in the input of marine litter. Quantitative measures are required that can be compared over time and across sensors.
The proposed service is a holistic mapping approach that includes different technologies, making it possible to respond to these challenges and map plastic in different aquatic environments, at varying levels of detail and frequency, around the globe. This includes the creation of baseline assessments at discrete times, the identification of plastic accumulation hotspots, and the continuous monitoring of places of interest, such as river estuaries, using Earth observation methods.
We make use of satellite Earth observation, namely Sentinel-2 data, to monitor floating aquatic plastic litter, which is automatically classified according to its spectral characteristics. The supervised Classification and Regression Trees (CART) classifier was trained with freely available ground-truth data. Its performance was then assessed and the outcomes were compared with areas where debris is frequently present. This is the case for Guanabara Bay near Rio de Janeiro in Brazil, where eco-barriers are placed in river estuaries. Furthermore, the Marine Litter Project 2021 placed plastic and wooden targets in the water in the Aegean Sea that served as a reference. The classifier predicted plastic litter with a 73% probability and works particularly well in areas with low turbidity and clear-water conditions. Some problems arise from white water (foam), which can be falsely detected as plastic litter. Further studies in Guanabara Bay will provide more ground-truth data and thus improved classification results, among others through images taken from boats at the same time as the Sentinel-2 overpass.

For the camera-based analytics, AI has proven to work well. Different camera systems can be used depending on the application and local setting. In Indonesia, a fixed CCTV camera mounted on a pole continuously monitors one of the main tributaries to Jakarta Bay. In other cases, such as in Brazil or at local lakes in Bavaria, ordinary mobile phone or sports camera data is analysed. We use a state-of-the-art convolutional neural network and created our own dataset of 1,000 images of floating debris taken at the Pilsensee, Bavaria. As the plastic litter problem is not limited to the water surface, the Eyes on Plastic approach also includes underwater camera data analytics. Similarly to our above-water camera systems, we deploy an embedded optical object detection system on board an ROV. This way we can survey the water column and monitor plastic litter concentration at the decimetre scale. The low power consumption opens up the prospect of replacing the ROV with autonomous underwater vehicles (AUVs).

The analytics results are available via an automated online web application in a standardized reporting output to support local and international monitoring obligations. This allows users, third parties, and even the broader public to access, visualize, and analyse all the data measured by the different techniques in real time. Eyes on Plastic combines space assets, on-site sensors and platforms, and novel IT and algorithms to make a difference in understanding and quantifying aquatic plastic litter.
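As a sketch of the satellite branch, the snippet below trains a CART-style decision tree on synthetic pixel spectra with scikit-learn (whose DecisionTreeClassifier implements an optimised CART). The random features stand in for the Sentinel-2 bands and ground-truth labels described above.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Toy setup: rows are pixel spectra (here 6 bands), labels are
# 0 = water, 1 = floating debris. Random data only shows the mechanics.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.02, 0.01, (200, 6)),   # water-like spectra
               rng.normal(0.08, 0.03, (200, 6))])  # debris-like spectra
y = np.repeat([0, 1], 200)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
cart = DecisionTreeClassifier(max_depth=5).fit(Xtr, ytr)  # CART algorithm
print(classification_report(yte, cart.predict(Xte)))
```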
Microplastic pollution is a widely acknowledged threat to ocean ecosystems. However, the global extent and dynamics of this problem are not well monitored or known. Net trawling methods are invaluable for in situ microplastic concentration data collection, but limitations of cost, spatial coverage, and time resolution leave a gap in global microplastic monitoring. Spaceborne imaging of microplastics could address the spatial and temporal sampling limitations, but reliable microplastic detection from space is problematic. A new approach to the detection and imaging of microplastics from space is presented here. Spaceborne radar measurements of ocean surface roughness are used to infer the reduction in responsiveness to wind-driven roughening caused by the presence of surfactant tracers of the microplastics. The physical relationship between the presence of surfactants and the suppression of wind-driven ocean surface roughening has been investigated via a series of controlled wave tank experiments. Varying concentrations of surfactants are introduced onto the water surface, near-surface winds are generated in a controlled manner with variable speeds, and the surface roughness is measured directly using precision ultrasonic surface height detectors. The changes in surface roughness statistics are then used to derive corresponding variations in the radar scattering cross section that would be detected by a spaceborne radar. The results are consistent with the empirical relationship derived from the satellite measurements.
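One common way to link slope statistics to radar backscatter is the quasi-specular geometric-optics model sketched below; the Fresnel term and mean-square-slope values are illustrative, and the study's actual wave-tank-derived relationship may differ.

```python
import numpy as np

# Quasi-specular (geometric-optics) model of the normalized radar cross
# section: sigma0 = |R|^2 / mss * sec^4(theta) * exp(-tan^2(theta) / mss).
# Surfactants damp short waves, lowering the mean-square slope (mss) and
# raising sigma0 near nadir, i.e. the "roughness suppression" signal above.
# |R|^2 = 0.6 is a rough Fresnel value; all numbers are illustrative.
def sigma0_db(theta_deg, mss, R2=0.6):
    t = np.deg2rad(theta_deg)
    s0 = R2 / mss / np.cos(t) ** 4 * np.exp(-np.tan(t) ** 2 / mss)
    return 10 * np.log10(s0)

for mss in (0.025, 0.020, 0.015):  # decreasing roughness (more surfactant)
    print(f"mss={mss:.3f}: sigma0(nadir) = {sigma0_db(0.0, mss):.1f} dB")
```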
Using the satellite observations of roughness suppression on a global scale averaged over a full year, the reduction in roughening is found to correlate strongly with the number density of microplastics near the surface as predicted by several well-regarded ocean circulation models. On a global scale over shorter (monthly) time scales, time-lapse images derived from the satellite radar observations reveal seasonal changes in the microplastic mass density within the major ocean basin gyres, which appear to be related to seasonal ocean circulation patterns. Other dynamic variations in the concentration are also evident and appear to be related to monsoonal precipitation and ocean circulation patterns. On smaller spatial and temporal scales, weekly time-lapse images near the mouths of major rivers reveal episodic bursts of microplastic outflow from the river into the sea.
An overview and the current status of our work using spaceborne radar to detect and image ocean microplastic dynamics, and of our attendant wave tank experiments, will be presented.
Accumulations of garbage (e.g. macroplastic) floating on the water surface in coastal and inland waters are an acute problem in many parts of the world. However, many different natural phenomena can also produce surface accumulations. These include cyanobacterial scum, pollen, foam, seagrass leaves, and fragments of plants and macroalgae (e.g. sargassum). There are also accumulations of material that can be classified as garbage, e.g. timber, plastic, or other floating material. Our ability to map and recognise floating material with remote sensing depends on the spatial and spectral resolution of the sensor used and its revisit time. It was demonstrated nearly two decades ago with Hyperion imagery that 30 m spatial resolution is not sufficient for the detection of surface accumulations (in that case cyanobacterial scum) if most of the pixel is not covered with the material floating on the water surface. There are large areas in cyanobacterial blooms that can be detected with such medium-resolution sensors. In many cases, however, cyanobacterial scum forms narrow filaments shaped by currents and wind, and in that case the spectral signature resembles a subsurface bloom rather than floating material. The same happens with all floating material if the spatial resolution of the sensor used is not finer than the width of the filaments. Hyperspectral sensors with very high spatial resolution on aircraft and drones allow better detection and recognition of floating material. However, the area that can be covered with airborne or drone measurements is very small, the cost per unit area is very high, and the revisit time, if any, does not permit real monitoring of surface accumulations. Therefore, Sentinel-2, with its 10 m spatial resolution and 2-5 day revisit time, is essentially the only sensor that provides frequent and free data for monitoring different surface accumulations.
The extent and duration of blooms, especially potentially harmful blooms of cyanobacteria, are very important information for monitoring and managing coastal and inland environments. Chlorophyll-a (Chl-a) is usually used as a proxy of phytoplankton biomass. However, many of the standard Chl-a products, like the one provided by the Copernicus Marine Environment Monitoring Service, do not provide sufficient accuracy in optically complex waters (like the Baltic Sea) to allow such analysis. Moreover, Chl-a algorithms allow the mapping of biomass in the water column, but not of biomass floating on the water surface. Yet it is important to know where the surface accumulations of cyanobacteria are, as they may be up to several centimetres thick and contain a high amount of biomass. Monitoring the presence or absence of material floating on the water surface is therefore an important task in coastal water monitoring.

Cyanobacterial blooms usually last from the end of June to September and may cover most of the Baltic Sea. However, other materials can also create a “blanket” on the water surface or form narrow filamentous features. During certain periods the water may be covered by pollen of different trees, e.g. Scots pine (Pinus sylvestris), and it is critical to understand whether the material floating on the water surface is cyanobacteria, pollen, or something else, as they may occur at the same time. We aimed to test to what extent we can separate pollen from cyanobacterial accumulations on the water surface using Sentinel-2 atmospherically corrected (C2RCC, C2X, C2X-Complex, Polymer, IDA) and top-of-atmosphere data. Some band-ratio algorithms were also tested. Unfortunately, atmospheric corrections often fail or give insignificant results in the case of strong surface accumulations. The reason is that neural-network-type processors work only in the conditions they are trained for: they work within the trained ranges of Chl-a, coloured dissolved organic matter, and total suspended matter in the water, but cannot cope with material floating on the water surface. Therefore, it is not recommended to use atmospheric corrections that remove (or mask) the floating-material signal, which lies mostly in the NIR part of the spectrum.
We have in situ data from both pollen- and cyanobacteria-dominated waters and show that it is possible to separate harmful accumulations of cyanobacteria from harmless accumulations of pollen based on top-of-atmosphere spectral data using some band-ratio algorithms. Further testing is needed to evaluate whether other types of surface accumulations (foam, seagrass, macroalgae, timber, plastic, etc.) can be recognised on the water surface using Sentinel-2 imagery. At present we do not have in situ data from other types of surface filaments, but we are planning further sampling campaigns and experiments to assess the potential of recognising them.
Monitoring of Large Plastic Accumulation Near Dams Using Sentinel-1 Polarimetric SAR Data
Morgan Simpson1, Armando Marino1, Peter de Maagt2, Erio Gandini2, Peter Hunter1, Evangelos Spyrakos1, Andrew Tyler1
1The University of Stirling, United Kingdom; 2ESA ESTEC, The Netherlands
1. Introduction: Plastics in the river environment are of major concern due to their potential transport into the oceans, their persistence in aquatic environments, and their impacts on human and marine health. Plastic concentrations in riparian environments have also been observed to be higher following major rain events, when plastic can be moved through surface runoff. Dams are known to trap sediments as well as pollutants such as metals and PCBs [1]. Recently, plastic islands accumulating at dams following heavy rainfall have been reported in Balkan countries. Both optical data and Synthetic Aperture Radar have been utilized for monitoring chlorophyll-a, Total Suspended Matter (TSM), landslides, and water volumes in reservoir contexts. This study shows results on the ability to detect and monitor these accumulated plastic islands using dual-polarimetric SAR.
2. Methodology: This study focusses on two river systems in Serbia and Bosnia where we have validation photographs for the presence and extent of plastic near two dams. It has to be said that the patches are mostly composed of plastic; however, we also find a sparse presence of wood and other floating materials. In this study we used dual-polarimetric SLC Sentinel-1 SAR data, provided by the European Space Agency (ESA) through the Copernicus Programme. Optical images from Sentinel-2 were also acquired; however, cloud cover exceeded 90% in all images near the date of plastic build-up, and therefore they could not be used. Inspecting Sentinel-1 images over the two dams for multiple dates, we observed a clear and significant backscatter difference near the dams before and after the date of plastic accumulation. To test the detectability of such patches, we initially performed a data analysis visualizing histograms and extracting summary statistics for pixels belonging to the plastic patch and to clean water. Following this, we exploited a range of single-pol and dual-pol detectors. Specifically, we tested (a) simple thresholds on VV and HV intensities, and change detection using (b) single intensities, (c) the optimisation of power ratio [2], (d) the optimisation of power difference [3-4], and (e) the Hotelling-Lawley trace [5]. We used Receiver Operating Characteristic (ROC) curves to assess the performance of each detector, as sketched below.
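A minimal sketch of such a ROC evaluation, using a simple two-date VV intensity ratio as the detector statistic on synthetic data; the cited polarimetric optimisations would replace this scoring function.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic two-date scene: 5000 pixels, of which 200 become a plastic patch
# (brighter backscatter) in the "after" image.
rng = np.random.default_rng(1)
vv_before = rng.gamma(4, 0.01, 5000)                  # clean-water pixels
vv_after = np.concatenate([rng.gamma(4, 0.01, 4800),  # still clean
                           rng.gamma(4, 0.03, 200)])  # plastic-patch pixels
labels = np.r_[np.zeros(4800), np.ones(200)]

score = vv_after / vv_before                          # change statistic
fpr, tpr, _ = roc_curve(labels, score)
print(f"Pd at Pfa <= 0.1%: {tpr[fpr <= 0.001].max():.2f}")
```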
Finally, a time-series analysis was conducted to analyse the occurrence of plastic accumulations at the locations mentioned above. From this, a heat map of the plastic accumulations was created to highlight the locations near the dam where debris most commonly accumulates.
3. Results: Histograms of pixel intensity values from dates with clean and polluted water showed clear differences, with the expected separability. The ROC curves show that the optimisation of power difference provides the best performance, achieving an 85% detection rate at a 0.1% false-alarm rate. The improvement brought by this polarimetric optimisation is very significant, considering that detectors using only VV achieve on average below 50% detection at 0.1% false alarms. Temporal maps show the areas where plastic commonly accumulated and the size of the patches.
4. Conclusions: This study shows the feasibility of detecting large accumulations of plastic near dams. Additionally, it is evident that the use of a single VV polarization is inadequate for this task and that PolSAR data are needed. It has to be kept in mind that the accumulations also contain smaller amounts of other floating materials such as wood. Heat maps of the areas where plastic accumulates most are useful for planning future interventions. Further studies should be carried out to evaluate the quad-pol behaviour of these patches to understand whether some estimation of density is also possible.
Acknowledgement: This work was supported by the Discovery Element of the European Space Agency’s Basic Activities (ESA Contract No. 4000132548/20/NL/MH/hm). Sentinel-1 data were provided courtesy of ESA.
References: [1] Kondolf, G.M., Gao, Y., Annandale, G.W., Morris, G.L., Jiang, E., Zhang, J. et al. (2014). Sustainable Sediment Management in Reservoirs and Regulated Rivers: Experiences from Five Countries. Earth’s Future, Vol 2 (5), pp. 256 – 280.
[2] Armando Marino & Irena Hajnsek, A Change Detector Based on an Optimization with Polarimetric SAR imagery, IEEE Transactions on Geosciences and Remote Sensing, 52(8), 4781-4798, 2014.
[3] Marino, A. & Alonso-Gonzalez, A. (2018). Optimisations for Different Change Models with Polarimetric SAR. EUSAR 2018, 12th European Conference on Synthetic Aperture Radar, Aachen, Germany.
[4] Emanuele Ferrentino, Armando Marino, Ferdinando Nunziata, Maurizio Migliaccio, A dual polarimetric approach to earthquake damage assessment, International Journal of Remote Sensing, 2019
[5] Akbari, V., Anfinsen, S.N., Doulgeris, A.P., Eltoft, T. (2016). A Change Detector for Polarimetric SAR Data Based on the Relaxed Wishart Distribution. 2015 IEEE International Geoscience and Remote Sensing Symposium.
Ocean monitoring is a vast scientific and commercial subject, linked to essential anthropological aspects. First of all, the ocean is a reservoir of biodiversity that remains fragile and, despite years of study and numerous missions, we still lack a global and precise understanding of it. It is also a reservoir of resources and a place of economic activity that must be protected from direct pollution (degassing, oil leaks) and indirect pollution (algal blooms). The detection of such pollution could help to prevent socioeconomic and health issues.
In this context, needs for ocean monitoring from satellite imaging platforms are emerging. This work aims at proposing a processing chain to perform this monitoring task. Since oceans cover immense areas, searching for litter, debris, or any object of interest can be seen as an anomaly detection task whose aim is to detect outliers over a vast water area. The proposed approach is divided into several stages, starting from basic and fast processing and moving to more complex methods. The underlying idea is to gradually refine the anomaly detection. In an operational context, the first stages could then be performed on board a satellite, so that only suspicious images are sent to the ground.
The first step uses basic radiometric and textural indices to eliminate large areas of water that are easily identified as devoid of any anomalies. At this point in the processing chain, the slightest ambiguity must be preserved and dealt with in the downstream stages.
In the case of optical images, a second step eliminates cloudy areas using a segmentation deep neural network (DNN). Since it is not necessary to have VHR images to identify clouds, this step can be performed at a degraded spatial resolution, allowing a faster processing of the data.
The final step performs the actual anomaly detection on the remaining areas. Since anomalies are by nature scarce, it is very difficult to gather a database of annotated anomalies large enough to train a DNN. For this reason, the last stage relies on an unsupervised deep learning approach. An autoencoder is trained to compress and reconstruct images of water areas without anomalies. Then, to assess the presence of anomalies, an image is compressed and decompressed, and the reconstruction error (the difference between the input and the output of the autoencoder) is computed. Since the autoencoder is trained to perform well only on normal water areas, a high error indicates the presence of an unusual pattern, potentially an anomaly. After this process, a major part of the data is eliminated, and a final step can be performed to sort the potential anomalies. This step could be done using a clustering method or a semi-supervised method such as supervised contrastive learning.
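A minimal PyTorch sketch of this reconstruction-error test, with an illustrative architecture, random stand-in data, and a placeholder threshold:

```python
import torch
import torch.nn as nn

# Train on anomaly-free water chips, then flag chips whose reconstruction
# error exceeds a threshold. Architecture and threshold are illustrative.
class AE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
                                 nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid())
    def forward(self, x):
        return self.dec(self.enc(x))

model, loss_fn = AE(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(64, 1, 64, 64)        # stand-in for normal water chips
for _ in range(5):                        # tiny training loop
    opt.zero_grad()
    loss = loss_fn(model(clean), clean)
    loss.backward()
    opt.step()

chip = torch.rand(1, 1, 64, 64)           # candidate chip at inference time
err = loss_fn(model(chip), chip).item()   # reconstruction error
print("anomaly" if err > 0.05 else "normal")  # threshold is a placeholder
```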
The proposed method is tested on two different modalities: very high resolution optical images from the Pléiades satellite at 70 cm ground resolution and radar images from Sentinel-1 at 10 m resolution. The radar data allow the detection of suspicious ships, floating objects, and some effluent pollution such as oil. In the optical domain, metric or submetric resolutions are desirable in order to detect vessels (for example to fight illegal fishing or degassing) or unidentified floating objects (trees, containers, drifting beacons, algae, icebergs). To assess the quality of the results, a subset of the dataset was analyzed by expert photo-interpreters and the output of the approach is compared to their annotations.
Year after year, rivers transport a significant amount of macro litter and plastic debris towards the sea, which poses a socioeconomic and health risk. However, the sources, pathways, and sinks of macroplastic debris are not fully understood. To date, a commonly used method to gather such knowledge is visual counting, which lacks automation.
Within the project “Quantification of plastic load in the Rhine river”, a method is being developed and tested that can contribute to a more effective, automated, and resource- and time-saving approach. The main aim is to assess the quantities and explore the pathways of macro litter occurring in the Rhine river. The developed sensor system consists of two synchronized sensors: a hyperspectral sensor operating from 350 to 1000 nm and a high-resolution RGB camera. With this sensor system, imagery will be collected from bridges in the German part of the Rhine river under natural daylight conditions, with an adaptable image capture rate of up to one image per second. In principle, this system could be deployed on bridges or other elevated locations over water (e.g. lakes, rivers) across Europe and gather data in a similar way to visual counting today, but with a generalized and highly automated computer vision and deep learning approach.
The recorded imagery forms the basis for training a convolutional neural network (CNN), which aims at predicting several macro litter item categories, as well as potential false positives such as floating vegetation or foam. The categories are mainly selected from the “Joint list of litter categories for marine macrolitter monitoring” published by the Publications Office of the European Union (2021), which is frequently used in macroplastic and litter monitoring across Europe. The deep-learning-based analysis is supported by in-situ data collection at the Rhine river, which will serve as validation data for the CNN. In future, this approach will allow insights into temporal changes in the abundance of various categories of plastics and foster a better understanding of the amount and composition of macroplastic litter floating on the surface of the Rhine river, or other waterways.
Monitoring areas close to plastic marine litter sources, such as rivers and estuarine systems, is crucial for increasing our understanding of transport dynamics and has the potential to improve pollution mitigation strategies. Currently, scientific knowledge about the sources, amount, and spatial variability of macro- and microplastic debris in aquatic ecosystems is still limited. In-situ litter point data is an important source of information; however, its collection is costly, labor-intensive, and only feasible on a small scale. Our central concept is therefore to upscale in-situ data with Earth observation (EO) and hydrodynamic models in the coastal wetlands and coastal waters under the influence of the Po River, Italy.
The goal of the first project phase is to set up a data baseline. Multi-type in-situ data was therefore collected at various points along the pollution pathway. High-resolution monitoring via drone imagery taken along the shoreline has been established for accumulation analyses. Water samples collected from the river, its estuaries, and coastal areas using manta trawls are used to quantify plastic litter abundances. Imagery from different types of camera systems installed on bridges or other infrastructure is analyzed using deep learning approaches in order to automatically detect floating plastic in rivers for continuous long-term observation of river surfaces. This provides improved inputs to transport models.
Spectral investigation of water-surface reflection characteristics by both spectroradiometers and satellites is necessary to scale single-point in-situ data to large-area detection of essential water quality variables. High spatial resolution Copernicus satellite data (Sentinel-2 and -3) provide an excellent option to cover both river and coastal systems under the influence of the river plume. We therefore processed Sentinel-2 and -3 data to extract total suspended matter and sea surface temperature products in order to enable the detection of the river plume shape. The satellite data is further used to explore the detection of floating macroplastic, which can be identified in spectral measurements in the SWIR through the chemical composition of plastic polymers. This allows large-area estimation of plastic exposure along a coastline via proxy relationships to more easily detected water parameters.
In the continuation of the project, numerical models aided by in-situ and remote observations will be implemented. They present a powerful tool to study the dispersion pathways, identify potential sources, and highlight areas potentially at risk of impacts due to floating marine litter.
While we will give an outlook on the model development phase, we would like to focus our presentation on the results of the in-situ survey and the analysis of the collected data. The drone imagery recorded along coastal beaches and within river branches is used to directly detect and quantify macroplastic at the survey locations and thereby identify areas prone to plastic accumulation. This will allow us to present up-to-date maps of macroplastic abundances. Using a manta trawl, we were able to collect water samples from the Adriatic, the coastal areas and estuaries of the Po River, and, for the first time, from within the river itself. The microplastic abundances currently being measured in the laboratory will not only enable us to give an update on the state of the waterbody but also help to complete our understanding of the distribution dynamics. The in-situ survey showed a high spatial and temporal variability of plastic abundances in the Po River, which highlights the importance of a continuous monitoring system. The number of floating plastic pieces per hour obtained by visual counting ranged from around 100 to more than 600 in three river branches as well as the main river. Using different camera systems, we built a training database of more than 2,000 images that is heterogeneous in terms of resolution, illumination, level of disturbance, and types of river plastic. Deep learning models such as Faster R-CNN and YOLO are currently being optimized to evaluate their detection capabilities in general and across the different types of imagery; a minimal inference sketch is given below. The results will contribute to the advancement of automated methods for monitoring plastic pollution.
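The inference step with an off-the-shelf torchvision Faster R-CNN might look like the sketch below; the pretrained COCO weights and the 0.5 confidence cut-off are placeholders for the models being fine-tuned on the river-plastic dataset.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Untuned inference on a dummy image tensor; fine-tuning on the ~2,000-image
# river-plastic dataset would replace the default weights and classes.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

img = torch.rand(3, 480, 640)            # stand-in for a camera frame
with torch.no_grad():
    out = model([img])[0]                # dict with boxes, labels, scores

keep = out["scores"] > 0.5               # arbitrary confidence cut-off
print(f"{int(keep.sum())} detections above threshold")
```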
The integration of these types of in-situ data with multi-scale EO and hydrodynamic modelling serves as the development basis for a spatio-temporal monitoring system of plastic debris in aquatic ecosystems, allowing an end-to-end depiction of real-world debris transport pathways for the first time. Our goal is to contribute to the construction and advancement of such source-to-sink monitoring systems. These would be able to provide precise, up-to-date maps of current and future plastic debris in riverine and coastal areas, aid the identification of environmental, economic, human-health and safety-related impacts of plastic litter, and support targeted efforts of both off- and onshore clean-up projects by focusing on smaller areas with higher plastic abundances.
Annually, an estimated 4.8 to 12.7 million tonnes of plastic debris end up in the sea. As it degrades only very slowly, the amount of plastic in the sea and on beaches worldwide is gradually increasing to ever more alarming levels. Plastic litter has a major negative impact on marine life and can lead to global economic losses. There is a direct impact on the fishery, aquaculture, agriculture, energy, and shipping sectors through blockage or damage of infrastructure such as drains, pipes, cages, gear, and ships.
Given the seriousness of the problem, its extent is surprisingly poorly known and quantified. Apart from some very broad numbers and the knowledge of some extremely vast garbage patches identified in the open ocean, it is generally believed that there is ‘a lot of’ plastic in rivers and oceans, but nobody knows exactly where it is or how much there is.
Despite many recent efforts, current methods are not sufficient to provide a good overall view of the distribution of marine plastics. Much more extensive monitoring is needed, with systematic sampling throughout the year. Macroplastics (with sizes >5 cm, thus visible to the naked eye) are believed to be a main source of marine plastic pollution and of secondary microplastics. Furthermore, most of the volume is associated with these macroplastics, and they decay into microplastics, which are much harder to trace and almost impossible to remove. It is therefore crucial to detect and remove macroplastics before they are broken down into smaller and smaller pieces, and in this way mitigate the further generation of secondary microplastics for decades to come. Current methods largely underestimate macroplastics because they often only measure a limited range of sizes. Larger pieces of plastic are much rarer and often remain undetected. To quantify the abundance and size distribution of larger debris, larger areas need to be monitored.
Unmanned aerial vehicles are widely adopted as tools for surveying water surfaces and coastal regions, and image analysis is being used to estimate debris concentrations from the resulting imagery. At a larger scale, detection based on satellite images would be the most efficient way of covering large areas. However, current satellite missions are designed either for low-resolution ocean colour applications or for land applications, so their capabilities are less than ideal for the purpose of marine litter detection.
Detecting and monitoring marine litter in a systematic way is a challenging ambition as vast areas need to be covered with high spatial resolution. Previous research has indicated that for discriminating marine plastics from other surface features, a set of spectral bands combining the visual, NIR and SWIR spectral range is very useful.
Existing and upcoming hyperspectral missions capture all the necessary spectral information, but at much lower spatial resolutions. Most high-resolution satellite missions offer spectral bands that are not optimal for marine plastic monitoring; with the notable exception of WorldView-3, they do not offer any high-resolution SWIR bands at all. Furthermore, the swath of high-resolution missions is typically very limited (~13 to 20 km at nadir), so they cannot frequently cover very large areas (e.g. 1000 km x 1000 km).
We have started a short-term ESA-BELSPO PRODEX study in which we assess the feasibility of detecting marine macroplastics from space. In the study, we first aim to understand in detail the needs of the main stakeholders working on marine plastics. From this, we will define minimal requirements on remote sensing data for performing useful macroplastic detection. Next, we will propose a possible satellite mission concept in line with the requirements and match it with available technology. We foresee proposing a mission concept that combines an innovative imaging payload design with a smart acquisition scheme. We will show the results of the feasibility study, including the stakeholder analysis and a first mission conceptual design.
Remote sensing has the potential to better quantify and identify the sinks and sources of plastic pollution, working towards solutions that provide better understanding and enable better policies and novel ways to tackle the problem. However, the use of Earth observation for plastic detection still presents a series of challenges. Arguably, the main challenges could be summarised in two issues: first, the constraints of satellite pixel sizes for detecting macroplastics, and second, how to exploit the spectra to differentiate plastic polymers from algae or other debris. Pixel resolution, spectral bands, and signal-to-noise ratio play a crucial part in detecting larger plastics. Only large aggregations of plastics can be detected using satellite technologies; alternatively, lower-altitude methods can be used by fitting sensors to aircraft or Remotely Piloted Aircraft Systems (aka drones) to quantify plastic debris.
This work will present results from the ESA SIMPLER project (SensIng Marine Plastic Litter using Earth observation) and the ESA OSIP HyperDrone project, which aim to further our capabilities to detect plastics in riverine and shoreline environments, respectively. Both projects undertook field and controlled experimental campaigns and created spectral libraries that will be introduced in the talk.
A field campaign was undertaken on the shoreline at Oban Airport (UK), acquiring reflectance observations from different dry plastic targets over its mixed sandy and rocky beach using both in-situ instruments (SVC) and the co-aligned hyperspectral Headwall imager covering the VNIR and SWIR regions (400 – 2500 nm). The reflectance of 15 targets of different composition, including polystyrene, polypropylene, nylon, PVC, HDPE..., was collected at altitudes of 30, 60, 90 and 120 metres using the Headwall sensor mounted on a drone platform (DJI M600). We assess how spectra in the SWIR can be exploited for plastic detection along the shoreline using algorithms based on different spectral indices, aiming to establish a threshold for subpixel detection at different altitudes.
In addition, a series of controlled laboratory experiments using the SVC hyperspectral spectrometer was undertaken to collect data from those plastic types. The reflectance of each plastic target was collected with the plastics floating on water as well as partially submerged.
Using the measurements collected and the 6S radiative transfer code, the modelled plastic remote sensing reflectance at satellite altitude will be presented. Based on the spectral features, the plastic detection algorithms, and the modelled results, precise spaceborne requirements will be assessed for satellite plastic missions. These will include the characteristics of the spectral bands and the spatial resolution needed to determine the minimum size of plastics for subpixel detection methods, as well as estimates of the corresponding signal-to-noise ratio.
The presence of plastic litter in the coastal zone has been recognized as a significant problem. It can dramatically affect flora and fauna and lead to severe economic impacts on coastal communities, tourism, and fishing industries. The traditional reporting protocol is organized through individual transects on the beach, recording the presence of litter. In the new era of drone usage, a new integrated Coastal Marine Litter Observatory (CMLO) is proposed. The CMLO automatically produces marine litter accumulation maps of the coastal area using drone imagery and deep learning algorithms. The aerial images can be collected by non-experienced citizens using commercial drones, following a dedicated acquisition protocol. Once the datasets are collected, the user can upload them to a web platform where all the preprocessing and analysis occurs. As a first step, the aerial images are automatically checked for quality, georeferencing, and usefulness. Once the dataset is checked, a deep learning algorithm runs to detect marine litter. Litter items are classified into seven categories according to the OSPAR identification and categorisation of litter on beaches, and their exact position on the beach is recorded. The last step is the creation of marine litter density maps, produced by counting the individual litter items within 100-square-metre areas of the beach (a minimal sketch of this binning step is given below). The entire process requires a few minutes to run once the aerial data is uploaded online. The density maps are automatically reported to a spatial data infrastructure, ideal for time-series analysis. The system depicts all the automatically extracted marine litter as geospatial information related to: i) concentrations (densities) of marine litter on various beaches; ii) spatio-temporal visualizations of marine litter accumulation for every beach uploaded by the user; and iii) statistical results of marine litter concentrations for every monitored area.
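The density-map step reduces to binning georeferenced detections into fixed-size cells; below is a minimal sketch with synthetic item positions and 10 m x 10 m (100 m2) cells.

```python
import numpy as np

# Bin detected litter positions (in local metric coordinates) into
# 10 m x 10 m cells, i.e. counts per 100 m2 of beach. Positions are synthetic.
rng = np.random.default_rng(7)
x = rng.uniform(0, 200, 500)   # easting of detected items (m)
y = rng.uniform(0, 50, 500)    # northing of detected items (m)

counts, xe, ye = np.histogram2d(x, y, bins=[np.arange(0, 201, 10),
                                            np.arange(0, 51, 10)])
print("max density:", counts.max(), "items per 100 m2 cell")
```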
Classification accuracy, calculated against manual identification, is 85%. In contrast with most recent studies, we trained the deep learning algorithms with a significantly larger training and validation dataset, and the generalization ability of the deep learning models was evaluated on a completely new beach environment. Thus, CMLO deep learning detection and classification can be geographically transferred to new and unknown beaches. The Coastal Marine Litter Observatory presents several benefits over traditional reporting methods, i.e. improved measurement of policies against plastic pollution, validation of marine litter transportation models, monitoring of SDG Indicator 14.1.1 and the EU MSFD Indicator D10C1, and, most importantly, guiding cleaning efforts towards areas with a significant amount of litter. The proposed marine litter mapping approach can be used towards the need for marine litter data standardization and harmonization. The CMLO platform allows interoperability and provides a solution for automatic reporting and time-series analysis.
Marine plastic litter has become a global problem, affecting the health of marine ecosystems as well as damaging the economy and activities of coastal communities. Due to the increase in concentrations of plastics in the marine environment and uncertainty about plastic sources, pathways, and sinks, there is a need to develop cost-effective, reliable, repeatable, and large-scale monitoring of plastic litter in coastal waters. Past studies have used machine learning algorithms on high-resolution remote sensing data from aircraft and unmanned aerial vehicles (UAVs) to successfully detect single plastic items. These methods, however, do not offer the large-scale monitoring needed for national or international monitoring programs. Recently, Sentinel-2 optical satellite data has become a primary focus in floating litter research thanks to its global coastal coverage, 5-day revisit time, and spectral wavelengths suited to floating litter detection. Although Sentinel-2 offers a solution for cost-effective and large-scale monitoring of floating litter, research gaps remain in the identification of naturally occurring floating litter events and in the development of generalised models that account for variation in the spectral signature of compositionally varied litter across time and space. This study assesses the feasibility of two machine learning algorithms for creating a model to detect floating plastic litter. More specifically, we first employ validated global litter events, originating either from in-situ measurements or from field-expert validation, to create training and testing datasets. Secondly, we apply Random Forest classification using eCognition and the Python scikit-learn machine learning library to train the model for both pixel-based and object-based (OBIA) image analysis to detect floating plastic litter; a pixel-based sketch is given below. In addition, the same procedure is applied to WorldView-3 (WV3) high-resolution satellite data and conclusions are drawn about the role spatial resolution plays in the accuracy of floating litter detection. Finally, we compare the Random Forest results with deep learning pixel-based and OBIA approaches using the Python Keras API and TensorFlow. The preliminary results show that Random Forest is a successful way of predicting floating plastic if 1) the plastic accumulations are large enough to create objects for OBIA, 2) the floating plastic is dense and compact, and 3) the quality of the Sentinel-2 data is sufficient. The initial findings from deep learning show very high model accuracy; however, further testing is required. The study relates the findings to a possible application of machine learning in near-real-time mapping of floating plastic litter. In conclusion, using satellites to detect plastic patches greater than 10 m at a global level has been agreed as an indicator for marine plastic litter under Sustainable Development Goal (SDG) target 14.1 under the United Nations Environment Programme. As such, this work directly supports SDG 14.1 and contributes towards the development of harmonised methods to identify and reduce plastic pollution.
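A pixel-based sketch of the Random Forest stage with scikit-learn, using synthetic spectra in place of the Sentinel-2 training set built from validated litter events:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Features are per-pixel band reflectances; labels come from validated litter
# events in the real workflow. The synthetic arrays below stand in for both.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.03, 0.01, (300, 10)),   # water pixels, 10 bands
               rng.normal(0.07, 0.02, (300, 10))])  # floating-litter pixels
y = np.repeat([0, 1], 300)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(rf, X, y, cv=5).mean().round(3))
```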
Rivers are the main pathways of land-based plastic waste into the world's oceans. Accurate estimates of river plastic emissions are therefore crucial to better understand the sources, mass balance, and fate of marine plastic pollution (Meijer et al., 2021). In contrast to what is often assumed, most plastics that leak into the terrestrial and riverine environment do not end up in the ocean. A growing amount of observational evidence suggests that plastics in fact accumulate within river systems and can be retained for years, decades, and potentially even longer. The majority of macroplastics (>0.5 cm) in freshwater systems are hypothesized to accumulate on riverbanks and floodplains, in riparian and floating vegetation, within sediments, or in estuaries. Rivers can therefore be considered plastic reservoirs (van Emmerik et al., in review). Over these long retention time scales, plastics may degrade and fragment into micro- and nanoplastics, which are in turn more likely to be flushed out of the system. Extreme events, such as coastal or fluvial floods, may also lead to emptying of the plastic reservoir. To better understand and quantify plastic accumulation, (re)mobilization, and fragmentation, large-scale and long-term observations are crucial. Multispectral sensors may provide a new avenue for accurate upscaling of plastic observations over time and space. Recent research shows that macroplastics have a clear spectral reflectance signal, which offers new opportunities for detection and monitoring of plastics in riverine and marine environments using close-range and spaceborne remote sensing techniques (Biermann et al., 2020; Tasseron et al., 2021). In this presentation, we discuss how plastics can be discriminated from water, organic material, and other debris through their reflectance spectra and derived indices. We give examples of how these findings can be used to detect and quantify plastic pollution across spatial scales, ranging from experiments under controlled conditions to field applications in river systems and riverine plastic monitoring using multispectral satellite imagery (Schreyers et al., 2021). Here, we specifically focus on applications in river basins in the Netherlands and Vietnam. Finally, we provide an outlook for future work, including the use of available satellite imagery for historical long-term assessments, and suggestions for future multispectral remote sensing systems for plastic monitoring. With our presentation, we aim to emphasize the importance of harmonized large-scale and long-term plastic monitoring tools to better understand and quantify plastic accumulation within rivers and emissions into the ocean.
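One of the derived indices referenced here is the Floating Debris Index of Biermann et al. (2020); a sketch of its computation from Sentinel-2 bands B6, B8, and B11 is given below, with toy reflectance values. Readers should check the published formulation before reuse.

```python
import numpy as np

# Floating Debris Index (Biermann et al., 2020): FDI = R_NIR - R'_NIR, where
# R'_NIR is a baseline interpolated between red-edge 2 (B6) and SWIR1 (B11),
# scaled by a factor of 10 as in the published formulation (to be verified).
L_RED, L_NIR, L_SWIR1 = 665.0, 832.8, 1610.4  # central wavelengths (nm)

def fdi(b6, b8, b11):
    baseline = b6 + (b11 - b6) * (L_NIR - L_RED) / (L_SWIR1 - L_RED) * 10.0
    return b8 - baseline

# Debris-like vs water-like pixels (toy reflectances for B6, B8, B11)
print(fdi(np.array([0.04, 0.02]), np.array([0.09, 0.02]), np.array([0.03, 0.01])))
```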
References
Biermann, L., Clewley, D., Martinez-Vicente, V., & Topouzelis, K. (2020). Finding plastic patches in coastal waters using optical satellite data. Scientific reports, 10(1), 1-10.
Meijer, L. J., van Emmerik, T., van der Ent, R., Schmidt, C., & Lebreton, L. (2021). More than 1000 rivers account for 80% of global riverine plastic emissions into the ocean. Science Advances, 7(18), eaaz5803.
Schreyers L, van Emmerik T, Nguyen TL, Phung N-A, Kieu-Le T-C, Castrop E, Bui T-KL, Strady E, Kosten S, Biermann L, van den Berg SJ and van der Ploeg M (2021) A Field Guide for Monitoring Riverine Macroplastic Entrapment in Water Hyacinths. Front. Environ. Sci. 9:716516. doi: 10.3389/fenvs.2021.716516
Satellite remote sensing has great potential to become a breakthrough in mapping marine litter. One limiting factor for its full development is access to reliable, extensive, and consistent ground-truth observations of debris. Today, some of the best-performing technologies for image analysis were built using open labelled databases. Ocean Scan integrates an inclusive labelled global ocean plastic database, a web platform, and a mobile application. With observations significantly more extensive and geographically more diverse than any single research campaign, it enables the scientific community to work globally and tackle the problem collaboratively.
During the past two decades, the amount of in-situ data and information about marine litter has dramatically increased, especially in the last five years. News about the different garbage patches and citizen awareness has also led to the publication of online surveying campaigns and mobile apps to collect data. An increasing number of projects and initiatives that address the issue of marine litter worldwide include local in-situ campaigns for litter collection and identification of litter signatures in aquatic environments through the pairing of remote sensing technologies and Artificial Intelligence (AI) techniques. However, despite the growing effort done in this direction, remote sensing research is experiencing a scarcity of relevant validation data. Although in-situ information exists, the datasets available come with several limitations that ultimately reduce their usefulness in the remote sensing application field.
First, information is distributed sparsely across different databases, which are often not updated, inaccessible, or focused only on a specific area. Second, in-situ data collection of plastics is usually tackled in a project-specific approach and often by non-scientific organizations or teams not working with remote sensing technologies. Consequently, the methodologies used to collect the data are not standardized. The design of sampling methodologies rarely considers the requirements for obtaining a reliable and robust ground truth that would allow proper AI and Earth observation research. For example, existing in-situ databases of marine litter often lack accurate geolocation and a temporal (date and time) stamp, essential metadata for remote sensing studies.
The increasing possibilities of remote sensing technologies in marine litter research can potentially address the identification of debris, the type of litter, pollution sources, distribution patterns, generating a growing demand for curated ground data to develop and validate the different approaches used. It is to address this urgent need that Ocean Scan was born. The Ocean Scan database brings together global in-situ observations and their matching Earth Observation data in one place. It provides a standardized approach with tools to collect and classify in-situ data. It aims to unlock and promote the potential of Earth observation research in marine litter studies, implementing complete data collection methodologies and fostering collaboration between organizations and researchers across the world, relying on a clear code of conduct.
Ocean Scan offers additional features specifically designed to benefit remote sensing researchers. It provides access to a catalogue via a web platform and API. In-situ observations of marine litter are automatically linked with matching satellite images from Sentinel-1, 2 and 3, based on their location and time stamp. Users have access to selectable baseline options in time, space, sensor type and others, and can visualize all existing observations and campaigns on a global map. It is configured to streamline data ingestion from other systems through the dedicated API or directly ingest observational samples during field campaigns through a straightforward and intuitive mobile application.
Ocean Scan users retain full ownership of their data. A Zenodo DOI is provisioned to every dataset ingested, enabling and supporting credit recognition and data provenance and offering early accreditation for work before the long process of paper approval.
Ocean Scan aims at facilitating and promoting global cooperation across marine litter research, offering a powerful tool not only for research but also for organizations working on marine litter and debris. From a downstream application point of view, Ocean Scan: (i) provides the grounds for extensive studies of remote sensing with AI for marine litter detection and tracking around sources, sinks, and pathways; (ii) provides a hands-on and very practical platform to boost EO studies for marine litter detection, greatly facilitating collaboration between EO scientists and the biologists and oceanographers who monitor and study plastic pollution occurrence in situ; (iii) offers a unified hub and a harmonized data and metadata format, fulfilling the requirements for use in AI modelling in terms of standardization, structure, and size, and streamlining data collection and standardization processes; (iv) offers tools to facilitate contributions to a global database, both by migrating existing data from past campaigns and by easing data collection for future campaigns; (v) supports the launch of targeted data-rescue campaigns at sea; and (vi) by promoting and boosting remote sensing studies, accelerates the understanding of the problem, supporting the design of tailored solutions on the ground and of future remote sensing missions.
Floating aquatic vegetation plays an important role in accumulating and transporting macroplastic debris from rivers into the oceans. Up to 78% of floating macroplastic debris was found entrapped in hyacinth patches in the Saigon river, Vietnam (Schreyers et al., 2021). Water hyacinths are invasive, fast-growing, free-floating plants that typically form patches several metres in width and length. This makes them easily detectable in freely available imagery, such as that from the European Space Agency (ESA) satellites. In rivers, hyacinth propagation is a highly dynamic phenomenon, mainly governed by hydrometeorological factors and nutrient availability. Recent studies have shown that seasonal and annual hyacinth coverage can vary severalfold (Janssens et al., 2021, in review), highlighting the need for long-term monitoring. Frequent, long-term, and large-scale observations are best obtained with the all-weather Sentinel-1 radar. In this study, we characterized the seasonality and long-term trends of the hyacinth invasion in the Saigon river, Vietnam, using Sentinel-1 data over seven years (2015-2021). This allowed us to estimate yearly and monthly variability in hyacinth coverage over the entire river system.
The 10 m spatial resolution of the Sentinel-1 sensor, however, is unsuitable for detecting macroplastic aggregated within hyacinth patches. We therefore coupled our large-scale mapping of the hyacinth invasion with close-range remote sensing of macroplastic-hyacinth aggregations. Unmanned Aerial Vehicle (UAV) images were collected over a one-year period (2021) at the Saigon river. We quantified the daily, weekly, and monthly variability of hyacinth coverage and monitored macroplastic concentrations inside and outside hyacinth patches. This allowed for comparison with the monthly and yearly Sentinel-1 hyacinth estimates. This multiscale approach allows us to determine the temporal scales at which hyacinth propagation in tropical rivers is best monitored. In addition, this research is preparatory for estimating annual concentrations and fluxes of macroplastic in rivers using hyacinths as a proxy. These findings can be used to inform hyacinth control and macroplastic debris clean-up and mitigation strategies.
References
Schreyers, L., van Emmerik, T., Luan Nguyen, T., Castrop, E., Phung, N.-A., Kieu-Le, T.-C., Strady, E., Biermann, L., van der Ploeg, M. (2021). Plastic plants: Water hyacinths as driver of plastic transport in tropical rivers. Frontiers in Environmental Science 10.3389/fenvs.2021.686334
Janssens, N., Schreyers, L., Biermann, L., T.-K., van der Ploeg, M., van Emmerik, T. Rivers running green: Water hyacinth invasion monitored from space. [In review]
Plastic marine litter is becoming an increasing threat to our planet, with considerable impact on the oceans, which also receive mismanaged plastic waste from land and rivers. In oceanic waters, plastic marine litter often accumulates in remote sub-tropical gyres, such as the Great Pacific Garbage Patch. In recent years, several studies have investigated and described the diversity and complexity of the plastic marine litter issue: among other aspects, it has been highlighted that plastic marine litter is mainly distributed in the upper 5 m of the water column, depending on the specific properties of the object. This implies the need for techniques able to acquire data in the water column, not only for microplastics (< 5 mm) but also for the less abundant macroplastics (> 5 mm) below the water surface. In this respect, the backscatter LIDAR technique can offer the advantage of good penetration into the water column and the possibility of detecting objects that are not floating on the water surface and, in principle, of defining their shape and volume. On the other hand, the working principle of the technique does not allow discrimination of plastic marine litter from other types of debris.
Although several papers have proposed the backscatter LIDAR technique as a potential tool for the detection of marine litter, very few studies have addressed its feasibility, and only sporadic data acquired in real marine scenarios are available in the literature. As a consequence, the potential of the backscatter LIDAR technique for addressing the marine litter issue in real marine scenarios remains largely unexplored.
The present paper aims to contribute to an understanding of the actual capabilities of the technique for marine litter remote sensing. It describes an airborne campaign carried out over the Great Pacific Garbage Patch and the data acquired using a circular-scanning backscatter LIDAR system along two transects, each 600 km long. The objective of the work is to present the results obtained by processing this extensive LIDAR dataset with respect to marine litter detection in oceanic waters, to discuss the main advantages and drawbacks, and to provide a way forward for future experiments.
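For illustration, the detection principle can be reduced to flagging backscatter peaks that stand out above the clear-water background of each profile. The sketch below is a minimal, hypothetical example; the array names, window size and threshold are assumptions, not the campaign's actual processing chain.

```python
# Hypothetical sketch: flag anomalous subsurface returns in a single
# backscatter LIDAR profile. Thresholds and window sizes are illustrative.
import numpy as np
from scipy.signal import find_peaks

def flag_litter_candidates(profile, depths, background_win=51, k=4.0):
    """Return indices and depths of peaks exceeding a running background."""
    # Running median as a simple estimate of the clear-water return.
    pad = background_win // 2
    padded = np.pad(profile, pad, mode="edge")
    background = np.array([np.median(padded[i:i + background_win])
                           for i in range(profile.size)])
    residual = profile - background
    noise = np.median(np.abs(residual)) * 1.4826  # robust sigma estimate
    peaks, _ = find_peaks(residual, height=k * noise, prominence=k * noise)
    return peaks, depths[peaks]
```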
This study was funded by the Discovery Element of ESA's Basic Activities, contract no. 4000132184/20/NL/GLC.
The Marlisat project was funded through a Remote Sensing of Plastic Marine Litter competition on the European Space Agency (ESA) Open Space Innovation Platform. The aim is to use satellite Earth Observation (EO) to detect the source locations of marine plastics; trajectory modelling, supplemented by information from purpose-developed floats (which act as proxies for large accumulations of plastic), is then used to predict where the plastic will end up. If the plastic returns to the coast as marine litter, the EO data can be further used to confirm the presence of accumulations.
This presentation focuses on the use of satellite EO to detect sources and accumulations around the coastline of Indonesia. The baseline detection is performed using a combination of Copernicus Sentinel-1 and -2 data, utilising an approach developed by Page et al. (2020, https://www.mdpi.com/2072-4292/12/17/2824) for on-land detection of plastic and tyre waste sites. The Machine Learning (ML) approach has been extended to utilise a Neural Network (NNet) instead of, or in addition to, the original Random Forest approach. A training dataset has been generated from sites around the planet, including legal and illegal waste sites alongside known sites of marine plastic accumulation. This dataset is continually growing as new sites are identified through web articles or personal communication.
The automated processing code runs the chosen trained ML model over a specified location and time range, creating a map of detected plastic locations, as sketched below. Before applying the ML model, pre-processing is optionally used to reduce false detections from known, often site-specific, sources of error. For example, Indonesia has high cloud coverage, with artefacts left behind even after conservative cloud masking, while detection of waste near the greenhouses in Southern Spain is affected by radar shadow. Ongoing work focuses on achieving consistent accuracy over time and on how detection certainty may be quantified, with the first version of the dataset generated in December 2021.
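As a rough illustration of this processing step, the sketch below shows a generic pixel-wise Random Forest classification of stacked Sentinel-1/-2 features. File names, feature layout and hyperparameters are illustrative assumptions, not the project's operational code.

```python
# Minimal sketch of pixel-wise plastic detection from co-registered
# Sentinel-1/-2 band stacks stored as numpy arrays (hypothetical files).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# X_train: (n_pixels, n_features) stacked S1 backscatter + S2 reflectances
# y_train: 1 = plastic accumulation, 0 = other surfaces
X_train = np.load("training_features.npy")   # hypothetical file
y_train = np.load("training_labels.npy")     # hypothetical file

model = RandomForestClassifier(n_estimators=500, n_jobs=-1)
model.fit(X_train, y_train)

# Apply to a scene: reshape (bands, rows, cols) -> (pixels, bands).
# Cloud masking would be applied beforehand, as discussed in the text.
scene = np.load("scene_stack.npy")           # hypothetical file
pixels = scene.reshape(scene.shape[0], -1).T
probability_map = model.predict_proba(pixels)[:, 1].reshape(scene.shape[1:])
```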
In addition to Copernicus data, higher spatial resolution Planet and ICEYE data have been identified and will be used both to confirm the findings and to test fused inputs, where a higher spatial resolution ML input is achieved by sharpening the Sentinel products.
Plastic pollution is one of the largest anthropogenic threats to the marine environment of this century, with plastics representing over 80% of human-made debris present in the oceans. Approximately 12 million tonnes of plastic waste enter our oceans annually, posing a significant threat to marine ecosystems. Although plastics enter the marine environment through riverine and coastal sources or direct disposal, it is widely acknowledged that rivers play a crucial role in the transport of ocean plastic pollution, acting as the arteries that carry waste from land to ocean. The global spread of plastic pollution poses an issue for policymakers, as it is not constrained by national boundaries; instead it is transported by water and air currents, congregating at river mouths and coastal cities.
At an international scale, momentum for addressing the issue of plastic pollution is mounting, and there is a plethora of agreements relating to maritime sources of plastic waste. At present, however, there is no overarching legally binding agreement addressing the land-based sources of marine litter, particularly one with measurable reduction targets to limit future plastic emissions. One barrier to implementing such a policy is the absence of a consistent global monitoring capability to ensure compliance with, and monitor the effectiveness of, current regulations, as well as to supply critical information on the status of marine litter to support the creation of new strategies. Furthermore, at local and national scales, there has been much development and mobilisation over the last 5-10 years by both for-profit and non-governmental organizations (NGOs) focused on the clean-up of plastic waste. These organisations work in a range of geographical regions and focus on the collection, removal, and management of marine and riverine litter. The ability to prevent and mitigate plastic pollution locally and nationally varies by nation and region and depends heavily on the resources available for waste management and behaviour change.
To date, few studies have focused on a reliable solution for identifying locally specific dense clusters of plastic waste to target operations. A consistent monitoring service could enable these organisations to direct their cleaning efforts towards the areas with the greatest marine litter densities, allowing them to collect substantial volumes of marine litter while reducing running costs, improving efficiency and encouraging positive behaviour change. A robust, globally applicable monitoring system would therefore assist plastic policy at regional, national, and international levels, accelerating the pace and scale of the response to plastic emissions.
Over the last year, CGG Satellite Mapping, supported by the European Space Agency's Space Solutions and in collaboration with Mott MacDonald and Brunel University London, conducted a 12-month feasibility study to identify and monitor floating macro to mega marine litter in fluvial and coastal environments using Earth Observation (EO) data. Sustained observation via remote sensing offers distinct advantages for determining the marine plastic debris mass balance, owing to its extensive area coverage and frequent revisit. The ability to detect large aggregations of floating plastics in EO data will support a better understanding of the sources, pathways, and trends of litter in the marine environment before it becomes entangled, ingested, fragmented, or degraded. The study focused on identifying “hotspot” locations of large aggregations of floating marine litter, monitoring the source location and frequency of accumulations, and analysing the size and distribution of the material. These parameters provide input into local drift models to improve knowledge of the spatio-temporal distribution of floating debris.
The study evaluated the extent to which current and planned remote sensing technology matches the spatial, spectral, and temporal scales required for marine plastic debris observations in river and estuary environments. Three case study locations, in Europe, SE Asia and the Caribbean, were examined using a range of EO data and processing techniques. The expected marine litter (density, location, composition) in each of these settings is different, and the resolution, monitoring frequency, spectral range, and platform for data acquisition therefore need to be specifically targeted for each setting. The suitability of freely available, open-access EO data over each site was assessed, as well as that of high resolution commercial data as and when required to alleviate problems associated with cloud cover and weather conditions. In addition, in-situ ground truth (e.g. samples or photographs) was used when available to validate the EO data, with the support of local NGOs and waste management organisations. The study also analysed environmental data, such as precipitation and wind information, to support the understanding of the movement of marine litter within these environments and the transboundary migration of plastics. Synergy with other technologies, such as higher resolution drones or HAPS, can be helpful to initially locate and identify small marine litter accumulations that can subsequently be monitored using EO systems.
With close engagement from a broad group of end users, CGG plans to develop a marine litter monitoring system to support local waste management programs, increase awareness and provide feedback on the long-term effects of environmental waste management initiatives in river and estuary environments. The satellite-derived system has the potential to complement and work in tandem with international policy efforts to combat marine plastic pollution and provide a transboundary solution to a global problem. It is hoped the system will assist in monitoring the specific, measurable, and time-bound targets set by the international community to reduce plastic emissions into the marine environment.
Most progress in remote sensing of marine plastic litter has been made in ocean colour sensing (OCS), where optical physics applies. It has become clear that OCS, based on surface reflectance of sunlight, would benefit from complementary measurements using different technologies. Surface-leaving thermal infrared radiance (TIR) has significant potential for monitoring macroplastic floating on the water surface. For example, TIR sensing does not depend on sunlight and can look through light snow and rain. Plastic materials that are transparent or dark in colour are difficult to see in the optical spectrum but may appear opaque, and thus easier to detect, in the thermal spectrum.
We will show the results of drone surveys flying a FLIR (forward-looking infrared) camera over different plastic litter targets floating at sea. The FLIR camera senses in long-wave infrared (LWIR), 7.5-13.5 μm. We performed surveys during day and night, in summer as well as winter, to cover a range of temperature and light conditions. During daylight surveys, visible and near-infrared cameras recorded concurrently for comparison. The resulting datasets show the potential of thermal imaging for monitoring floating plastic, especially at night, when optical wavelengths are ineffective without a light source. Dependence on the different temperatures of the plastic targets, the air and the sea, and on the cloudiness of the sky, complicates interpretation of thermal images. Consequently, some locations, seasons and times of day will be better suited to TIR sensing of floating plastic litter than others. These may well be conditions under which OCS is limited, for example during the Arctic winter.
The methodology applies to plastic litter floating on top of the water surface, as water absorbs LWIR within the first millimetre. It will therefore not be suitable for monitoring plastic debris below the water surface. The method was tested in coastal waters and is proposed as transferable to freshwater bodies such as rivers and lakes, and to snow and ice. Future investigation will test TIR sensing on beached plastic litter to assess performance over shorelines, and will further study the capability to differentiate between different kinds of floating matter. The techniques presented provide improved detection of floating plastic litter from aerial or drone-based measurements and can inform potential future satellite-based TIR remote sensing as ground resolution improves.
Rivers function as major pathways for the transport of plastic litter from land-based sources into the ocean. Efforts to quantify riverine plastic inputs and fluxes are increasing but are currently hindered by limited observations. As floating plastic travels downstream, it accumulates where the flow is locally reduced, forming larger patches. In this study we investigate such accumulations of floating debris using images from different satellite sensors. Applying this monitoring method to rivers enables us to detect plastic litter before it reaches the oceans.
Although current satellite mission concepts were not specifically designed for the detection of plastic debris, some sensors have the potential to be utilized for plastic detection. With support from the ESA Discovery Campaign, we have access to the range of satellite sensors needed to develop a multi-sensor monitoring method for detecting floating plastic litter. These satellite datasets, together with the data we are collecting in an onsite clean-up initiative, help to fill the gap in observational data needed to advance data science techniques for plastic detection. Time-lapse images and photographs, in combination with waste sampling at the accumulation hotspot site, enable characterisation of the percentage areal coverage of floating debris on the water surface and of the waste composition.
First results comprise prototype software for extracting statistics on floating debris from time-lapse camera images. The underlying algorithm uses the difference in brightness between the background water surface and the floating debris objects to be detected, as sketched below. The software produces a video highlighting debris passing through a selected zone within the frame, and generates statistics on the total number of frames used for the analysis, the total number of debris items, the total area (in pixels or, if geo-referenced, in areal units), the maximum flux of debris items and the maximum areal flux of debris items.
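The brightness-difference principle can be sketched as follows. This is a minimal illustration assuming a grayscale image stack, not the prototype software itself; the threshold is an assumption.

```python
# Minimal sketch: debris differs in brightness from a median background
# built from the time-lapse stack; connected bright/dark regions are counted.
import numpy as np
from scipy import ndimage

def count_debris(frames, threshold):
    """frames: (n_frames, rows, cols) grayscale stack; per-frame counts/areas."""
    frames = frames.astype(float)
    background = np.median(frames, axis=0)          # static water surface
    counts, areas = [], []
    for frame in frames:
        mask = np.abs(frame - background) > threshold
        labels, n = ndimage.label(mask)             # connected debris objects
        counts.append(n)
        areas.append(int(mask.sum()))               # total area in pixels
    return counts, areas
```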
The in situ results support the development and improvement of debris detection from satellite data and the validation of the resulting plastic maps. Ultimately, these techniques will enable us to validate model estimates of riverine fluxes, to improve our understanding of global riverine plastic fluxes from source to sea, and to contribute to integrated assessments of the state of pollution. Another outcome will be recommendations for informing ESA's future satellite missions, using our results on the plastic detection capacity of existing sensors to inspire future mission design.
Water hyacinths play an important role in gathering and transporting macroplastic litter in riverine ecosystems. These fast-growing, free-floating, and invasive freshwater plants tend to form large patches at the water surface, which makes it possible to detect and map them in freely available imagery collected by the European Space Agency (ESA) satellites. In polluted rivers, hyacinth patches may thus serve as a viable proxy for macroplastics. However, at the ~10 m spatial resolution offered by the Sentinel-1 and Sentinel-2 satellites, it is not possible to discriminate smaller items of plastic caught up within large plant patches.
For the first time, we demonstrate that river plastics are detectable in higher spatial resolution optical satellite data. In the Saigon River around Ho Chi Minh City, plastic debris was successfully detected within hyacinth patches using MAXAR's Worldview-3 multispectral (~1.24 m) and panchromatic (0.31 m) imagery. For the multispectral data, we selected the ACOLITE atmospheric correction algorithm and applied a novel detection index that leverages Worldview's near-infrared and red bands to highlight differences between vegetation, debris, and river water. Applying a local normalization (moving average) over the scene also served to reduce contributions from the highly turbid background water; a sketch of this approach is given below. This approach allowed detection of river plastics within hyacinth patches floating downstream from populated areas towards coastal waters.
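The sketch below illustrates the general form of such an approach: a normalized difference of NIR and red bands followed by a moving-average local normalization. The exact index and window used in the study are not reproduced here; all names and values are assumptions.

```python
# Illustrative sketch: generic NIR/red normalized-difference index with
# local (moving-average) normalization to suppress turbid-water background.
import numpy as np
from scipy.ndimage import uniform_filter

def debris_index(nir, red, win=51, eps=1e-6):
    ndi = (nir - red) / (nir + red + eps)        # vegetation/debris vs. water
    local_mean = uniform_filter(ndi, size=win)   # moving-average background
    return ndi - local_mean                      # locally normalized anomaly
```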
COLOR (CDOM-proxy retrieval from aeOLus ObseRvations) is an ongoing (kick-off: 10/3/2021) 18-month feasibility study approved by ESA within the Aeolus+ Innovation program. The objective of COLOR is to evaluate and document the feasibility of deriving an in-water AEOLUS prototype product from the analysis of the ocean sub-surface backscattered component of the ultraviolet (UV) signal at 355 nm. In particular, COLOR focuses on the potential AEOLUS retrieval of the diffuse attenuation coefficient for downwelling irradiance at 355 nm (Kd(355)). As Kd(355) is highly sensitive to absorption by CDOM (Chromophoric Dissolved Organic Matter), it can be used as a proxy for this variable, which contributes to regulating the Earth's climate.
To assess the quality of in-water Kd(355) coefficients retrieved from AEOLUS, the largest currently available database of in situ UV radiometry distributed across the global ocean is that provided by the Biogeochemical (BGC)-Argo array. BGC-Argo floats provide autonomous measurements of downwelling irradiance (Ed) at 380 nm over the upper 250 m of the ocean, every 1 to 10 days. These profiles are quality-checked with specifically designed procedures, and Kd(380) coefficients are then derived. In COLOR, seven areas representative of a variety of trophic and optical conditions have been identified for product validation: 1) North Atlantic subpolar gyre; 2) North Atlantic subtropical gyre; 3) South Atlantic subtropical gyre; 4) Black Sea; 5) North Western Mediterranean Sea; 6) Levantine Sea (Mediterranean Sea); 7) Southern Ocean – Indian sector. These areas are also representative of the global distribution of CDOM and experience diverse meteorological conditions (e.g., cloudiness) that could impact AEOLUS's data availability and retrievals.
Since BGC-Argo radiometry data in the selected areas have been available since 2012, two validation strategies will be applied in COLOR:
a) match-up analysis between BGC-Argo and AEOLUS, for the period beyond 2019;
b) climatological comparison of AEOLUS and BGC-Argo Kd data, i.e., including a statistically significant number of observations for each area that encompasses the whole expected seasonal variability.
Preliminary results of the validation of AEOLUS CDOM proxies will be presented here.
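For illustration only (this is not the COLOR retrieval algorithm itself), a diffuse attenuation coefficient can be estimated from the log-linear decay of Ed with depth, Ed(z) = Ed(0-) exp(-Kd z), via a least-squares fit:

```python
# Sketch: estimate Kd (1/m) from an irradiance profile by fitting
# ln(Ed) = ln(Ed0) - Kd * z.
import numpy as np

def estimate_kd(depth_m, ed):
    valid = ed > 0
    slope, _ = np.polyfit(depth_m[valid], np.log(ed[valid]), 1)
    return -slope

# Example with synthetic data generated with Kd = 0.15 1/m:
z = np.linspace(0, 50, 100)
ed = 1.0 * np.exp(-0.15 * z)
print(estimate_kd(z, ed))  # ~0.15
```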
Marine Heat Waves (MHW) can impact marine organisms directly, by exceeding their optimal thermal ranges, and indirectly, via changes in ocean biogeochemistry. As a result, marine ecosystems may modify their functions, and their resilience is not assured. Space-borne observations provide a unique tool to detect such extreme events and observe changes in sea surface biology. Developing synergies with robotic autonomous observations from Biogeochemical (BGC)-Argo floats will allow monitoring of marine ecosystems and ocean biology before, during and after a MHW, from the surface down to the ocean interior.
Under ESA’s Ocean Health initiative, the “deteCtion and threAts of maRinE Heat waves – CAREHeat” project will evaluate the changes in, and resilience of, marine biodiversity and the biogeochemistry of pelagic ecosystems around the globe after MHW events, from lower to higher trophic levels. To achieve this, CAREHeat will exploit BGC-Argo optical observations, Ocean Colour satellite measurements and biogeochemical models. In particular, CAREHeat will develop synergies between these multiple observational platforms to address the following scientific questions:
1. What is the effect of MHW on phytoplankton chlorophyll concentration at the ocean surface and along the water column?
2. Are phytoplankton chlorophyll changes related to modifications in phytoplankton biomass or physiology?
3. How is the phytoplankton community structure affected by MHW?
4. How do community structure changes impact ocean biogeochemistry and propagate over the water column, affecting nutrient profiles and oxygen levels?
5. How do changes at the lowest trophic levels impact carbon fluxes in support of higher trophic levels (micro-nekton, apex predators)?
6. Is there a biogeochemical signature in pH and air-sea CO2 fluxes during MHW events, and at what degree of MHW severity does such a signature become significant?
We will present preliminary results on detected MHW events at the global scale, and the observational strategies and datasets we will adopt to assess the impacts on marine ecosystems.
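As background, a commonly used MHW criterion flags periods when SST exceeds the seasonally varying 90th-percentile climatology for at least five consecutive days. The sketch below implements that generic criterion; the CAREHeat detection scheme may differ in detail.

```python
# Sketch of a generic MHW detector on daily SST series.
import numpy as np

def detect_mhw(sst, clim_p90, min_days=5):
    """sst, clim_p90: daily arrays of equal length; returns (start, end) events."""
    hot = sst > clim_p90
    events, start = [], None
    for i, flag in enumerate(np.append(hot, False)):  # sentinel closes final run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_days:
                events.append((start, i - 1))
            start = None
    return events
```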
The profiles of light and its spectral distribution are linked to most of the physical, chemical, and biological processes prevailing in the water column. Here, we present vertically resolved light models of downwelling irradiance (Ed) and photosynthetically available radiation (PAR) for the global ocean, obtained by merging light profiles with satellite ocean colour radiometry products and the physical properties (temperature and salinity) prevailing at the location of the light profiles. The present work is inspired by the SOCA (Satellite Ocean-Color merged with Argo data to infer bio-optical properties to depth) methodology originally proposed by Sauzède et al. (2016). SOCA is based on an artificial neural network methodology, more specifically a Multi-Layer Perceptron (MLP). The present light models rely on a SOCA-type MLP and are trained with light profiles (Ed/PAR) acquired by Biogeochemical (BGC)-Argo floats as outputs. The inputs of the MLP consist of surface products derived from satellite ocean colour radiometry extracted from GlobColour (Rrs, PAR and Kd490), temperature and salinity profiles from BGC-Argo, and temporal components (day of the year and local time in cyclic transformation). The output of each model corresponds to Ed profiles at the three wavelengths of BGC-Argo measurements (380, 412, and 490 nm) or to PAR profiles.
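A schematic of this type of model, under stated assumptions (tabular inputs, illustrative hyperparameters and file names), might look as follows; it is not the operational SOCA implementation.

```python
# Sketch of a SOCA-type MLP: satellite surface products + T/S profile
# features + cyclic time encodings -> Ed/PAR at depth.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def cyclic(day_of_year):
    """Cyclic encoding of day of year, as used for the temporal inputs."""
    angle = 2 * np.pi * day_of_year / 365.0
    return np.sin(angle), np.cos(angle)

sin_d, cos_d = cyclic(150)  # example feature pair for day 150

# X: [Rrs bands, surface PAR, Kd490, T/S profile features, sin/cos(day), time]
# y: Ed at 380/412/490 nm (or PAR) at the model's output depths
X, y = np.load("inputs.npy"), np.load("targets.npy")  # hypothetical files
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000))
model.fit(X, y)
```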
The quality of the light profile retrieval by these models is assessed using two different and independent datasets: one based on independent BGC-Argo profiles not used for the training, the other originating from SeaBASS. The light models show satisfactory predictions when compared with real measurements. The accuracy metrics estimated for these two validation datasets are consistent and demonstrate the robustness of the light models for global ocean applications. Further details and prospects of this study will be discussed during the presentation.
Keywords: Global Ocean light models, ED380, ED412, ED490, PAR
The NASA-led EXPORTS (EXport Processes in the Ocean from RemoTe Sensing) project seeks to quantify the fate and export of carbon from the euphotic zone via the biological carbon pump. The strength of the biological carbon pump can be assessed in part by the rate of net community production (NCP), the sum of all photosynthetic productivity minus respiratory losses of carbon. In a net autotrophic system, this excess fixed carbon is available for export to the deep ocean, where it can be sequestered from the atmosphere on decadal to millennial time scales. Two field campaigns were conducted to capture the end members of a range of ecosystem/carbon cycling states: the productive North Atlantic spring bloom in May 2021, and the iron-limited subarctic North Pacific in August 2018. Ship-based operations were bolstered by both satellite observations and numerous autonomous assets, including two BGC-Argo floats at each site, supported by NSF, NOAA, and NASA. The floats carry biogeochemical sensor suites (e.g. CTD, O2, NO3, pH, bio-optics) to enhance the spatiotemporal sampling range and produce budgets of oxygen, nitrate, and particulate organic carbon. Here we present a comparison of NCP measured in situ by BGC-Argo floats to satellite- and hybrid float-satellite-based estimates of NCP during the EXPORTS field campaigns. Float-derived NCP employs a mass balance approach using high-resolution oxygen and nitrate data collected by autonomous floats to determine NCP in the euphotic zone. Satellite-based estimates of NCP are made using algorithms trained on the oxygen-argon ratio anomaly that utilize observed sea surface temperature and modeled net primary productivity. Net primary productivity rates were determined via the Vertically Generalized Production Model (VGPM), the Carbon-based Production Model (CbPM), and the Carbon, Absorption, and Fluorescence Euphotic-resolving Model (CAFE) algorithms. These model algorithms are implemented with both satellite-only and integrated float-satellite inputs to explore the potential of a synergistic approach between BGC-Argo and remote sensing capabilities. We discuss how our results compare across estimation methods, how they link to the ship-based measurements made during the field campaigns, and how they reflect the distinct nature of the study sites' cycling regimes.
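As a simplified illustration of the nitrate mass-balance idea (ignoring mixing, entrainment and other physical terms that the full budget accounts for), NCP can be approximated from the drawdown of depth-integrated nitrate, converted to carbon with a Redfield C:N ratio:

```python
# Sketch: NCP from nitrate drawdown between two float profiles.
import numpy as np

C_TO_N = 106.0 / 16.0  # Redfield C:N ratio

def ncp_from_nitrate(no3_t0, no3_t1, depth_m, dt_days):
    """no3_*: NO3 profiles (mmol/m^3) over the euphotic zone;
    returns NCP in mmol C/m^2/day (positive = net autotrophy)."""
    inv0 = np.trapz(no3_t0, depth_m)   # depth-integrated inventory at t0
    inv1 = np.trapz(no3_t1, depth_m)
    return -(inv1 - inv0) / dt_days * C_TO_N
```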
Oceanic mesoscale eddies cover more than 40% of the ocean surface, playing an important role in marine mass transport, energy exchange, and air-sea coupling. On account of these vital effects, mesoscale eddies have attracted considerable attention from oceanographers. The biological role of mesoscale eddies at the sea surface has been revealed gradually, and the emergence of Biogeochemical Argo (BGC-Argo) floats has made it possible to explore eddies' biological influences at the subsurface, contributing to a growing number of studies on eddy biology in recent years. A crucial finding, established from a global point of view as well as in a few regional open oceans such as the Pacific and the Southern Ocean, is that anticyclonic eddies (AEs) and cyclonic eddies (CEs) can induce contrasting sea surface chlorophyll (CHL) anomalies. As eddies' biological effects in the North Atlantic are not sufficiently understood, our study focuses on eddies' influence on phytoplankton and zooplankton by comprehensively analyzing observations from satellites, BGC-Argo, and cruises. The results derived from multi-satellite merged Ocean Colour CCI products show that eddy-induced sea surface CHL anomalies vary with latitude in the North Atlantic, related to eddy properties. At the surface, eddies' effects on CHL in the subtropical and mid-latitude regions are primarily driven by Ekman pumping and eddy pumping, respectively. Results derived from BGC-Argo illustrate that both Ekman pumping and eddy pumping are evident in the mid-latitude subsurface water, while in the subtropical region eddy pumping dominates in the subsurface water. Statistics from BGC-Argo also illustrate that AEs/CEs tend to decrease/increase the subsurface chlorophyll maximum (SCM) and to deepen/shoal it, in both subtropical and mid-latitude regions. Continuous Plankton Recorder (CPR) data reveal eddies' influence on zooplankton, with the daytime abundance of copepods higher in CEs than in AEs, consistent with eddies' surface CHL concentrations. Moreover, the diel vertical migration (DVM) of copepods is found to be more evident in AEs than in CEs. Particle backscattering observations from BGC-Argo suggest that the pronounced DVM in AEs may not be related to subsurface CHL concentrations but is instead an active choice of the zooplankton. By comprehensively analyzing satellite observations, BGC-Argo profiles, and CPR samples, this study reveals eddies' effects on plankton in the North Atlantic, deepening our understanding of eddies' biological role.
Since the year 2000, 2 million temperature and salinity profiles have been collected by the Argo program with unprecedented spatial and temporal coverage. Since 2010, thanks to a new generation of profiling floats (e.g. BGC-Argo floats equipped with chlorophyll-a, downwelling irradiance, backscattering, nitrate and oxygen optode sensors) and in particular to Iridium telemetry, the acquisition frequency has dramatically increased, providing a new understanding of the dynamics of float displacement in the water column.
The possibility of extracting information related to sea state from the analysis of high-resolution pressure measurements linked to float motion is investigated here. Particular focus is put on the speed anomaly close to the surface, compared to the nominal speed expected for a calm sea state. A comparison between the speed anomalies of floats in the Mediterranean Sea and concurrent sea state measurements from a weather buoy in the same area suggests that float behaviour could be an indicator of sea state; a sketch of the speed-anomaly computation is given below.
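A minimal sketch of the speed-anomaly computation, assuming a high-resolution pressure record from the surfacing phase and treating pressure in dbar as approximately depth in metres; the nominal speed value is an illustrative placeholder, not a published constant.

```python
# Sketch: ascent-speed anomaly of a profiling float near the surface.
import numpy as np

def surface_speed_anomaly(time_s, pressure_dbar, nominal_speed=0.09):
    """Return the ascent-speed anomaly (m/s) relative to a calm-sea
    nominal speed. The nominal value here is a placeholder."""
    speed = -np.gradient(pressure_dbar, time_s)   # positive when ascending
    return speed - nominal_speed
```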
This relationship, applied to the Argo database, offers the unique opportunity to obtain in-situ estimates of the sea state with the spatial and temporal coverage of the Argo database, with possible applications for Earth observation satellites such as SENTINEL-3. The Significant Wave Height (SWH) and wind speed from SENTINEL-3 or from the different altimetry satellites (TOPEX, JASON-1, ERS-2, ENVISAT and GFO – GEOSAT) will be compared to sea state proxies extracted from the Argo floats.
The first decade of Argo will also be examined in order to determine whether extreme weather events can be detected despite the low vertical resolution of the data acquired over this period.
Finally, results will be presented from floats equipped with an inertial unit which measures the tilt and rotation of the float. We will investigate how this type of sensor, easily implemented on Argo floats, can improve sea state estimation compared to the previous method.
Evaluating the spatio-temporal distribution of phytoplankton is critical for assessing the impact of climate change on marine biogeochemistry and the food web (Fennel et al., 2019), along with ocean-atmosphere exchanges and the carbon cycle (Falkowski, 2012). Phytoplankton abundance and composition (as indicated by chlorophyll-a concentration), which are essential for estimating primary production, can be detected and quantified using optical sensors (Bracher et al., 2017). These can be operated on ship-towed undulators, ship-based inline systems (e.g., Bracher et al., 2020) or autonomous platforms such as satellites (e.g., Mouw et al., 2017) and profiling floats (e.g., BGC-Argo, see Sauzède et al., 2015). Combining these disparate data sources remains a major difficulty due to varying temporal and spatial resolution and insufficiently defined uncertainty. Data fusion, feature extraction, and other machine learning approaches have been successfully used to overcome this issue in other applications such as urban area mapping and change detection (Palubinskas and Reinartz, 2011; Palubinskas, 2012). Accordingly, the objective of this study is to develop a complete data processing chain for combining various Phytoplankton Functional Type (PFT) datasets and associated uncertainties at various spatial and temporal scales. The research focuses on data acquired during the RV Polarstern PS113 expedition (10 May to 9 June 2018) along the Atlantic transect from the Patagonian shelf to the English Channel. These datasets consist of: 1) PFT retrieved from a ship-towed vertically undulating radiometer (Bracher et al., 2020), 2) PFT retrieved from an AC-S flow-through sensor (following Liu et al., 2019) and 3) full resolution Sentinel-3 OLCI PFT retrieved with the Xi et al. (2021) algorithm. The PFT retrieval technique applied to these datasets is spectral feature extraction, decomposing the spectral data into empirical orthogonal functions (EOFs) to estimate PFT chlorophyll-a concentration (Bracher et al., 2015); a conceptual sketch is given below. The final product has a spatial resolution of around 300 meters and a temporal resolution of about 3 days. This study emphasizes the potential of developing synergy between space-based ocean observations and in situ biogeochemical sensors. The generic processing chain developed here can be applied to similar sensor data from expeditions, profiling floats or gliders. Future research should focus on improving the temporal resolution, reducing the uncertainty and introducing depth information as the fourth dimension of this product.
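Conceptually, the EOF-based retrieval can be sketched as a decomposition of the spectra into leading modes followed by a regression of PFT chlorophyll-a on the mode scores. The sketch below uses PCA as a stand-in for the EOF analysis and does not reproduce the details of Bracher et al. (2015); file names and the number of modes are assumptions.

```python
# Sketch: EOF-style spectral decomposition + regression to PFT chlorophyll-a.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

spectra = np.load("spectra.npy")      # (n_samples, n_wavelengths), hypothetical
chl_pft = np.load("pft_chla.npy")     # matching PFT chlorophyll-a, hypothetical

pca = PCA(n_components=6)             # leading modes of the spectral data
scores = pca.fit_transform(np.log10(spectra))
model = LinearRegression().fit(scores, chl_pft)
```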
Satellite Ocean Colour Radiometry (OCR) is an unprecedented tool for understanding marine ecosystems and monitoring their response to climate change at the global scale. Nevertheless, this tool requires high quality in-situ data for calibration and validation and is also limited to the observation of near-surface waters. In this context, the BioGeoChemical (BGC)-Argo network has demonstrated its power, firstly, to produce radiometric data useful for Cal/Val activities and, secondly, to produce additional data that can complement satellite observations or extend them at depth to obtain a 3D view of the ocean. The recent integration of the first hyperspectral irradiance sensor on BGC-Argo profilers will create a new field of synergy with space-based measurements. For Cal/Val activities, hyperspectral sensors will allow the production of validation data in line with current (ASI's PRISMA) and future (e.g., NASA's PACE) satellite products. Hyperspectral data can also be used, in conjunction with multispectral satellite data, to improve the identification of phytoplankton groups at the surface and at depth, with associated societal impacts in the context of climate change, resource management and biohazard surveillance (i.e., harmful algal blooms).
We will present here the technical aspects of the integration of the Ramses sensor (TriOS GmbH) on BGC-Argo profilers, with a particular focus on the energy budget (impacting the lifetime of the profiler) and on the acquisition frequency and resulting data volume (impacting both the data quality and the operational cost). Initial results will be presented for floats deployed in the Mediterranean and Baltic seas, and a first method for quality control of these data will be shown. The quality of the data obtained will be compared to that of other radiometric sensors mounted on the same floats, in particular the OCR500 sensors (Sea-Bird Scientific) used today across the global array of BGC-Argo profilers. A first inter-comparison of the results obtained in the two deployment areas will be presented and discussed. Finally, we will present the perspectives such a sensor opens for the BGC-Argo network and the synergies that would result for space applications. In particular, we will present the possibility of equipping floats with two Ramses sensors to measure downwelling irradiance (Ed) and upwelling radiance (Lu) in order to obtain hyperspectral reflectance.
Since climate change is directly impacting the Arctic, landscapes underlain by permafrost are warming and experiencing increased thaw and degradation. The increased warming of organic-rich frozen ground is projected to become a highly relevant driver of greenhouse gas emissions into the atmosphere. Retrogressive Thaw Slumps (RTS) are dynamic thermokarst features which result from slope failure after ice-rich permafrost thaws. Active RTS are characterized by steep headwalls up to tens of meters high and dynamic slump floors that can cover several hectares, mobilizing thawed sediments, carbon, and nutrients into downstream environments. While they are small-scale features, they can reach considerable annual growth rates, impacting their immediate surroundings abruptly and irreversibly. Thousands of RTS have been inventoried in northwestern Canada, associated with regions where buried glacial ice is melting in thawing permafrost. These inventories showed that thaw slumping substantially modifies terrain morphology and alters the discharge into aquatic systems, also resulting in infrastructure instabilities and ecosystem changes. Most RTS occur along coast- and shorelines, leading to changes in the optical and biogeochemical properties of aquatic systems which can have severe consequences for the aquatic food web. Furthermore, recent studies revealed increased temporal thaw dynamics of RTS in northern high latitudes and projected that abrupt thermokarst disturbances contribute significant amounts of greenhouse gas emissions.
As observed in most Arctic regions, RTS have been developing in the Russian High Arctic. However, research on RTS there has focused on northern West Siberia, where industrial development required mapping of potential landscape hazards resulting from permafrost thaw. In most other regions of the Russian High Arctic, RTS occurrence and distribution are so far poorly known. The objective of this study is to better understand growth patterns and development rates of RTS at high temporal resolution in Arctic Russia using remote sensing data for the last decade (~2013 to 2020).
We investigated five different sites comprising hundreds of square kilometers in the continuous permafrost zone of the Russian Arctic. Our sites are located on Novaya Zemlya, Kolguev Island, Bolshoy Lyakhovsky Island and the Taymyr Peninsula. To investigate changes in RTS numbers and extent, a GIS-based inventory of manually mapped RTS was created. The inventory is based on multispectral imagery from very high-resolution satellite sensors, including PlanetScope, RapidEye, Pleiades and SPOT. Cloud-free images were obtained between 2013 and 2020, for each year or every few years depending on image availability. Additional datasets such as the ArcticDEM, the ESRI Satellite basemap, and Tasseled Cap Landsat Trends were used to support the mapping process. From the extracted polygons, changes in RTS number and surface area were calculated (a sketch of this step is given below). In addition, for coastal slumps, thermal denudation and thermal abrasion rates were computed using the DSAS tool in ArcMap.
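The polygon-based change computation can be sketched as follows, assuming two inventories that share a slump identifier; file and column names are hypothetical, not those of the actual inventory.

```python
# Sketch: per-slump area change and annual growth rate from two RTS
# inventories (assumes a projected CRS so .area is in square metres).
import geopandas as gpd

rts_2013 = gpd.read_file("rts_2013.gpkg").set_index("slump_id")
rts_2020 = gpd.read_file("rts_2020.gpkg").set_index("slump_id")

area_2013 = rts_2013.geometry.area
area_2020 = rts_2020.geometry.area
growth = (area_2020 - area_2013).dropna()  # slumps present in both years
rate_m2_per_yr = growth / (2020 - 2013)
```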
First results provide evidence that the inventory allows us to quantify the planimetric development of RTS at the studied sites over time, and further show that thaw slumps have in most cases become more active, increasing in size and number, in recent years. At Kolguev we retrieved thaw slumping rates along two sections of the west coast. The more northerly located slumps reveal average thermal abrasion rates of 1.3 m/yr and average thermal denudation rates of 3.9 m/yr between 2013 and 2020. For the same period we found that the more southerly located slumps show average thermal abrasion rates of 5.2 m/yr and average thermal denudation rates of 2.9 m/yr. We will report rates for all sites and compare them with respect to various environmental settings. Our approach gives a first insight into the variability and magnitude of slumping observed across the diverse settings in the Russian High Arctic permafrost region.
The data will contribute substantially to our understanding of regional permafrost thaw in the Russian High Arctic and will be further useful for identifying local thaw dynamics and possibly permafrost characteristics. Our data also allow us to examine the volumetric loss of sediments, ice, and carbon associated with abrupt permafrost thaw by RTS, which is crucial for the assessment of greenhouse gas emissions. In addition, the dataset provides valuable ground truth information for training and validation of Deep Learning approaches for mapping RTS.
Permafrost is a key indicator of global climate change and hence considered an Essential Climate Variable (ECV). Current studies show a warming trend of permafrost globally, which induces widespread permafrost thaw, leading to near-surface permafrost loss at local to regional scales and impacting ecosystems, hydrological systems, greenhouse gas emissions, and infrastructure stability. Especially important is the understanding of abrupt, rapid permafrost thaw dynamics, such as thermokarst formation, lake drainage, and retrogressive thaw slumps, which unfold within merely a couple of days to years and impact the landscape irreversibly; these are of high relevance as their projected greenhouse gas emissions, including methane and carbon dioxide, are substantial.
Permafrost is defined by the thermal state of the subsurface but is greatly influenced by changes in the surface state, which is tightly connected to the atmosphere, biosphere, geosphere, and cryosphere through topography, water, snow and vegetation. Hence, examining changes in the surface state will help to identify regions that are particularly vulnerable to permafrost thaw. Our primary aim is to investigate changes in the surface state by assessing positive and negative feedbacks on the surface state that potentially influence permafrost, and thus to derive an index of permafrost vulnerability to thaw.
Earth observation (EO) based datasets provide a great opportunity to analyse relevant variables impacting the surface state and to obtain trends and changes from long-term consistent datasets. Relevant variables for assessment are land surface temperature, land cover, snow cover, fire, albedo, soil moisture, and information on the freeze/thaw state, all of which are ECVs as well and are available globally following ESA CCI and GCOS product developments. Furthermore, two modelled permafrost_cci products are available for comparison: ground temperature and active layer thickness. However, a combined assessment of these products to better understand, quantify, and project permafrost changes and trajectories is still missing.
Therefore, the objective of this ongoing project is to develop a permafrost vulnerability framework which focuses on the surface state, including the above listed ECVs. By conducting spatiotemporal variability analyses of the individual ECVs, correlation assessments among them, and decadal trend analysis, a better understanding of their positive and/or negative feedbacks will be established. Combining the feedback results of the ECVs in a vulnerability assessment will help identify prevailing trends in the surface state and evaluate consequences for the thermal state of the permafrost.
Preliminary results show that the individual ECVs exhibit differing trends in the spatiotemporal variability analysis, indicating positive and negative feedbacks. These results will be incorporated into a circumpolar Arctic permafrost vulnerability assessment, integrating the coupled feedbacks and determining their combined effect on the thermal state of the permafrost.
The resulting new permafrost vulnerability index will give a more comprehensive and spatially detailed understanding of circumpolar permafrost vulnerabilities and their magnitude. It will indicate areas that are particularly vulnerable to thaw and hence highlight areas of particular importance for close monitoring. The circumpolar Arctic permafrost vulnerability index dataset will be a solid foundation for a wide range of permafrost-thaw studies, addressing for example hydrological change, infrastructure stability, ecosystem change and greenhouse gas emissions, and will also be useful for qualitatively assessing the permafrost-climate feedback.
With the Earth’s climate rapidly warming, the Arctic represents one of the regions most vulnerable to environmental change. Permafrost, as a key element of the Arctic system, stores vast amounts of organic carbon that can be microbially decomposed into the greenhouse gases CO2 and CH4 upon thaw. Extensive thawing of these permafrost soils therefore has potentially substantial consequences for greenhouse gas concentrations in the atmosphere. In addition, thaw of ice-rich permafrost lastingly alters the surface topography and thus the hydrology. Fires represent an important disturbance in boreal permafrost regions, and increasingly also in tundra regions, as they combust the vegetation and upper organic soil layers that usually provide protective insulation to the permafrost below. Field studies and local remote sensing studies suggest that fire disturbances may trigger rapid permafrost thaw, with consequences often already observable in the first years post-disturbance. In polygonal ice-wedge landscapes, this becomes most evident through melting ice wedges and degrading troughs. The further these ice wedges degrade, the more troughs will likely connect and build an extensive hydrological network with changing patterns and degrees of connectivity that influence hydrology and runoff throughout large regions. While subsiding troughs over melting ice wedges may host new ponds, increasing connectivity may also subsequently lead to more drainage of ponds, which in turn can limit further thaw and help stabilize the landscape. Whereas fire disturbances may accelerate the initiation of this process, the general warming of permafrost observed across the Arctic will eventually result in widespread degradation of polygonal landscapes. To quantify the changes in such dynamic landscapes over large regions, remote sensing data offer a valuable resource. However, considering the vast and ever-growing volumes of Earth observation data available, highly automated methods are needed to extract information on the geomorphic state of ice-wedge trough networks and their changes over time.
In this study, we investigate these changing landscapes and their environmental implications in fire scars in Northern and Western Alaska. We developed a computer vision algorithm to automatically extract ice-wedge polygonal networks and the microtopography of the degrading troughs from high-resolution, airborne laser-scanning-based digital terrain models (1 m spatial resolution; full-waveform Riegl Q680i LiDAR sensor). To derive information on the availability of surface water, we used optical and near-infrared aerial imagery at spatial resolutions of up to 5 cm captured by the Modular Aerial Camera System (MACS) developed by DLR. We represent the networks as graphs (a concept from computer science used to describe complex networks) and apply methods from graph theory to describe and quantify hydrological network characteristics of the changing landscape, as sketched below.
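A minimal sketch of the graph representation and the kind of connectivity metrics involved, with illustrative inputs (the actual networks are extracted from the terrain models, and the metrics used in the study may differ):

```python
# Sketch: trough intersections as nodes, trough segments as weighted edges;
# graph-theoretic metrics summarize hydrological connectivity.
import networkx as nx

# Edges extracted from the trough skeleton: (node_a, node_b, length_m)
segments = [(0, 1, 12.5), (1, 2, 8.0), (1, 3, 15.2), (3, 4, 6.7)]

G = nx.Graph()
G.add_weighted_edges_from(segments, weight="length")

n_components = nx.number_connected_components(G)   # disconnected sub-networks
density = nx.density(G)                            # degree of connectivity
betweenness = nx.betweenness_centrality(G, weight="length")  # key junctions
```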
Due to a lack of historical very-high-resolution data, we cannot investigate a dense time series of a single representative study area to trace the evolution of the microtopographic and hydrologic network, but instead leverage the possibilities of a space-for-time substitution. We thus investigate terrain models and multispectral data from 2019 and 2021 for ten study areas located in ten fire scars of different ages (up to 120 years between the date of disturbance and the date of data acquisition). With this approach, we can infer past and future states of degradation from the currently prevailing spatial patterns and show how this type of disturbed landscape evolves over time. Representing such polygonal landscapes as graphs and reducing large amounts of data to a few quantifiable metrics supports the integration of results into, e.g., numerical models and thus greatly facilitates the understanding of the underlying complex processes of GHG emissions from permafrost thaw. We highlight these extensive possibilities but also illustrate the limitations encountered in the study, which stem from the reduced availability and accessibility of pan-Arctic very-high-resolution Earth observation datasets.
The Essential Climate Variable (ECV) “Permafrost” is characterized by the variables “ground (subsurface) temperature” and “thaw depth”, i.e. the maximum depth of the seasonal thaw layer. The Permafrost_CCI project by the European Space Agency (ESA) has compiled Earth Observation (EO) based products for the permafrost ECV spanning the last three decades. As ground temperature and thaw depth cannot be directly observed by space-borne sensors, we have ingested different satellite and reanalysis data sets into a ground thermal model, which makes it possible to quantify permafrost state changes in Arctic and high-mountain environments.
The Permafrost_CCI algorithm uses remotely sensed data sets of Land Surface Temperature (MODIS LST) and landcover (ESA Landcover_CCI) to drive the transient permafrost model CryoGrid_CCI at 1 km spatial resolution. To gap-fill LST time series and account for the influence of the seasonal snow cover, ERA-5 reanalysis data are employed. Furthermore, ERA-5 reanalysis is used to force the model for the period before 2003, when MODIS LST is not fully available; we apply a pixel-by-pixel bias correction using the 2003-2019 overlap period to achieve coherent time series (a sketch is given below). The correct representation of ground properties is critical for the performance of the transient algorithm, in particular for reproducing the depth of the thaw layer. Therefore, the Permafrost_CCI project has synthesized typical subsurface stratigraphies for the different CCI landcover classes, based on a large number of analyzed soil pedons from different permafrost areas. Finally, the Permafrost_CCI algorithm performs not just a single run per pixel, but simulates subpixel variability with an ensemble accounting for the typical variations in snow depth and ground stratigraphies. From the model ensemble, it is possible to infer the fraction covered by permafrost in every 1 km pixel and thus reproduce the well-known zonations of sporadic, discontinuous and continuous permafrost. We report on the performance of the year 3 product, validated by a variety of field observations. Finally, we discuss the possibility of improving the performance by ingesting further satellite products, such as remotely sensed snow covered area, into the processing chain.
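The pixel-by-pixel bias correction can be illustrated by a simple mean-and-variance matching of the ERA-5 series to MODIS LST over the overlap period; the operational Permafrost_CCI scheme may differ in detail.

```python
# Sketch: adjust an ERA-5 LST series so its mean and variance match
# MODIS LST over the common overlap period, for one pixel.
import numpy as np

def bias_correct(era5, modis, overlap):
    """era5, modis: 1-D LST series for one pixel; overlap: boolean mask of
    the common (e.g. 2003-2019) period."""
    mu_e, sd_e = np.nanmean(era5[overlap]), np.nanstd(era5[overlap])
    mu_m, sd_m = np.nanmean(modis[overlap]), np.nanstd(modis[overlap])
    return (era5 - mu_e) / sd_e * sd_m + mu_m
```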
C-band SAR observations have proven highly valuable for characterizing spatial patterns of wetlands and soil organic carbon content across the Arctic. Of specific interest are acquisitions under frozen conditions, as they reflect surface roughness and volume scattering only, which can be combined with multispectral data such as Sentinel-2 for enhanced landcover descriptions (classification of vegetation communities and wetlands, vegetation height retrieval, etc.).
A range of disturbance factors which cause uncertainties in the retrieval have, however, been identified. These include meteorological conditions during the acquisition, which lead to changes in the overlying snowpack in winter, and other disturbances before the acquisition (under unfrozen conditions) related to natural hazards, including fires and landslides.
Several sites across the Arctic, covering continuous to discontinuous permafrost, have been selected to quantify the impact of disturbances on the retrieval of landcover and soil characteristics. Pre-acquisition disturbances are considered starting from the 1980s. The focus is on Sentinel-1, following a space-for-time concept. Results demonstrate that there is considerable influence, which needs to be considered for carbon cycle related landcover characterization across the Arctic.
Earth System Models predicting permafrost degradation currently consider gradual permafrost changes only. However, various rapid permafrost thaw processes are known from field and remote sensing studies. Especially over the past decade, an increase in rapid permafrost degradation has been observed with temporally and spatially high-resolution remote sensing. These processes involve loss of excess ground ice, surface subsidence and erosion, and unlock so-far frozen soil carbon. Under accelerating climate warming, it is therefore important to understand and quantify the underlying dynamics in space and time and to work towards integrating such observations and process models to enhance climate predictions. A major obstacle to integrating rapid permafrost thaw processes into Earth System Models is the spatial confinement and abrupt initiation of thermokarst and thermo-erosion, which are difficult to predict due to the complex and not yet fully understood interactions between different processes and the underlying spatial heterogeneities. Further complexity is caused by the resulting landscape changes, which alter drainage patterns and vegetation growth and induce accelerating and decelerating feedback mechanisms. Including all process interactions at the required resolution and scale in a full 3D transient model is therefore infeasible, and model simplifications are required. Combining remote sensing and modelling can help in two ways: (i) to understand and define relevant processes and parameters, and (ii) to fine-tune the model setup and parametrization for reproducing thaw-induced landforms.
Here we present a preliminary study and conceptualization of combining remote sensing and permafrost modelling using the permafrost landscape model CryoGrid. The goal is to better understand and predict the impact of rapid permafrost degradation processes, applied to the New Siberian Islands in the Russian High Arctic. Until now, only a few studies have covered the New Siberian Islands due to their remote location, the consequent lack of available ground truth data, and the reduced amount of remote sensing data caused by frequent cloud coverage. However, similar to other High Arctic islands in Canada, where increases in mean air temperature were amplified by extensive sea ice loss and resulted in widespread decay of near-surface permafrost, the New Siberian Islands are expected to have been affected by substantial warming and permafrost thaw over the last decade. The recent increase in available remote sensing data has therefore made them an area of particular interest for studying permafrost degradation. Predominant permafrost degradation landforms found on the New Siberian Islands include retrogressive thaw slumps along coastal sections and melt ponds on ice-rich Yedoma uplands drained by a network of thermo-erosion gullies and embedded in degraded, Baydzharakh-patterned slopes. We aim to use these rapid thaw landforms to quantify rates of thaw at two different scales: (i) at the process scale of thaw slumps, we plan to better understand and predict the volumetric loss of ice content, and (ii) at the catchment scale, we will study the interaction between altered drainage patterns and permafrost thaw.
As a first step, we use multi-source remote sensing data (e.g. optical and SAR; Hexagon, Sentinel-1 & 2, Landsat and VHR imagery) and deep learning for a detailed landscape characterization and to map thaw slump evolution and melt pond expansion, as well as their interconnection with the extensive network of thermo-erosion gullies. Furthermore, we quantify topographical and volumetric changes extracted from multitemporal ArcticDEM data. In a next step, we analyze spatial patterns, temporal changes and correlations with other environmental drivers (e.g. weather extremes, climatic changes, ecosystem and hydrological changes), with the goal of evaluating relevant additional processes (e.g. mechanical erosion, drainage) and site- or landform-specific parametrizations to be included in the permafrost model CryoGrid. Based on this, we evaluate different options to assimilate observed key landscape parameters such as slope characteristics, subsidence, topographic roughness and drainage patterns into the modelling process to derive (i) improved parametrizations and (ii) ground ice contents through inverse modelling. As a last step, different options will be assessed to predict future permafrost degradation, including stochastic approaches to tackle the spatially and temporally abrupt initiation of permafrost degradation landforms. Finally, our conceptualization should be transferable to other permafrost regions in order to improve process understanding and prediction of rapid permafrost degradation at larger scales.
Drained lake basins (DLBs) are often the most common landforms in lowland permafrost regions in the Arctic, covering 50% to 75% of the landscape. However, detailed assessments of DLBs, including their distribution, abundance and spatial variability across scales, are limited. A recently published data set (Bergstedt et al., 2021) provides a Landsat-8 based statistical assessment of DLB occurrence, focusing on the Alaska North Slope. In this study we focus on the added benefit of higher resolution satellite imagery, specifically the imagery available through Sentinel-1 and Sentinel-2. Higher resolution imagery allows for an in-depth assessment of possible uncertainties in the underlying Landsat-8 based DLB data product. The combination of Synthetic Aperture Radar (SAR, Sentinel-1) and multispectral imagery (Sentinel-2) allows us to take into account a range of surface cover characteristics. The Landsat-8 based classification provides an ‘ambiguous’ class, describing areas that could not confidently be classified as DLB or non-DLB. We focus on selected areas in the Arctic covering different cases of possible uncertainty. Possible uncertainties may be tied to mixed pixels at the edges of DLBs, to other landforms (such as seasonally flooded connections between existing lakes) being misclassified as DLBs, and to gaps in the input data sets. A high-resolution analysis of the spatial distribution of DLBs in lowland permafrost regions is important for quantitative studies on landscape diversity, wildlife habitat, permafrost, hydrology, geotechnical conditions, and high-latitude carbon cycling. Specifically, models and upscaling efforts concerning carbon cycling and gas fluxes require detailed information on landscape features and disturbance processes, some of which can be inferred from DLB mapping efforts. An in-depth analysis of possible uncertainties is therefore of high importance.
Landcover information not only provides insight into above-ground conditions such as vegetation communities, it is also of high value as a proxy for sub-ground conditions. Such information is urgently needed at high spatial resolution and with adequate thematic content for Arctic permafrost regions in order to parameterize models (permafrost models, ESMs) and for studies focusing on climate change impact assessment.
A prototype for an Arctic landcover description was developed in ESA DUE GlobPermafrost and has been derived for various sites across the Arctic for evaluation, with a focus on above-ground conditions (vegetation communities). The retrieval is currently being reassessed in the context of ESA Permafrost_cci by (1) considering updated user requirements, broadening the potential range of applications, and revisiting the needs of ESMs and flux upscaling approaches (also considering initiatives such as AMPAC), (2) better taking into account sub-ground conditions using soil in situ data, and (3) transferring the retrieval to a machine learning approach.
The developed scheme fuses Sentinel-1/2 data acquired since 2015. The results are further combined with a recently developed dataset on Arctic settlements and infrastructure (the SACHI dataset from H2020 Nunataryuk) in order to differentiate natural from artificial barren areas.
The status of the dataset development including first circumpolar assessment results will be presented.
The northern high latitudes are changing rapidly as the climate warms. Methane (CH4) emissions from the high latitudes, especially from Arctic and subarctic areas, involve open questions and considerable uncertainties as typical Arctic conditions change due to warming. Globally, the main natural source of methane is wetlands; while tropical wetlands contribute most of these emissions, high-latitude wetlands are associated with significant uncertainties, especially in future projections. High seasonal temperature variations and snow cover over frozen ground are common features of the high latitudes, and high-latitude wetlands are partly located in permafrost regions. The methane emissions from a specific high-latitude wetland depend on soil properties and conditions. Previous in-situ and ground-based studies have shown that frost and snow cover over frozen ground have both direct and indirect effects on wetlands as a methane source.
We study the dependencies between environmental drivers, for example frost and snow, and column-averaged methane at northern high latitudes. We concentrate on satellite observations, but additionally use in-situ measurements and ground-based total column measurements to support the analysis. Column-averaged methane (XCH4) observations from the Greenhouse Gases Observing Satellite (GOSAT) and the TROPOspheric Monitoring Instrument (TROPOMI) onboard the Copernicus Sentinel-5 Precursor satellite will be used as the main methane data sources. To detect soil freezing, we use the soil freeze/thaw (F/T) product based on observations from the European Space Agency’s (ESA) Soil Moisture and Ocean Salinity (SMOS) satellite. Snow extent and snow properties will be studied, for example, using the IMS Daily Northern Hemisphere Snow and Ice Analysis from the United States National Ice Center (USNIC) and snow clearance day data. Ground-based total column CH4 will be obtained from high-latitude Total Carbon Column Observing Network (TCCON) sites.
As a result, we will assess the connections of seasonal snow and frost to the seasonal cycle of XCH4 at a larger scale over the boreal and Arctic regions. These areas are scarcely and sporadically covered by in-situ measurements, and satellite observations therefore expand the opportunity to study remote areas that can play a significant role as a methane source to the atmosphere.
The Arctic and boreal regions are experiencing a rapid increase in temperature, resulting in a changing cryosphere, increasing human activity, and a potential increase in high-latitude methane emissions. Sentinel-5P TROPOMI observations provide unprecedented coverage of XCH4 in this region compared to previous missions or in situ measurements. We present a systematic comparison of three TROPOMI methane products – the operational product and the scientific SRON and WFMD products – focusing exclusively on high latitudes above 50 degrees North. We evaluate the seasonal coverage over the continuous and discontinuous permafrost regions, reflecting the potential of TROPOMI to inform inversion models on changing emissions in these regions. We also evaluate biases through comparisons with high-latitude TCCON sites. Although the accuracy and precision of the products are good compared with TCCON, a persistent seasonal bias in TROPOMI XCH4 (high values in spring, low values in autumn) is found for all satellite products. We make an effort to distinguish and analyse the albedo effects of snow cover and the changes in the CH4 profile shape caused by high-altitude depletion of methane in the polar vortex. Comparisons with atmospheric profile measurements from AirCore carried out in northern Finland support the analysis and help validate the prior profiles used in the retrievals. We also present a comparison with inverse model results from CarbonTracker Europe CTE-CH4, which show that these seasonal biases may have a significant impact on the fluxes. Moreover, we directly compare regional patterns in XCH4 for all three TROPOMI products. We find that the differences in the regional comparisons can be larger than the differences found against ground-based references, which highlights the importance of having several XCH4 products available for a more reliable interpretation of spatial patterns and anomalous values and for understanding the potential origin of high-latitude biases, especially when validation data are very limited.
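For orientation, the collocation and seasonal-bias computation described above can be sketched in a few lines of Python; the column names, file names, station coordinates, 100 km collocation radius, and daily pairing window below are all illustrative assumptions, not the study's actual protocol.

    # Hedged sketch: monthly-mean XCH4 bias of collocated satellite soundings
    # against a TCCON station; all inputs and thresholds are assumptions.
    import numpy as np
    import pandas as pd

    sat = pd.read_csv("tropomi_xch4.csv", parse_dates=["time"])   # time, lat, lon, xch4_ppb
    tcc = pd.read_csv("tccon_station.csv", parse_dates=["time"])  # time, xch4_ppb
    st_lat, st_lon = 67.37, 26.63        # e.g. Sodankyla TCCON (approximate)

    # great-circle distance of each sounding to the station (km)
    la0, lo0 = np.radians(st_lat), np.radians(st_lon)
    la, lo = np.radians(sat["lat"]), np.radians(sat["lon"])
    d = 6371.0 * np.arccos(np.clip(
        np.sin(la0) * np.sin(la) + np.cos(la0) * np.cos(la) * np.cos(lo - lo0), -1, 1))

    near = sat[d < 100.0].set_index("time")["xch4_ppb"].resample("D").mean()
    ref = tcc.set_index("time")["xch4_ppb"].resample("D").mean()
    bias = (near - ref).dropna()
    print(bias.groupby(bias.index.month).mean())   # month-by-month mean bias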
Characterized as a crucial phenomenon of permafrost, the annual freeze-thaw cycle is visibly intensifying as temperatures increase in the Arctic region. Importantly, thawing permafrost potentially creates positive feedbacks for future climate warming, since frozen soil locks away more than twice as much carbon as currently exists in the atmosphere. Monitoring changes in the thickness of the uppermost layer of permafrost that thaws seasonally – the active layer thickness – is recognized as indispensable for permafrost status assessment. Previous studies have shown that satellite differential interferometric SAR (DInSAR) has a clear advantage over in-situ observations in terms of spatial coverage. However, almost all DInSAR applications over permafrost regions suffer from poor coherence caused by complex, dynamic ground processes. In this study, the performance of three primary interferometric schemes was examined and compared: persistent scatterer (PS), small baseline subset (SBAS), and intermittent SBAS (ISBAS, now known as APSIS). They were applied to a northern coastal region of Alaska, including the town of Utqiagvik (Barrow), using four years of Sentinel-1 acquisitions (2017-2020) during intervals of seasonal thawing (typically May-September). Unlike for the PS and APSIS schemes, the efficiency of the SBAS scheme for permafrost monitoring at temporal scales ranging from seasonal to decadal has been demonstrated in previous studies. Potential causes of unwanted decorrelation (e.g. atmospheric delay, soil moisture content, precipitation) were scrutinized, as was performance across the different tundra landscapes in the study area. Clear seasonal spatial and temporal patterns of ground deformation associated with permafrost thaw were seen in the DInSAR results, consistent across the three schemes and with GPS ground-station results. Although this study is the first application of the APSIS scheme to Arctic permafrost monitoring, the results show that APSIS provided the best performance in both accuracy and spatial coverage. Across the three-year span 2017-2019, the average subsidence velocity was 1 mm/year. However, including the fourth year (2017-2020) increased this to a remarkable 4 mm/year. We associate this latter result with rapid permafrost thaw due to the extraordinarily high temperatures seen across the Arctic in 2020 (tied with 2016 as the warmest year on record).
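For readers unfamiliar with how such velocities are typically derived, the per-pixel rate reduces to a least-squares line through the unwrapped line-of-sight displacement time series; a toy Python sketch with invented values follows.

    # Toy sketch of the velocity estimate: least-squares linear rate through an
    # unwrapped line-of-sight displacement time series (all values invented).
    import numpy as np

    t_years = np.array([0.0, 0.3, 0.6, 1.0, 1.3, 1.6, 2.0, 2.3, 2.6, 3.0])
    disp_mm = np.array([0.0, -0.4, -0.9, -1.1, -1.3, -1.8, -2.1, -2.4, -2.7, -3.1])

    rate, offset = np.polyfit(t_years, disp_mm, 1)
    print(f"mean LOS velocity: {rate:.2f} mm/yr")   # negative sign = subsidence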
The permafrost region contains about twice as much carbon as Earth’s atmosphere. Under ongoing accelerated Arctic warming, permafrost is expected to thaw increasingly, leading to the decomposition of its organic matter and the release of greenhouse gases, in particular methane, which has a global warming potential about 80 times that of CO2 over a 20-year period. This thawing of permafrost and release of greenhouse gases is exacerbated by the increasing frequency and intensity of wildfires at high latitudes. Coastal subsea permafrost regions as well as reservoirs of methane hydrates in the Arctic could also contribute additional methane to the atmosphere. The new EU Arctic policy released in October 2021 recognizes the urgency of improving our knowledge of these processes.
Our understanding of methane emissions from the Arctic has been very limited due to the sparsity of in-situ measurements, the lack of data from passive backscatter SWIR (Short-Wave InfraRed) instruments during the polar night, and general retrieval issues arising from high solar zenith angles, frequent cloudiness, and difficulties in retrieving over dark surfaces such as sea and snow. So far, no clear trends could be established for the Arctic regions based on satellite measurements, and it is urgent to understand whether this is due to instrument limitations and whether methane emissions will increase in the coming years. In particular, our knowledge of emissions during the winter period needs to be improved.
In this context, the MEthane Remote sensing LIdar missioN (MERLIN), proposed and co-funded by DLR and CNES, is of particular interest. The mission is currently in phase D, with launch readiness foreseen for 2027. MERLIN will employ an IPDA (Integrated Path Differential Absorption) nadir-viewing lidar instrument in a near-polar sun-synchronous orbit to actively measure the column-weighted dry-air mixing ratio of methane (XCH4). The mission will enable the detection of methane in all seasons and at all latitudes, during day and night, and it has low sensitivity to thin cirrus clouds and aerosol scattering. We will present plans for investigating the opportunities enabled by MERLIN and by the foreseen airborne campaigns with the demonstrator CHARM-F onboard the HALO aircraft to improve total column methane retrieval over the Arctic and boreal regions, including issues over coastal areas, lakes, and snow, and the potential for measurements under broken cloud conditions. In the context of determining surface fluxes, the role of global and Arctic regional inverse modelling, measurements from ground stations, and SMOS freeze/thaw data for constraining winter-period fluxes will also be discussed.
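The IPDA principle referenced above can be summarized in its textbook form: the differential atmospheric optical depth follows from the ratio of the energy-normalized ground returns at the on- and off-line wavelengths, and XCH4 follows after normalization by the integrated weighting function. The Python sketch below is schematic, with placeholder numbers, and is not MERLIN's operational processor.

    # Schematic IPDA retrieval (textbook form); all numbers are placeholders.
    import numpy as np

    P_on, P_off = 0.15e-9, 1.10e-9   # received ground-return powers (W), assumed
    E_on, E_off = 9.0e-3, 9.1e-3     # transmitted pulse energies (J), assumed
    IWF = 0.52                       # integrated weighting function (1/ppm), assumed

    # differential absorption optical depth from energy-normalized returns
    daod = 0.5 * np.log((P_off * E_on) / (P_on * E_off))
    xch4_ppm = daod / IWF            # column-weighted dry-air mixing ratio
    print(f"XCH4 ~ {xch4_ppm * 1000:.0f} ppb")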
ESA DUE GlobPermafrost (2016-2018) and ESA CCI+ Permafrost (2018-2021) focus on the processing of ready-to-use data products derived from remote sensing data that support permafrost-related research. Within the first funding period, a wide range of GlobPermafrost remote sensing products was processed: Landsat multispectral index trends (Tasseled Cap Brightness, Greenness, Wetness; Normalized Difference Vegetation Index, NDVI), Arctic land cover (e.g. shrub height, vegetation composition), lake ice grounding, InSAR-based land surface deformation, and rock glacier velocities. Additionally, spatially distributed permafrost model output with permafrost probability and ground temperature per pixel was developed. The focus of ESA DUE projects is to ensure that all data products meet user requirements. To make the products visible, we established WebGIS projects within maps@awi (http://maps.awi.de), a highly scalable data visualisation unit within AWI’s data workflow framework O2A (from Observation to Archive). GIS services were created and designed using ArcGIS for Desktop (latest version) and published as Web Map Services (WMS), an internationally standardized format of the Open Geospatial Consortium (OGC), using ArcGIS for Server. The project-specific data WMS as well as a resolution-specific background map WMS are embedded into a GIS viewer application based on Leaflet, an open-source JavaScript library. We thereby developed project-specific visualisations of raster and vector data products adapted to the products’ specific spatial scales and resolutions. This resulted in an ‘Arctic’ WebGIS visualising circum-arctic products, as well as small-scale regional WebGIS projects such as ‘Alps’, ‘Andes’ or ‘Central Asia’ that visualise higher spatial resolution products such as rock glacier movements. The GIS viewer application was adapted to interlink all GlobPermafrost WebGIS projects and, especially, to enable their direct accessibility via the GlobPermafrost Overview WebGIS.
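As an aside, embedding such a WMS layer in a Leaflet-based viewer can be sketched in a few lines via Python/folium (a Python wrapper around Leaflet); the endpoint URL, layer name and styling below are placeholders, not the actual maps@awi service configuration.

    # Sketch: a Leaflet map with one WMS overlay, built via folium.
    import folium

    m = folium.Map(location=[75.0, 60.0], zoom_start=3)
    folium.raster_layers.WmsTileLayer(
        url="https://example.org/wms",        # hypothetical WMS endpoint
        layers="permafrost_probability",      # hypothetical layer name
        fmt="image/png",
        transparent=True,
        attr="ESA GlobPermafrost / Permafrost_cci",
        name="Permafrost probability",
    ).add_to(m)
    folium.LayerControl().add_to(m)
    m.save("permafrost_webgis.html")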
Besides remote-sensing-derived data products, the locations of the WMO GCOS ground-monitoring networks of the permafrost community, i.e. the Global Terrestrial Network for Permafrost (GTN-P) managed by the International Permafrost Association (IPA), were added as a feature layer. All resulting ESA GlobPermafrost WebGIS projects, presented at several user workshops and conferences, have been continuously adapted in close interaction with the IPA. The GlobPermafrost data products are DOI-registered and archived in the PANGAEA data archive provided by AWI.
Within the framework of the ESA CCI+ Permafrost project, a new WebGIS was added: a time-series WebGIS comprising the CCI+ Permafrost circum-arctic model output for Mean Annual Ground Temperature (MAGT), Permafrost Extent and Probability (PEX), and Active Layer Thickness (ALT), covering a period of more than twenty years. All data products are available at yearly resolution, together with averages of MAGT, PEX and ALT calculated over the time series. The new time-series WebGIS builds on the time-series visualization capabilities already developed in-house within the technical WebGIS infrastructure maps@awi at AWI.
Climate change is severely affecting the northern high latitudes, with many drastic changes expected. Efficient monitoring systems to detect and quantify such changes are essential for assessing their impact on future global climate trajectories. Two primary methods for monitoring atmospheric carbon are in situ observations, e.g. atmospheric towers, and satellite remote sensing. In this study we perform a series of case studies to assess the capabilities of in situ towers and both passive and active space-based missions to detect deviations from currently observed emission patterns, particularly signals associated with expected disturbance processes.
This signal detection study follows a three-step approach: a ground truth is generated by transporting known fluxes in a 4D atmospheric transport model, from which synthetic observations are generated and signal detection limits are computed. These observations as well as the baseline nature runs are produced using the Goddard Earth Observing System model (GEOS). To simulate tower measurements, time series are extracted for single grid cells, Gaussian noise is added to these synthetic observations within the measurement precision defined by the WMO, and a range of transport errors is tested. Two satellite measurement techniques are modelled: an active sensor using an integrated-path differential absorption lidar (based on the future DLR/CNES mission MERLIN), and a passive sensor using a wide-swath nadir-viewing imaging spectrometer (based on TROPOMI on Sentinel-5P). Here, in addition to random errors related to measurement precision, biases due to seasonality, latitudinal gradient, albedo, aerosol load, surface pressure and topography are considered.
We examine two disturbance scenarios: in the first scenario, we simulate enhanced methane release from expected Yedoma thaw; the second scenario models enhanced methane fluxes from Arctic Ocean shelf ebullition. We use a variety of signal detection metrics to differentiate between the baseline case and the disturbance scenario runs, including varying levels of signal amplification. We compare the ability of tower and satellite measurements to detect high-latitude methane changes and find that, despite having errors an order of magnitude higher than ground-based measurements, satellite measurements, especially MERLIN, have similar realistic detection limits while granting superior spatial and often temporal coverage.
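A toy version of this detection logic, reduced to comparing noisy synthetic baseline and disturbance time series with a two-sample test, might look like the following Python sketch; all magnitudes are invented, and the larger satellite sample size stands in for its broader spatial coverage.

    # Toy detection-limit check: is an assumed enhancement separable from noise?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    signal_ppb = 3.0                        # assumed disturbance enhancement

    def detectable(noise_ppb, n_obs, alpha=0.01):
        t = np.linspace(0.0, 2 * np.pi, n_obs)
        base = 1900.0 + 5.0 * np.sin(t)     # invented seasonal XCH4 baseline (ppb)
        y0 = base + rng.normal(0.0, noise_ppb, n_obs)
        y1 = base + signal_ppb + rng.normal(0.0, noise_ppb, n_obs)
        return stats.ttest_ind(y1, y0).pvalue < alpha

    print("tower    :", detectable(noise_ppb=3.0, n_obs=365))     # precise, few samples
    print("satellite:", detectable(noise_ppb=18.0, n_obs=20000))  # noisier, many samples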
Arctic permafrost lowlands are often wetlands characterized by overall high methane emissions during summer. However, at the local scale, high methane emissions are usually linked to specific land cover (LC) classes, while other classes have very low emissions or none at all. Therefore, a detailed characterization of the vegetation composition of such lowlands, their heterogeneous mosaic of LC types, and the associated methane fluxes is necessary to quantify overall landscape-scale methane fluxes. In addition, ongoing climate change in Arctic lowlands drives the drying or wetting of classes and results in either gradual shifts between classes or abrupt changes due to disturbances such as shore expansion or lake drainage. These changes are expected to affect the methane budget of Arctic permafrost landscapes. A crucial uncertainty for future carbon cycle projections is the quantitative understanding of the magnitude and speed of such changes affecting the methane cycle of Arctic permafrost regions. Here we describe a new approach to methane emission upscaling for the Arctic Lena Delta, building on a remote sensing-based, dynamic LC classification that takes into account gradual and abrupt LC changes for the period 2000-2020.
The Lena Delta (72.0–73.8° N, 122.0–129.5° E) is the largest Arctic river delta (~29,000 km2) and is underlain by continuous permafrost. The Lena Delta has been a focus area of German-Russian research on methane in Arctic permafrost landscapes for the last 25 years, and a wide range of observational methane datasets has been collected. In prior research, a static LC classification based on a 30 m resolution multispectral Landsat-7 ETM+ image mosaic, composed of three summer images from July 2000 and 2001, provided first insights into delta-wide LC classes and associated methane fluxes (Schneider et al., 2009). Since then, the rapid growth in remote sensing resources and processing capabilities, as well as another decade of methane field data collection, opens the opportunity for an enhanced quantification of LC change and its effects on landscape-scale methane fluxes in the Lena Delta.
The lowland tundra landscapes of the delta are divided into three major geomorphological units: the first terrace comprises the modern and Holocene delta floodplains; the second terrace comprises a Pleistocene fluvial deposition area in the NW part of the delta, which is largely fluvially inactive; and the third terrace comprises Pleistocene ice-rich Yedoma permafrost uplands and deeply incised thermokarst lakes and basins.
Our LC classification approach consists of two main steps: 1) development of a static LC classification using the rich land cover training data available for the central Lena Delta region, and 2) development of a dynamic multi-temporal LC classification based on the knowledge from the static LC map in combination with remote sensing time series data for the 2000 to 2020 period across the entire delta.
First, we performed a static LC classification for the central Lena Delta, building an initial classification on training data of Elementary Sampling Units (ESUs) that included (i) 30 x 30 m vegetation plots from field work in summer 2018 and (ii) additional ESUs assigned from comprehensive field knowledge gathered on numerous Russian-German expeditions to the central Lena Delta. This first robust LC classification was used to train a classifier on 10 m resolution Sentinel-2 satellite data from summer 2018, aggregated in Google Earth Engine (GEE). The LC classes were optimised to capture classes defined by landscape wetness and vegetation types, with the goal of upscaling field-observed methane fluxes from these classes. LC classes are also linked to low and high disturbance regimes in different terraces and landscape settings in the Lena Delta, allowing a grouping into several main classes that can be associated with rates of carbon cycling. For example, classes with a high disturbance regime tend to experience higher carbon accumulation rates and faster cycling of above-ground carbon. In total, 13 classes were differentiated (11 vegetated classes, 1 water class, 1 barren sand class) for the static LC map.
Second, based on the static LC classification, we extended the classification scheme to a dynamic model using additional satellite data for 2000 to 2020 (Sentinel-2 and Landsat-5, -7 and -8) to assess and characterize dynamic LC changes at 30 m resolution in annual and 5-year periods (2000-2005, 2006-2010, 2011-2015, 2016-2020). A flexible GEE classification pipeline was used to allow for dynamic classification schemes. Multi-sensor medoid composite mosaics were derived from cloud-free summer (July-August) imagery for the different time periods under assessment. Inputs for the classification were the visible, near-, short-wave- and mid-infrared multispectral bands, the maximum NDVI, and elevation data from the 2 m resolution ArcticDEM. Training data were derived from the static 2018 LC classification, and a random forest classifier was then applied for the classification; a simplified sketch of this step is given below.
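The GEE Python API sketch below illustrates the general shape of such a classification step; it is a simplified stand-in, not the authors' pipeline: a median composite replaces the medoid mosaic, the training asset name is hypothetical, and the tree count is arbitrary.

    # Simplified GEE sketch: Sentinel-2 composite + NDVI, random forest classifier.
    import ee
    ee.Initialize()   # requires prior Earth Engine authentication

    region = ee.Geometry.Rectangle([122.0, 72.0, 129.5, 73.8])   # Lena Delta bbox
    s2 = (ee.ImageCollection("COPERNICUS/S2_SR")
          .filterBounds(region)
          .filterDate("2018-07-01", "2018-08-31")
          .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
          .median())                                             # medoid simplified to median

    ndvi = s2.normalizedDifference(["B8", "B4"]).rename("NDVI")
    stack = s2.select(["B2", "B3", "B4", "B8", "B11", "B12"]).addBands(ndvi)

    training = stack.sampleRegions(
        collection=ee.FeatureCollection("users/example/lena_esu_plots"),  # hypothetical asset
        properties=["lc_class"], scale=10)

    clf = ee.Classifier.smileRandomForest(numberOfTrees=200).train(
        features=training, classProperty="lc_class",
        inputProperties=stack.bandNames())
    lc_map = stack.classify(clf)   # classified land cover image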
In addition to these dynamic LC components, which also provide 13 classes, we applied further stratification to distinguish selected classes important for methane emissions that were spectrally difficult to differentiate with the multispectral input data from Landsat and Sentinel-2. This includes (1) the differentiation of an additional class of wet polygonal tundra in drained thermokarst lake basins, which is spectrally similar (but functionally different) to wet polygonal tundra on Yedoma uplands of the third terrace, using an ArcticDEM-based extraction of basin landforms; (2) the differentiation of lentic water bodies into lakes of different size categories, building on the experience of the Boreal-Arctic Wetland and Lake Dataset (BAWLD) (Olefeldt et al., 2021); and (3) the differentiation of lotic water bodies into deep and shallow delta channels according to their water depth as detected by winter Sentinel-1 SAR data (Juhls et al., 2021).
Our 20-year time series indicates a partial reduction in wet polygonal LC classes in the Lena Delta, which is particularly visible in the central delta on the ground-ice-rich Yedoma upland surfaces of the third terrace. Here, the class “Polygonal tundra complexes with up to 50% surface water” decreased in area (-4%) and shifted partially to the classes “Polygonal tundra complexes with up to 20% surface water” and “Polygonal tundra complexes with up to 10% surface water”, suggesting enhanced drainage, possibly associated with ice wedge degradation or drying due to general warming. Some other classes experienced minor decreases, such as “dwarf shrub-herb communities”, representative of the drier vegetation communities on the second terrace (-1%), while others increased in area, such as “dry grass to wet sedge complex” (+1%) and “wet sedge complex” (+0.7%). Overall, LC trends between the four observation periods from 2000 to 2020 were rather subtle and continuous between neighboring classes. Abrupt changes were identified only at local scales, where, for example, the drainage of some larger lakes caused abrupt class shifts, or where shore erosion of ice-rich bluffs along delta channels caused large but fairly gradual class shifts. In comparison to the overall LC dynamics, these abrupt changes have so far played only a minor role in the change of LC class areas in the Arctic Lena Delta.
Overall, the dynamic LC remote sensing approach provides a first continuous 20-year observation of LC classes and their shifts in an Arctic delta and proves valuable for assessing LC changes. Attribution of methane observational data to individual classes and a quantification of changes in methane fluxes is work in progress and will be presented at the time of the conference.
References:
Schneider, J., Grosse, G., & Wagner, D. (2009). Land cover classification of tundra environments in the Arctic Lena Delta based on Landsat 7 ETM+ data and its application for upscaling of methane emissions. Remote Sensing of Environment, 113, 380-391. https://doi.org/10.1016/j.rse.2008.10.013
Juhls, B., Antonova, S., Angelopoulos, M., Bobrov, N., Langer, M., Maksimov, G., ... & Overduin, P. P. (2021). Serpentine (floating) ice channels and their interaction with riverbed permafrost in the Lena River Delta, Russia. Frontiers in Earth Science, 9. https://doi.org/10.3389/feart.2021.689941
Olefeldt, D., Hovemyr, M., Kuhn, M. A., Bastviken, D., Bohn, T. J., Connolly, J., Crill, P., Euskirchen, E. S., Finkelstein, S. A., Genet, H., Grosse, G., Harris, L. I., Heffernan, L., Helbig, M., Hugelius, G., Hutchins, R., Juutinen, S., Lara, M. J., Malhotra, A., Manies, K., McGuire, A. D., Natali, S. M., O'Donnell, J. A., Parmentier, F.-J. W., Räsänen, A., Schädel, C., Sonnentag, O., Strack, M., Tank, S. E., Treat, C., Varner, R. K., Virtanen, T., Warren, R. K., and Watts, J. D.: The Boreal–Arctic Wetland and Lake Dataset (BAWLD), Earth Syst. Sci. Data, 13, 5127–5149, https://doi.org/10.5194/essd-13-5127-2021, 2021.
The Spanish National Institute of Aerospace Technology (INTA) acquired the high-resolution Chlorophyll Fluorescence sensor (CFL) in 2019 to join the European scientific community involved in the retrieval of solar-induced chlorophyll fluorescence (SIF) using remote sensing techniques. INTA’s Airborne Hyperspectral System, which has actively participated in airborne hyperspectral campaigns since 1995 with the Airborne Hyperspectral Scanner (AHS) and the Compact Airborne Spectrographic Imager (CASI 1500i), has been notably improved by the incorporation of the CFL.
The CFL is one of the newest hyperspectral sensors of the HYPERSPEC® family from Headwall Photonics Inc. It is a pushbroom sensor with an angular field of view of 23.5° and 1600 across-track spatial pixels. The CFL collects image data across the SIF emission spectrum from 670 nm to 780 nm. The spectral design uses a very narrow passband sampled with up to 2160 spectral pixels to ensure a spectral resolution (FWHM) below 0.2 nm. Spatial and spectral binning can reduce the number of pixels by up to a factor of 4.
The radiometric and spectral characterization as well as the calibration of the CFL sensor are periodically performed at INTA’s facilities. The CFL processing chain, mainly developed in the R programming language, is continuously updated to generate L1 (georeferenced at-sensor radiance) and L2 products (georeferenced ground reflectance and top-of-canopy fluorescence). Imagery orthorectification and atmospheric correction are performed by an in-house set of toolboxes based on, respectively, Applanix data acquired on board INTA’s aircraft and the libRadtran radiative transfer code. The spectral fitting method is used for the retrieval of SIF; a conceptual sketch of this retrieval is given below.
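For orientation, a spectral fitting method of the general kind mentioned above models the at-sensor radiance in an oxygen absorption window as reflected irradiance plus an additive fluorescence term, each represented by a low-order polynomial, and solves the resulting linear system. The Python sketch below is a conceptual illustration, not INTA's R implementation.

    # Conceptual spectral fitting method (SFM): L(wl) = r(wl)*E(wl)/pi + F(wl),
    # with reflectance r and fluorescence F as low-order polynomials,
    # solved by linear least squares within the absorption window.
    import numpy as np

    def sfm_retrieve(wl, E, L, r_deg=2, f_deg=1):
        """Return fitted reflectance and fluorescence spectra in the window."""
        x = (wl - wl.mean()) / (wl.max() - wl.min())            # normalized wavelength
        cols = [(E / np.pi) * x**k for k in range(r_deg + 1)]   # reflected-radiance terms
        cols += [x**k for k in range(f_deg + 1)]                # additive SIF terms
        A = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(A, L, rcond=None)
        r = sum(c * x**k for k, c in enumerate(coef[:r_deg + 1]))
        F = sum(c * x**k for k, c in enumerate(coef[r_deg + 1:]))
        return r, F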
With the acquisition of the CFL, INTA’s Airborne Hyperspectral System is now suitable for projects related to SIF retrieval and to the upcoming ESA Earth Explorer FLEX mission. The capability of the system has been demonstrated through participation in the FLEXL3L4 project (L3 and L4 advanced products for the FLEX-S3 mission; Spanish State Plan for Scientific Research and Innovation), in which INTA leads the calibration and validation (CalVal) work. In the framework of the FLEXL3L4 project, a ground-based and airborne campaign was carried out at the Las Tiesas experimental agricultural site in Barrax, Spain. Comprehensive multiscale observations of radiometric (mainly fluorescence) and biophysical parameters of several crops across the Las Tiesas site were acquired in two hyperspectral flight campaigns. Additionally, during the airborne overpasses, simultaneous top-of-canopy (FLOX, Piccolo system, ASD) and leaf-level (Fluowat) measurements were performed over different land use types.
The geometric and radiometric performance of the CFL L1 and L2 products has been evaluated for the first time using the in-situ measurements from the described field campaign. Furthermore, a first capability assessment of the CFL sensor for FLEX CalVal is reported.
Solar-induced fluorescence (SIF) is known to correlate with gross primary productivity (GPP) (Frankenberg et al., 2011; Guanter et al., 2012; Sun et al., 2018). Although this correlation is not linear (Dechant et al., 2020), it can be used to enhance the accuracy of global carbon cycle assessments and thus to improve currently available dynamic vegetation models. Remotely sensed SIF has been used to assess temporal dynamics of photosynthesis across different biomes (Köhler et al., 2018; Magney et al., 2020; Walther et al., 2016), but its application is especially useful for evergreen-dominated ecosystems. In ecosystems such as boreal or Mediterranean forests, the applicability of conventional reflectance-based indices is strongly limited (Garbulsky et al., 2011) due to the prevalence of evergreen vegetation (Magney et al., 2019). Although SIF has the potential to enhance our capacity to follow the photosynthetic dynamics of evergreen forests and has been widely implemented to do so, the interpretation of SIF in terms of GPP remains challenging. This is because of insufficient knowledge of which mechanisms underlie the spatial and temporal variation of SIF, and how.
Understanding how biochemical, morphological, structural, and photosynthetic factors affect the relationship between SIF and photosynthetic dynamics is essential to interpret SIF in terms of GPP (Porcar-Castell et al., 2021). However, because these factors vary in space and time, investigating their effect on SIF is difficult. Fortunately, this knowledge gap can be conveniently addressed at the leaf level. Leaf-level measurements enable retrieval of a full-range, continuous chlorophyll fluorescence (ChlF) spectrum of approximately 650-850 nm (Lichtenthaler & Rinderle, 1988), in contrast to the narrow Fraunhofer absorption bands in which SIF is retrieved (Meroni et al., 2009). Consequently, the effect of factors such as chlorophyll content (Buschmann, 2007), which influence not only the ChlF level but also the ChlF spectral shape, can be investigated. Moreover, the ChlF-photosynthesis relationship is easier to interpret at the leaf level, because it is not complicated by structural factors such as canopy architecture (Kim et al., 2021). Consequently, interpretation of SIF in terms of photosynthetic dynamics at larger scales depends on our understanding of how various factors affect the ChlF-photosynthesis relationship at the leaf level (Magney et al., 2020; Raczka et al., 2019).
We investigated how biochemical, morphological, and photosynthetic factors affect leaf-level ChlF, in both its magnitude and its spectral shape, across leaves of different species and in response to different growth light environments. We measured full-spectrum leaf-level chlorophyll fluorescence simultaneously with chlorophyll and carotenoid content, specific leaf area, and photochemical and non-photochemical quenching for unstressed leaves of 20 species characteristic of boreal and Mediterranean ecosystems. Data were acquired during three measurement campaigns in Finland (boreal forest in 2017, Helsinki city in 2019) and in Spain (2018). Importantly, the majority of species were sampled at two canopy heights representing contrasting light environments. The location-specific light environments were estimated using digital hemispherical photography (Hemisfer®, WSL, Birmensdorf, Switzerland; Rajewicz et al., 2022, in review).
Results of our study imply that the relationship between ChlF and photosynthesis is affected by biochemical, morphological, and physiological factors that vary between species, light environments, and biomes. Interestingly, these factors may depend on the light environment in a different manner or to a different extent when comparing boreal and Mediterranean ecosystems. Therefore, we suggest that background information on biochemical and morphological differences between leaves within a single ecosystem, or between ecosystems, can enhance the interpretation of ChlF in terms of photosynthesis. That enhancement has, in turn, the potential to support more accurate interpretation of SIF in terms of GPP dynamics and thus may have important implications for current and future carbon cycle studies.
References:
Buschmann, C. (2007). Variability and application of the chlorophyll fluorescence emission ratio red/far-red of leaves. Photosynthesis Research, 92(2), 261–271.
Dechant, B., Ryu, Y., Badgley, G., Zeng, Y., Berry, J. A., Zhang, Y., Goulas, Y., Li, Z., Zhang, Q., & Kang, M. (2020). Canopy structure explains the relationship between photosynthesis and sun-induced chlorophyll fluorescence in crops. Remote Sensing of Environment, 241, 111733.
Frankenberg, C., Fisher, J. B., Worden, J., Badgley, G., Saatchi, S. S., Lee, J., Toon, G. C., Butz, A., Jung, M., & Kuze, A. (2011). New global observations of the terrestrial carbon cycle from GOSAT: Patterns of plant fluorescence with gross primary productivity. Geophysical Research Letters, 38(17).
Garbulsky, M. F., Peñuelas, J., Gamon, J., Inoue, Y., & Filella, I. (2011). The photochemical reflectance index (PRI) and the remote sensing of leaf, canopy and ecosystem radiation use efficiencies: A review and meta-analysis. Remote Sensing of Environment, 115(2), 281–297.
Guanter, L., Frankenberg, C., Dudhia, A., Lewis, P. E., Gómez-Dans, J., Kuze, A., Suto, H., & Grainger, R. G. (2012). Retrieval and global assessment of terrestrial chlorophyll fluorescence from GOSAT space measurements. Remote Sensing of Environment, 121, 236–251.
Köhler, P., Frankenberg, C., Magney, T. S., Guanter, L., Joiner, J., & Landgraf, J. (2018). Global retrievals of solar-induced chlorophyll fluorescence with TROPOMI: First results and intersensor comparison to OCO-2. Geophysical Research Letters, 45(19), 10,456-10,463.
Kim, J., Ryu, Y., Dechant, B., Lee, H., Kim, H. S., Kornfeld, A., & Berry, J. A. (2021). Solar-induced chlorophyll fluorescence is non-linearly related to canopy photosynthesis in a temperate evergreen needleleaf forest during the fall transition. Remote Sensing of Environment, 258, 112362.
Lichtenthaler, H. K., & Rinderle, U. (1988). The role of chlorophyll fluorescence in the detection of stress conditions in plants. CRC Critical Reviews in Analytical Chemistry, 19(sup1), S29–S85.
Magney, T. S., Barnes, M. L., & Yang, X. (2020). On the covariation of chlorophyll fluorescence and photosynthesis across scales. Geophysical Research Letters, 47(23), e2020GL091098.
Magney, T. S., Bowling, D. R., Logan, B. A., Grossmann, K., Stutz, J., Blanken, P. D., Burns, S. P., Cheng, R., Garcia, M. A., & Kӧhler, P. (2019). Mechanistic evidence for tracking the seasonality of photosynthesis with solar-induced fluorescence. Proceedings of the National Academy of Sciences, 116(24), 11640–11645.
Meroni, M., Rossini, M., Guanter, L., Alonso, L., Rascher, U., Colombo, R., & Moreno, J. (2009). Remote sensing of solar-induced chlorophyll fluorescence: Review of methods and applications. Remote Sensing of Environment, 113(10), 2037–2051.
Porcar-Castell, A., Malenovský, Z., Magney, T., Van Wittenberghe, S., Fernández-Marín, B., Maignan, F., Zhang, Y., Maseyk, K., Atherton, J., & Albert, L. P. (2021). Chlorophyll a fluorescence illuminates a path connecting plant molecular biology to Earth-system science. Nature Plants, 7(8), 998–1009.
Raczka, B., Porcar-Castell, A., Magney, T., Lee, J. E., Köhler, P., Frankenberg, C., Grossmann, K., Logan, B. A., Stutz, J., & Blanken, P. D. (2019). Sustained nonphotochemical quenching shapes the seasonal pattern of solar-induced fluorescence at a high-elevation evergreen forest. Journal of Geophysical Research: Biogeosciences, 124(7), 2005–2020.
Sun, Y., Frankenberg, C., Jung, M., Joiner, J., Guanter, L., Köhler, P., & Magney, T. (2018). Overview of Solar-Induced chlorophyll Fluorescence (SIF) from the Orbiting Carbon Observatory-2: Retrieval, cross-mission comparison, and global monitoring for GPP. Remote Sensing of Environment, 209, 808–823.
Walther, S., Voigt, M., Thum, T., Gonsamo, A., Zhang, Y., Köhler, P., Jung, M., Varlagin, A., & Guanter, L. (2016). Satellite chlorophyll fluorescence measurements reveal large‐scale decoupling of photosynthesis and greenness dynamics in boreal evergreen forests. Global Change Biology, 22(9), 2979–2996.
All life on Earth depends on the availability of water. Climate change and wasteful consumption threaten to limit access to water for a large part of the population. Inefficient water management makes agriculture one of the activities that contribute most to this alarming situation. The need for new ideas for more efficient water use is therefore constantly growing, which implies the use of remote sensing (RS) techniques to cover large areas. Reflectance-based RS products, such as vegetation indices, have shown low sensitivity for detecting the effects of water limitation on vegetation before the stress has impacted canopy structural properties. Thermal information is more closely related to water stress in plants, but it is also affected by factors not related to soil water limitation, e.g. wind speed and humidity. Recently, the use of sun-induced chlorophyll fluorescence (SIF) for water stress assessment has gained interest, since it is directly related to photosynthetic activity, which responds dynamically to limitations in water availability. Nevertheless, it is not yet clear how the spatial relation between SIF and soil water content behaves for specific vegetation and soil characteristics. Therefore, in the present study we analyzed the link between airborne SIF and geophysics-based plant available water (PAW) in the root zone of three crops (winter wheat, non-irrigated summer sugar beet, and irrigated potato) during three growing seasons (2018, 2019 and 2020). We found a strong positive correlation (r = 0.92; p < 0.01) when water was a limiting factor, i.e. in the non-irrigated summer crop (sugar beet). The relation disappeared when the level of PAW was sufficient to meet the crops’ water needs, i.e. in irrigated crops or in years with precipitation events (25 l m⁻²) accumulated a few days before data acquisition. An unclear pattern in the relation between winter wheat and PAW might be explained by the advanced growth stage of winter wheat (ripening), when variations in SIF may be influenced by other physiological processes, such as chlorophyll degradation, rather than by the PAW in the root zone. Moreover, the expected response of SIF to a low-PAW zone in the spatial and temporal domains, compared with the enhanced vegetation index (EVI) and surface temperature respectively, is reported in our study for the first time. The presented results contribute to the development of new methodologies for more efficient water use by providing new insights into the role of SIF for real-time assessment of crop water stress. Besides, the current availability of global SIF and soil moisture satellite datasets, such as the TROPOspheric Monitoring Instrument (TROPOMI) SIF and the Soil Moisture Active/Passive (SMAP) products respectively, enables further analysis to improve our understanding of the SIF-soil water content relation at larger scales. A brief insight into this relation will be presented using the example of the European heat wave in summer 2018. For this event, the relationship between SIF and soil moisture for forests was characterized by high soil water content and low SIF values, while croplands showed the opposite trend.
Solar-induced chlorophyll fluorescence (SIF) is an optical signal that can track plant functional status under natural illumination conditions. Because SIF competes with photochemical and non-photochemical energy dissipation processes, it can reflect the dynamic regulation of photosynthesis in the field. SIF retrieval has become feasible thanks to the development of high-spectral-resolution spectrometers and the use of solar or telluric atmospheric absorption features.
Although SIF has been measured from leaf to landscape scale using a variety of instruments and platforms (i.e. towers, drones, aircraft, and satellites), approaches for scaling fluorescence from leaf to canopy are still under investigation. The fluorescence emitted at leaf level differs from the at-sensor fluorescence due to atmospheric and canopy effects. Furthermore, reliable retrieval of SIF is challenging, as the SIF signal is mixed with reflected radiance from plants and contributes only 0.5-5% to the apparent reflectance. Thus, validating and evaluating the quality of SIF and connecting SIF across scales is necessary, especially for satellite products, e.g. from the planned Fluorescence Explorer (FLEX) mission.
In the validation process, understanding the propagation of SIF from leaves to the top of the canopy is one of the most important steps. In an attempt to close this gap, we developed HyScreen, a ground-based line-scan hyperspectral imaging system, to measure SIF and vegetation indices at canopy scale with high spatial resolution (1-1.5 mm when placed 1 m above the canopy). This allows individual leaves to be differentiated, providing a unique opportunity to characterize vegetation structure, for instance to discriminate between shaded and sunlit leaves.
HyScreen consists of two sensors: the FLUO module (FWHM of 0.36-0.41 nm) to measure SIF, and the VNIR module (FWHM of 2.4-4.4 nm) to calculate reflectance and vegetation indices. An in-house processing chain was developed to retrieve SIF (in the O2A and O2B bands) as well as vegetation indices; it includes the radiometric and spectral characterization of both the FLUO and VNIR modules as well as the determination of top-of-canopy upwelling and downwelling radiance.
In this study, to evaluate the performance of the HyScreen system, fluorescent targets (banana and weeping fig leaves) and non-fluorescent targets (soil, peat and reference panels) were measured under clear-sky conditions. Additionally, two soybean genotypes with different chlorophyll content were measured to investigate the system performance when retrieving fluorescence from a complex canopy structure. Non-fluorescent targets showed fluorescence values close to zero, while SIF of sunlit vegetation targets ranged from 1.96-4.62 mW m⁻² nm⁻¹ sr⁻¹ at O₂A and 1.13-4.24 mW m⁻² nm⁻¹ sr⁻¹ at O₂B. The soybean variety with lower chlorophyll content (‘Minngold’, 1.19 and 1.42 mW m⁻² nm⁻¹ sr⁻¹ at O₂A and O₂B) provided higher SIF values than the darker variety (‘Eiko’, 0.49 and 0.76 mW m⁻² nm⁻¹ sr⁻¹ at O₂A and O₂B). Furthermore, for both soybean varieties, SIF values of sunlit leaves were higher than those of shaded leaves: ‘Minngold’ with differences of 0.21 and 1.40 mW m⁻² nm⁻¹ sr⁻¹ at O₂A and O₂B, respectively, and ‘Eiko’ with differences of 0.21 and 1.71 mW m⁻² nm⁻¹ sr⁻¹ at O₂A and O₂B, respectively. At the same time, the soybeans with different chlorophyll content had different SIF ratios (O₂A SIF divided by O₂B SIF): 0.83 and 0.65 for ‘Minngold’ and ‘Eiko’, respectively.
In conclusion, the HyScreen system is a proximal measurement system that allows SIF and vegetation indices to be measured at a small distance above the canopy. It is capable of capturing the spatial heterogeneity and structural parameters of a single plant. Therefore, HyScreen-retrieved SIF and vegetation indices can be used to investigate the influence of canopy structure on canopy-level SIF measurements. Moreover, these data have the potential to be used as ground validation data for larger-scale SIF products recorded by drones or aircraft.
Agriculture has to guarantee food security for a constantly growing population by increasing crop productivity with minimal environmental impact. Remote sensing (RS) for large-scale vegetation assessment is one of the most important tools for meeting this challenge. For years, the implementation of RS techniques for crop assessment has been based mainly on reflectance-based information, e.g. vegetation indices (VIs), which indicate crop stress only after its effect has impacted plant structural properties. The use of sun-induced chlorophyll fluorescence (SIF) may allow earlier crop stress detection, since it is directly related to photosynthetic activity and thus makes it possible to detect subtle (pre-visual) changes in vegetation functioning. RS of SIF has gained the interest of researchers thanks to the recent development of algorithms and models to compute SIF from airborne and satellite sensors. The FLuorescence EXplorer (FLEX) satellite mission of the European Space Agency (ESA) will provide SIF data at global scale with a spatial resolution of 300 m. Despite the great value of such data for tracking large-scale vegetation functional dynamics, there is high interest in studying possible ways to increase its resolution to an intra- or inter-field level. Recent studies have addressed that subject using VIs, evapotranspiration, and land surface temperature as explanatory variables. Yet a more flexible method, capable of working across multiple ecosystems and spatiotemporal scales, is needed. Our hypothesis is that the versatility of fractal geometry, present in numerous spatial and temporal phenomena in nature, allows fractal approaches to address that need. With this study, we aim first to evaluate the existence of fractal geometry in the spatial distribution of SIF-emitting objects based on the presence of the universal power law (PL) and, second, to evaluate whether the aggregation of the SIF signal in SIF-emitting objects across spatial resolutions is scale invariant. For that purpose we used airborne SIF data retrieved over a ~60 ha soybean field in Nebraska, USA (summer 2018). The image was resampled from its original resolution of 1.5 m to 5, 10 and 15 m pixel size. The resampled images were segmented into individual objects, and for each object the total SIF (SIFTOT) was calculated. We found (i) the presence of fractal geometry in the distribution of SIFTOT objects, since they followed the PL at all analyzed scales, and (ii) evidence of scale invariance in the aggregated SIF signal. The second finding was based on the linear increase of the scale factor and the nearly invariant behavior of the dimension factor of the PL equations across spatial resolutions. Both findings constitute a first step towards the use of fractal geometry for SIF downscaling, understood as the fragmentation of coarse-resolution SIF data into the SIFTOT of individual vegetation objects under its footprint. The study described above was accepted for publication as the ‘fractal geometry’ chapter in the Springer Nature Encyclopedia of Mathematical Geosciences and was in ‘in production’ status at the time of this abstract’s submission. Additionally, we investigated possible bi-variate PLs in which a second variable could explain variations in SIFTOT. Interestingly, we found in numerous datasets that the inverse of the (SIF-emitting) object size fits the PL function with SIFTOT at R2 > 0.95.
This finding opens the possibility of practical SIF-downscaling approaches using fractal theory; a minimal sketch of the underlying fit is given below.
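The power-law fitting underlying both findings can be done as a log-log linear regression; in the Python sketch below, the object sizes and SIF sums are invented for illustration, not data from the study.

    # Power-law fit SIF_TOT = a * size^b via log-log linear regression.
    import numpy as np

    size = np.array([4.0, 9, 16, 36, 64, 121, 256, 400])   # object areas (pixels), invented
    rng = np.random.default_rng(1)
    sif_tot = 0.8 * size**1.05 * np.exp(rng.normal(0, 0.05, size.size))  # synthetic SIF sums

    b, log_a = np.polyfit(np.log(size), np.log(sif_tot), 1)
    pred = log_a + b * np.log(size)
    resid = np.log(sif_tot) - pred
    r2 = 1 - resid.var() / np.log(sif_tot).var()
    print(f"scale factor a={np.exp(log_a):.2f}, dimension factor b={b:.2f}, R2={r2:.3f}")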
As a fundamental life process, photosynthesis plays a crucial role not only in food security, but also in the water, energy and carbon exchanges between the land and the atmosphere. Due to its direct link to the photosynthetic light reactions, sun-induced fluorescence is often proposed as one of the most promising remote sensing signals for monitoring photosynthesis in space and time.
However, uncertainties remain about its ability to capture the downregulation of photosynthesis under drought or temperature stress. These uncertainties are mainly related to co-occurring morphological (e.g. leaf angle, leaf folding) and phenological (e.g. change in leaf pigments) changes which affect the optical signal received by the sensor. While fluorescence in the far-red region (F760) is mainly affected by scattering effects, fluorescence in the red region (F687) is affected by reabsorption effects. To differentiate morphological/phenological from physiological effects, it is therefore essential to understand these processes and their influence on red and far-red fluorescence under stress conditions.
We will present results of a mesocosm water manipulation experiment conducted before and during the first heat wave of 2019 (June to July) in Antwerp, Belgium. In five out of 15 mesocosms, drought was induced in Solanum tuberosum (potato) plants. Under clear-sky conditions, we conducted nearly simultaneous measurements of canopy and leaf F687 and F760 with the FLOX hyperspectral spectrometer (JB Hyperspectral Devices GmbH, Düsseldorf, Germany) and the FLUOWAT leaf clip. We analysed the relationship between leaf and canopy measurements of F687 and F760, as well as of the red and far-red fluorescence yields (FY687 and FY760, respectively), under increasing drought and heat stress. By rotating the mesocosms in 90° steps, we simulated a change in the solar incidence angle and analysed its effects on F687, F760, FY687 and FY760.
Our measurements show a positive relationship between leaf and canopy values of F687 and F760, as expected. However, when normalizing these values by APAR to derive fluorescence yields (see the definition below), the relationship between leaf and canopy measurements only holds for FY687. We discuss the effect of changing solar incidence angle, explore possible explanations for the poor relationship between leaf and canopy FY760, and analyse the capability of existing correction methods to address the possible scattering effects on F760 and FY760.
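For reference, the yield normalization referred to above follows the usual definition (notation ours; APAR denotes the absorbed photosynthetically active radiation, APAR = fAPAR x PAR):

    FY687 = F687 / APAR        FY760 = F760 / APAR

so that differences between leaf- and canopy-level yields isolate effects beyond light absorption, such as scattering and reabsorption of the emitted fluorescence.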
The Earth system science and Earth observation communities are showing great interest in SIF retrieval from space. Consequently, ESA has approved the FLEX mission (an Earth Explorer to observe vegetation fluorescence), scheduled for launch in 2024. FLEX is the first mission concept specifically dedicated to monitoring the ‘respiration’ of terrestrial vegetation. However, these space-based observations need to be validated, and vegetation fluorescence needs to be better understood, to be of societal benefit. This requires SIF measurements made near the ground that match the satellite observations in the temporal and spatial domains, which in turn requires that measurements from multiple instruments and multiple platforms be directly compared and analysed. Given the extremely small SIF signal being considered, all instrument systems used will require very accurate calibration and characterization to minimise the uncertainties associated with each, as well as robust, accurate and replicable validation of calibration in the field after instrument deployment.
A number of approaches have been proposed to validate laboratory radiometric calibration in the field, but they have several disadvantages. Some are ‘open path’ or uncooled and need to be assembled and disassembled for transport, which may lead to high levels of uncertainty, or they are impractical for field use (difficult manipulation, no independent power supply). Others cannot be used to validate spectroradiometers on different platforms, such as flux towers or UAVs, during field campaigns, or are suitable only for specific fore-optic designs. Furthermore, none of these systems is designed to validate both radiance and irradiance calibration, which is critical for modern dual-field-of-view spectrometers.
Here, we provide a brief overview of a newly designed portable in-field calibration validation (cVal) system. The system is configured with a radiometric validation module (cValRad) with a thermal control assembly, a spectral validation module (cValSpec) providing uniform emission from multiple spectral calibration lamps, a portable power bank that powers the system for at least 8 hours (more than the time required for a validation test in the field), and a control, monitoring and acquisition system. Since validation is related to the degree of reproducibility of the instrument response, all components included in the validation system have been characterised, calibrated and validated in the laboratory using high-accuracy spectral and radiometric laboratory standards from CETAL, and their capabilities are presented here. The validation system developed here can thus provide significant value for the validation of spectrometer systems used in the field.
Uncertainties related to the FLEX Earth Explorer space observations, which will measure the canopy solar-induced chlorophyll fluorescence (SIF) of various vegetation types, can be assessed not only through field and airborne validation activities but also through dedicated computer modelling using modern, physically based radiative transfer models (RTMs). RTMs are highly efficient in evaluating SIF confounding factors that cannot be directly measured in the field (e.g. impacts of forest woody components) and in revealing their importance in spatial three-dimensional (3D) as well as temporal (diurnal to seasonal) contexts. In this work, we used the 3D Discrete Anisotropic Radiative Transfer (DART) model to analyze the canopy structural impacts of three morphologically contrasting forest types, specifically European beech (Fagus sylvatica), white peppermint (Eucalyptus pulchella) and Norway spruce (Picea abies) stands, on their top-of-canopy (TOC) SIF emissions. While the beech canopy was tall (height of c. 25 m), broadleaf, and characterized by a planophile leaf angle distribution (LAD), the peppermint and spruce canopies were middle-sized (height of c. 15 m), narrow-/needle-leaf, with erectophile and spherical LADs, respectively. 3D DART representations of the stands were created from terrestrial laser scans (TLS) of individual trees of the respective species. Each stand had a canopy cover of around 80% and was simulated for three leaf area index (LAI) classes: low (4-5), medium (7-8), and high (10-11). To ensure full comparability of the modelled results, all forest scenarios shared the same field-measured wood/bark and ground optical properties, the same local-noon solar zenith and azimuth angles, and the same atmospheric composition. Leaf optical properties (including SIF emissions) were simulated with the Fluspect-Cx RTM for a constant fluorescence quantum efficiency (fqe) of 0.02305. DART was set to produce the TOC red (686 nm) and far-red (740 nm) SIF signals (bandwidth of 0.0013 nm) together with 3D SIF radiative budgets (RB) for the two SIF bands, allowing for spatial quantification of the SIF balance (emitted minus absorbed SIF) and the omnidirectional SIF escape factor (SIF balance divided by emitted SIF) within individual 20 cm thick vertical canopy layers. The 3D RB was also simulated for a broad spectral band between 400 and 750 nm, used to calculate vertical canopy profiles of the fraction of photosynthetically active radiation absorbed by green canopy foliar elements (fAPARgreen).
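Given the definitions above (SIF balance = emitted minus absorbed SIF; escape factor = balance divided by emitted SIF), the per-layer bookkeeping is simple arithmetic; the small Python illustration below uses invented profile values, not DART output.

    # Per-layer SIF bookkeeping with the definitions used above.
    import numpy as np

    emitted = np.array([0.02, 0.06, 0.12, 0.20, 0.25])      # per 20 cm layer, bottom to top
    absorbed = np.array([0.018, 0.045, 0.075, 0.09, 0.05])  # reabsorbed within the canopy

    balance = emitted - absorbed    # SIF balance per layer (emitted - absorbed)
    escape = balance / emitted      # omnidirectional SIF escape factor per layer
    print(np.round(escape, 2))      # tends to increase toward the canopy top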
Results revealed that the red SIF of all three species and LAI settings was strongly driven by the LAD functions. Erectophile foliage of the peppermint canopies allowed for higher red SIF scattering and reabsorption, resulting in the lowest red TOC SIF signal. The narrow needle-leaf shape and shoot structure of spruce foliage caused the lowest TOC far-red SIF values across all species and LAI categories. Virtual removal of woody elements (trunks, branches, and twigs) from the DART simulations enabled us to compute the impact of wood shadowing on fAPARgreen, and the wood interactions/obstructions of both red and far-red SIF photons. The largest wood-triggered fAPARgreen decrease was found for spruce stands (45-55%), whereas the decreases in beech and peppermint canopies were much less prominent (10-25%). Similarly, significant wood obstructions, computed as a relative difference between nadir TOC SIF escape factors from canopies with and without wooden parts, appeared for far-red SIF of spruce stands (SIF decrease by 35-45%). A smaller SIF-reducing impact (5-25%) quantified for beech and peppermint stands suggests that wood structures introduce more potential uncertainty into far-red SIF TOC observations for coniferous than for broadleaf trees. Interestingly, we found that wood elements of the two broadleaf species did not obstruct but boosted the TOC red SIF signal by 1-3%. Further examination of the 3D DART SIF balance profiles indicated that this SIF-increasing wood/bark effect took place in the top 20% of the investigated broadleaf canopies. In addition, we found that SIF escapes predominantly from the top 50% of all simulated forest stands, with the relative omnidirectional escape factor increasing from 0.1 to 0.5 with increasing forest height. These results suggest that forest ground Cal/Val undertakings should focus on the upper halves of monitored canopies. Nevertheless, some local exceptions may occur. For instance, contributions of lower vertical layers of up to 0.1 W·m⁻²·nm⁻¹ were noted when modelling red SIF of beech canopies.
Our results demonstrate that state-of-the-art radiative transfer modelling is ready to be included in future FLEX mission Cal/Val activities alongside field and air-/space-borne measurements. Including RTM inputs as variables of interest would allow RTMs to be used as efficient tools for revealing potential uncertainties of FLEX SIF products, especially where experimental measurement is not possible.
Passive microwave sensors have long been invaluable for atmospheric sounding due to their ability to penetrate clouds, in contrast to infrared (IR) sounders, whose coverage is limited to clear atmospheres. However, unlike IR technology, which already provides hyperspectral sensors in widespread use for atmospheric observations, the majority of existing microwave satellites utilize only a small number of channels, limiting the amount of information that can potentially be retrieved about the atmospheric column. The ESA-funded High Spectral Resolution Airborne Microwave Sounder (HiSRAMS) project explores the advantages of novel hyperspectral capabilities in the microwave region, with the goal of demonstrating improvements in the retrieval accuracy of temperature and humidity profiles and evaluating the technology's potential for deployment in future satellite missions. Hyperspectral microwave measurements have the potential to improve the accuracy of NWP models as well as spectroscopic parameterizations of microwave absorption models.
HiSRAMS is a first-of-its-kind system developed by Omnisys Instruments in collaboration with the National Research Council Canada (NRC) and McGill University. The sounder, capable of measuring horizontally and vertically polarized radiances in the 60 GHz oxygen and 183 GHz water vapour bands at 305 kHz native resolution, exploits polyphase FFT filter bank technology. The system also possesses cross-track scanning capability within a 12-degree range around nadir and zenith, and allows great flexibility in measurement mode selection, including the choice of polarization, frequency range, and scanning regime.
This compact airborne prototype has undergone initial flight tests onboard the NRC Convair-580, a research aircraft carrying a suite of atmospheric probes for complementary in-situ and remote sensing measurements. For simplicity, the data collection focussed primarily on sampling in clear-air conditions and over lake surfaces in North America. In this presentation, we provide an overview of the HiSRAMS specifications and show results of the first airborne radiation-closure tests against synthetic brightness temperature spectra simulated using in-situ pressure, temperature and humidity data.
Snow and ice properties control physical and biological processes on polar ice sheets and mountainous glaciers, and they strongly affect the net solar radiation that regulates melt processes and the associated impacts on sea level rise. The amount of solar radiation absorbed by the surface increases when ice gets darker, which is mainly caused by liquid water and small light-absorbing particles (LAP) such as algae, soot, and dust accumulating on the surface and reducing its brightness. Thus, quantitative mapping of snow and ice properties on a global scale is of particular importance, as it provides valuable input to climate models and helps to understand the underlying processes. A new generation of orbital imaging spectrometers provides the technical prerequisites to achieve this objective and will deliver high-resolution data both on a global scale and on a daily basis, which calls for independently applicable retrieval algorithms. We present a novel method to retrieve grain size, liquid water content, and LAP mass mixing ratio from spaceborne imaging spectroscopy acquisitions. The methodology relies on accurate simulations of both the scattering and absorptive properties of snow and ice and uses a joint retrieval of atmosphere and surface components based on optimal estimation (OE). This inversion technique leverages prior knowledge obtained from simulations by a snow and ice radiative transfer model and enables a rigorous quantification of retrieval uncertainties and posterior error correlation. For this purpose, we exploit statistical relationships between surface reflectance spectra and snow and ice properties to estimate their most probable quantities given the reflectance. To test this new algorithm, we conduct a sensitivity analysis based on top-of-atmosphere radiance spectra simulated for the upcoming EnMAP orbital imaging spectroscopy mission, demonstrating accurate estimation of snow and ice surface properties. A validation experiment using in-situ measurements of glacier algae mass mixing ratio and surface reflectance from the Greenland Ice Sheet gives uncertainties of ±16.4 μg/g(ice) and less than 3%, respectively. Finally, we evaluate the potential of the presented algorithm for a robust global product that maps snow and ice surface properties corrected for latitudinal and topographic biases, including a rigorous quantification of uncertainties.
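As a rough illustration of the OE machinery described above, the following sketch computes the MAP state estimate and posterior covariance for a linear toy forward model; the real retrieval iterates this update with a full snow/ice and atmosphere radiative transfer model, and all matrices here are illustrative assumptions.

```python
import numpy as np

# Minimal optimal-estimation sketch with a *linear* toy forward model y = K x.
rng = np.random.default_rng(0)
n_state, n_obs = 3, 50                     # e.g. grain size, liquid water, LAP
K = rng.normal(size=(n_obs, n_state))      # Jacobian of the forward model
x_true = np.array([0.5, 0.1, 0.3])
S_e = 0.01 * np.eye(n_obs)                 # observation-error covariance
S_a = 1.0 * np.eye(n_state)                # prior covariance
x_a = np.zeros(n_state)                    # prior mean

y = K @ x_true + rng.multivariate_normal(np.zeros(n_obs), S_e)

# MAP solution: x = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K x_a)
S_e_inv = np.linalg.inv(S_e)
S_post = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))  # posterior cov.
x_hat = x_a + S_post @ K.T @ S_e_inv @ (y - K @ x_a)
print(x_hat, np.sqrt(np.diag(S_post)))     # estimate and 1-sigma uncertainties
```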
Analysis-Ready Data (ARD) from Hyperspectral Sensors—The Design of the EnMAP L2A Land Product
Martin Bachmann, Kevin Alonso, Emiliano Carmona, Birgit Gerasch, Martin Habermeyer, Stefanie Holzwarth, Harald Krawczyk, Maximilian Langheinrich, David Marshall, Miguel Pato, Nicole Pinnel, Raquel de los Reyes, Mathias Schneider, Peter Schwind and Tobias Storch
German Aerospace Center (DLR), Earth Observation Center (EOC), Germany
Corresponding author: martin.bachmann@dlr.de
Abstract
With the increasing availability of data from research-oriented spaceborne hyperspectral sensors such as EnMAP, DESIS and PRISMA, and in order to prepare for the upcoming global hyperspectral mapping missions CHIME and SBG, the provision of well-characterized analysis-ready hyperspectral data is of increasing interest.
Within this presentation, the design of the EnMAP Level 2A Land product is illustrated, highlighting the processing steps necessary for CEOS Analysis Ready Data for Land (CARD4L) compliant data products. This includes an overview of the design of the metadata and quality layers. The main focus is on the necessary pre-processing chain and the challenges these procedures entail.
The processing of the archived raw L0 data to L1B user products includes the radiometric calibration to Top-of-Atmosphere (TOA) radiance, an advanced approach for the interpolation of defective pixels, and the correction of non-linearity, straylight and other sensor-related effects. The L1B product is also fully spectrally referenced, taking spectral smile into account where required.
Next, for generating the L1C products, the orthorectification includes co-registration to a Sentinel-2 global master image and uses the Copernicus DEM (GLO-30). These design choices ensure a high relative geometric consistency between EnMAP and Sentinel-2 data, enabling easy integration into multi-sensor time series.
Finally, the atmospheric correction to L2A products allows for the generation of a “land” product (Bottom-of-Atmosphere (BOA) reflectance) as well as two “water” products (BOA water-leaving reflectance and BOA subsurface irradiance reflectance). The L2A water algorithm is based on the Module Inversion Program (MIP) by EOMAP, while the EnMAP Level 2A land processor is based on DLR’s PACO (Python-based Atmospheric Correction). PACO is a descendant of the well-known ATCOR and is also implemented as the L2A processor within the DESIS ground segment. Because of this heritage, its advantages and shortcomings are well understood, and its good overall performance has been demonstrated in numerous comparison studies.
Starting with the L1B product, the full set of quality-related metadata and quality layers generated by the L1C and L2A processors is provided. This includes per-pixel flags for Land, Water, Cloud, Cloud Shadow, Haze, Cirrus and Snow, as well as for Saturation, Artefacts and Interpolation, together with a per-pixel quality rating. In addition, important quality-related parameters are provided in the metadata, e.g. the percentage of saturated pixels, the scene-mean Aerosol Optical Thickness and Water Vapour content, and the RMS error of the geolocation based on independent check points.
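As an illustration of how such per-pixel quality layers are typically combined on the user side, here is a minimal sketch; the layer names and boolean encoding are assumptions for illustration, not the actual EnMAP product layout.

```python
import numpy as np

# Hypothetical per-pixel quality layers (True = flagged) for a tiny 2x2 scene.
cloud = np.array([[0, 1], [0, 0]], dtype=bool)         # cloud flag
cloud_shadow = np.array([[0, 0], [1, 0]], dtype=bool)  # cloud-shadow flag
saturation = np.array([[0, 0], [0, 1]], dtype=bool)    # saturation flag

# A pixel is usable for analysis only if it passes all screening flags.
usable = ~(cloud | cloud_shadow | saturation)
print(usable)
```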
Thanks to this operational approach, end users of EnMAP will be provided with ARD products including rich metadata and quality information, which can readily be integrated into analysis workflows and combined with data from other sensors.
Accurate and thematically detailed map products representing the intra-annual distribution of vegetation cover are crucial for a variety of environmental applications, e.g., for monitoring ecosystem disturbances, productivity or health. Open archives of dense temporal multispectral Landsat and Sentinel-2 data together with powerful processing workflows have significantly advanced the intra-annual analysis of vegetation cover during the past decade. With recent and upcoming scientific (e.g. PRISMA, EnMAP) and operational (e.g. CHIME, SBG) spaceborne imaging spectroscopy missions, multitemporal hyperspectral data will complement these multispectral archives. The high spectral information content is expected to facilitate more detailed and quantitative vegetation analyses. However, a well-founded understanding of the benefits of spaceborne hyperspectral data compared to multispectral satellite data is still missing. Moreover, generalized processing workflows that optimally exploit the rich spectral information content are required for an automated production of vegetation cover maps with regular intra-annual update cycles.
This study presents a processing workflow for generating standardized, intra-annual vegetation fraction maps from EnMAP data. The workflow comprises the development of a combined multi-date and multi-site spectral library, synthetic training data generation from the combined spectral library and subsequent regression-based unmixing for fractional cover estimation based on the synthetic training data. The workflow was tested on simulated EnMAP data derived from AVIRIS-Classic imagery with regional coverage over three study sites in California. Imagery for each site was acquired in spring, summer and fall 2013, thus representing the intra-annual distribution of vegetation cover during the different key phenological phases of a year. The study sites comprised a variety of different natural and semi-natural Mediterranean-type ecoregions with diverse vegetation assemblages and ecotones, i.e. transitions between grasslands, shrublands and woodlands/forests.
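The core of the regression-based unmixing step can be sketched as follows, assuming a labelled spectral library; the synthetic-mixing scheme shown (random binary linear mixtures) is a simplified stand-in for the full training data generation described above, and all spectra are random placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_lib, n_bands = 40, 200
library = rng.random((n_lib, n_bands))     # placeholder library spectra
labels = rng.integers(0, 4, n_lib)         # 4 cover classes

def synthetic_mixtures(target_class, n_mix=500):
    """Linearly mix random library spectra; target is the class fraction."""
    X, y = [], []
    for _ in range(n_mix):
        i, j = rng.integers(0, n_lib, 2)
        f = rng.random()                   # random mixing fraction
        X.append(f * library[i] + (1 - f) * library[j])
        y.append(f * (labels[i] == target_class) +
                 (1 - f) * (labels[j] == target_class))
    return np.array(X), np.array(y)

# One regressor per class predicts that class's fractional cover per pixel.
X, y = synthetic_mixtures(target_class=0)
model = RandomForestRegressor(n_estimators=100).fit(X, y)
fractions = model.predict(rng.random((10, n_bands)))
print(fractions)
```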
Results demonstrated the great utility of our regression-based unmixing workflow for producing accurate, intra-annual fraction cover maps for needleleaf trees, broadleaf trees, shrubs, herbaceous vegetation and non-vegetation. Average Mean Absolute Errors (MAE) over all classes were below 13%, and class-wise MAE were between 3 and 20%. Compared to discrete classification maps representing the dominant class per pixel, our vegetation cover fraction maps provided a more realistic representation of the ecoregions and ecotones. This particularly applied to areas comprising sparse vegetation cover (e.g., arid shrublands), multiple vegetation assemblages (e.g., open-canopy woodlands or mixed forest) or vegetation cover at different successional stages (e.g., recovery on formerly disturbed areas). The use of a combined multi-date and multi-site spectral library enabled generalized unmixing models, i.e., single models per class that were applied across all sites and dates. No loss in map quality was found when compared to site- and date-specific unmixing models, indicating the great value of such combined libraries for model generalization. Relative comparison to multispectral data revealed the superiority of hyperspectral EnMAP data, particularly for disentangling the fractional cover of different woody-vegetation types.
Our study exploited simulated imagery representative of the hyperspectral EnMAP mission. Due to the generalizing capabilities of our workflow, we are confident that the approach can be similarly applied to forthcoming operational spaceborne imaging spectroscopy data from the CHIME or SBG missions. Given the diversity of vegetation cover within the analyzed ecoregions and ecotones, our study sites depict a representative cross-section of structurally similar natural and semi-natural ecosystems globally. We therefore conclude that our findings provide a vital stepping stone toward wall-to-wall, intra-annual vegetation cover fraction maps from global spaceborne imaging spectroscopy missions.
Remote sensing over coastal and inland waters is entering a new phase with the launch of the hyperspectral sensors EnMAP, DESIS and PRISMA and with the upcoming PACE mission. It is expected that, by exploiting hyperspectral data, satellite products can be improved and new algorithms developed, advancing water colour remote sensing in these (mostly) optically complex waters. One important step of data processing is the atmospheric correction (AC), which aims to remove the atmospheric, surface and bottom influences from the signal measured by the sensor at the top of the atmosphere, along with other influences (e.g. adjacency effects). The remaining water signal is the main input to the algorithms and is usually only a small percentage of the total signal measured by the sensor. Thus, the quality of the retrievals strongly depends on a successful AC and on the radiometric stability of the sensors. In preparation for the EnMAP mission, we evaluated the Polymer AC algorithm applied to data from DESIS and PRISMA. Polymer is a spectral matching algorithm in which atmospheric and oceanic signals are obtained simultaneously using the full available spectrum. It is available as a Python package and has been widely applied to ocean colour sensors. In this presentation, we will show first results of Polymer AC applied to Level 1 data of DESIS, PRISMA and the Sentinel-2 MultiSpectral Instrument (S2-MSI) over coastal and inland waters. The Level 2 radiometric and chlorophyll-a (Chl-a) retrievals from the different sensors are intercompared and validated against in situ measurements collected by AERONET-OC stations and field campaigns (Tagus estuary, Lake Constance). First results of Polymer applied to DESIS data in different study regions show a spatial distribution of Chl-a similar to S2-MSI.
The quality and ecological status of inland and coastal waters (ICWs) is a key worldwide issue because of the multiple and conflicting pressures from anthropogenic perturbation and environmental change. Timely monitoring of ICWs is therefore necessary to enhance our understanding of their functions and of the drivers impacting them, and to deliver effective management.
Earth Observation (EO) may be used for acquiring timely, frequent synoptic information on ICWs from local to global scales. EO data have been successfully applied to mapping waterbodies for decades, even though the current satellite radiometers are designed for observing the global ocean (e.g. Sentinel-3 OLCI) or the land surface (e.g. Sentinel-2 MSI, Landsat 8 OLI, Landsat 9 OLI-2) and are not specifically suited for observing processes and phenomena occurring in ICWs. These aquatic ecosystems can be a mixture of optically shallow and optically deep waters, with gradients from clear to turbid and from oligotrophic to hypertrophic productive waters, and varying bottom visibility with and without the presence of aquatic vegetation (floating or submerged). Deriving ICW quality products from the existing sensors thus remains challenging, due to their optical complexity as well as the spatial and temporal resolution of the imagery.
PRISMA (PRecursore IperSpettrale della Missione Applicativa), the new hyperspectral satellite sensor of the Italian Space Agency (ASI), in orbit since March 2019, provides data with high spectral resolution and good radiometric sensitivity, able to resolve small changes in the signal relative to the noise of the sensor and the atmosphere (i.e., high radiometric resolution and high signal-to-noise ratio). Moreover, the spatial and spectral resolutions of PRISMA are well suited for the retrieval of multiple biophysical variables, such as optically active water constituents (chlorophyll, suspended and coloured dissolved organic matter) and phycocyanin. The PANDA-WATER project (PRISMA Products AND Applications for inland and coastal WATER), funded by ASI, aims to demonstrate the capabilities of PRISMA hyperspectral imagery for measuring ICWs and to evaluate its suitability and gaps in addressing inland and coastal ecosystem science and management challenges.
The overall objective of PANDA-WATER is to provide a set of innovative and validated products, derived from imaging spectrometry, that enables the retrieval of additional variables of interest for inland and coastal ecosystems. The novelty of the PANDA-WATER products will stem from the application of state-of-the-art algorithms, adapted to ICWs, to PRISMA's increased spatial, spectral and radiometric resolution, resulting in augmented observational capabilities and lower associated uncertainties compared to the current Copernicus missions. These products will range from more accurate estimates of optically active water constituents to more sophisticated products, such as particle size distributions, the distinction of sources of suspended and coloured dissolved matter, water depth, natural or artificial materials floating on the surface, attenuation coefficient and euphotic depth, and the presence of cyanobacteria and harmful algal blooms.
The product development carried out within PANDA-WATER from PRISMA data will also be suitable for the upcoming hyperspectral missions (i.e. DLR EnMAP, Copernicus CHIME, NASA’s PACE and SBG), thus contributing to the global advance in spaceborne imaging spectrometry.
The Italian PRISMA (PRecursore IperSpettrale della Missione Applicativa) satellite mission was launched in March 2019 and acquires images on demand worldwide. PRISMA is currently the only operational satellite acquiring hyperspectral data in the spectral range between 402 and 2496 nm (30 m/pixel) with 234 bands, together with a panchromatic camera (5 m/pixel). Several countries, at different stages of development, have similar payloads planned or close to launch, such as the Environmental Mapping and Analysis Program (EnMAP) from DLR, Surface Biology and Geology (SBG) from NASA-USGS, and the CHIME mission of ESA. Hyperspectral data collected at VNIR-SWIR wavelengths have been widely reported in the literature for mapping geological outcrops and mineral absorption features associated with transition metals (i.e., Fe, Mn, Cu, Ni, Cr, etc.) and with alteration minerals that display absorption features associated with Mg-OH and Al-OH bonds.
In this preliminary research study, the potential of PRISMA for geological outcrop mapping was tested using a spectral-based methodology. The steps followed for the PRISMA imagery were: (1) PRISMA L2D data (georeferenced reflectance, version 2.0.5) were requested and downloaded from the ASI PRISMA website; (2) vegetation and water masking; (3) assessment and definition of the main absorption band depths of the geological outcrops' absorption features in unmasked pixels of the study area; (4) application of spectral classification techniques, i.e., Continuum Removal Band Depth (CRBD) and the Support Vector Machine (SVM).
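To illustrate step (4), the following sketch implements a basic continuum removal and band-depth calculation on a synthetic absorption feature; it shows the principle behind CRBD rather than the exact processing chain used in the study.

```python
import numpy as np

def continuum_removed(wavelengths, reflectance):
    """Divide a spectrum by its upper convex hull (continuum removal)."""
    hull = []
    for p in zip(wavelengths, reflectance):
        # Pop points that would let the hull dip below the chord (upper hull).
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    hx, hy = zip(*hull)
    continuum = np.interp(wavelengths, hx, hy)
    return reflectance / continuum

# Synthetic spectrum with a Gaussian absorption feature near 2200 nm.
wl = np.linspace(2000.0, 2400.0, 40)
refl = 0.5 + 1e-4 * (wl - 2000) - 0.15 * np.exp(-((wl - 2200) / 30) ** 2)
cr = continuum_removed(wl, refl)
band_depth = 1.0 - cr.min()         # CRBD-style band depth
print(band_depth, wl[cr.argmin()])  # depth and wavelength of the minimum
```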
Field campaigns for collecting ophiolitic and other rock samples were carried out in the Val di Taro area and at the Shadan porphyry gold deposit in November 2020 and June 2021, respectively. Moreover, laboratory spectroscopy (ASD reflectance measurements), X-ray diffraction (XRD) and scanning electron microscopy (SEM) analyses were performed on the rock samples collected in the study areas.
Reasonable agreement between ground spectral measurements, laboratory analyses and PRISMA data was found; this was also verified by visual field checks in different accessible areas of Val di Taro and by comparing the PRISMA outcrop map with a detailed (1:10,000) geological map of the main deposits of the area. The results also show a good correlation between PRISMA and the alteration map of the Shadan deposit.
This preliminary study provides the remote sensing community with a first evaluation of real PRISMA data quality, in terms of radiometric and spectral accuracy, for geological and mineral mapping based on suitable ground measurements in a mountain area of the Italian Northern Apennines, and informs the Italian Space Agency (ASI) and interested users about the potential of PRISMA hyperspectral satellite data. However, two factors must be considered in this kind of application: (a) the complex geometries of the acquired surfaces (e.g., the study area is a fragmented mountain area), which affect the spectral quality (atmospheric correction) and SNR of hyperspectral data; (b) the complex geological outcrops composed of different mineral assemblages, which make their spectral identification and recognition challenging.
The method presented here could reduce the time and effort spent in the field, because it leads to effective mapping of geological outcrops, thus facilitating the exploration of the Earth's surface at different scales and on a variety of platforms, while in no way replacing the field geologist. It could be a valuable aid in selecting geological and mineral outcrops, serving as an initial survey for geologists and as a first step towards filling the gap in satisfactory knowledge of surface geological outcrops, including exposures of naturally occurring asbestos rocks. This is also relevant in view of a green energy transition that requires, among other things, cost-effective, socially acceptable and rapid exploration methods to map and preserve existing deposits, as well as the opportunity to discover new ones.
Keywords: hyperspectral satellite data, geological outcrop mapping, asbestos minerals, quartz-carbonate alteration, potassic alteration, propylitic alteration, sericite, PRISMA
Olive tree cultivation (Olea europaea) and olive oil production have accompanied humankind since time immemorial. Throughout the various civilizations and to the present day, olive trees and olive oil have occupied a central role in the agricultural scenery and income of Mediterranean countries and in their commerce with neighbouring populations. Globally, olive oil production has tripled in the last 60 years, reaching 3.2 Mt in 2019/2020, of which almost 90% is produced by Mediterranean countries, the main producers being Spain and Italy. In Italy, olive oil production reached 331 kt in 2019/2020, of which 5% was produced in Tuscany, where olive tree cultivation is one of the main agricultural activities. According to the latest collected data, in 2020 the total cultivated area of olive trees covered approximately 90,000 ha, contributing a total production of 117 kt of olives, 15 kt of olive oil and a value of almost 130 million euros.
In the last 15 years, the total cultivated area of olive trees in Tuscany has decreased by about 6%. Of the total surface reported in 2021, 11% has been declared non-productive. These data may highlight a trend of abandonment of olive cultivation, which might depend on various factors, such as increasing economic interest in cheaper seed oils or the occurrence of adverse climatic conditions that threaten olive production, as happened this year, and demotivate smallholder farmers from investing in olive cultivation. The abandonment of olive trees not only has an economic drawback for the region, but also leads to a phytosanitary emergency, as unmanaged olive yards might become outbreak origins for disease propagation.
The Regional Administration has therefore shown increasing interest in monitoring land use across the territory and in detecting the abandonment of olive cultivation, in order to develop an efficient plan for land requalification and/or reconversion and to deliver accurate financial support to local farmers.
To monitor land use and classify crops, remote sensing plays a key role taking advantage of the aerial and satellite imagery available today.
To date, the methods used by the Region for land monitoring, based on photointerpretation of high-resolution airborne imagery, do not address the problem accurately: the available datasets are imprecise when it comes to defining the actual extent of cultivated olive areas, and even more so when identifying abandoned yards. It is widely recognized that monitoring olive trees is complex, as olive plants are perennial and the management of the crop and of the underlying soil cover may vary from farm to farm. To overcome this challenge, it is fundamental to build stronger classification models that combine high-resolution imagery with both historical time series and detailed spectral signatures.
Within this framework, PRISMA hyperspectral satellite imagery delivered by ASI is tested as a tool to retrieve and classify olive cultivations and the probability of land abandonment, also in combination with long-term multispectral datasets from other satellite missions and with a high-resolution airborne dataset used for validation.
For the development of the model, we combine the following datasets:
• high resolution airborne visible images acquired in 2019
• high resolution hyperspectral airborne (HySpex sensor) data acquired in 2020
• multispectral reflectance time series obtained from Sentinel-2 from 2019 to 2021
• reflectance spectra obtained by PRISMA hyperspectral images from 2019 to 2021
• multispectral reflectance time series obtained from Landsat 5 and 7 from 1984 to 2021
Airborne visible images at high resolution (15 cm) are used for visual ground truthing and for building a library of about 200 olive-cultivated areas in the study region (Grosseto, Tuscany, IT). Long-term reflectance time series from Landsat and Sentinel-2 are used to retrieve the temporal signature, including both the phenological variability and the long-term changes associated with gradual land use changes such as land abandonment. Airborne hyperspectral data acquired at the same time as a PRISMA scene are used to investigate the different spectral signatures of individual olive trees and of soil under different managements, at a spatial resolution (1.5 m VIS-NIR and 3 m SWIR) capable of distinguishing the two contributions to the spectra.
The classification methods are based on a random forest and a pattern-recognition artificial neural network framework, combining both the spectral and the temporal variability in pixel-based mode; a minimal sketch of this feature combination follows the list below. Our results provide relevant information on:
1. seasonal trends, with distinct patterns for grass-cover and soil management practices (tillage, etc.)
2. a multi-year trend of vegetation growth for abandoned olive trees under no maintenance or management
3. the spatial and temporal variability of olive tree and soil spectral signatures.
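The sketch below illustrates the pixel-based combination of spectral and temporal features referenced above, assuming per-pixel PRISMA spectra and multispectral index time series have already been extracted; all arrays and class labels are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_pixels = 1000
spectral = rng.random((n_pixels, 230))   # PRISMA reflectance spectrum per pixel
temporal = rng.random((n_pixels, 72))    # e.g. monthly NDVI values, 2019-2021
X = np.hstack([spectral, temporal])      # joint spectral + temporal features
y = rng.integers(0, 3, n_pixels)         # 0=managed, 1=abandoned, 2=other

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))             # held-out classification accuracy
```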
Given the PRISMA spatial resolution (30 m), which necessarily combines both the olive plants and the soil in the same grid cell, we finally investigate the contributions of the different soil types and of the plants to the average grid-cell signature, highlighting the capabilities and limitations of PRISMA in the detection of this type of mixed landscape, which is also typical of other conservative agricultural and agroforestry practices.
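As a sketch of this grid-cell decomposition, the following assumes olive-canopy and soil endmember spectra (e.g. from the airborne HySpex data) and solves for their fractional contributions by non-negative least squares; all spectra here are random placeholders.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_bands = 230
olive = rng.random(n_bands)                   # olive-canopy endmember spectrum
soil = rng.random(n_bands)                    # soil endmember spectrum
E = np.column_stack([olive, soil])

# Synthetic 30 m grid-cell spectrum: 30% olive, 70% soil, plus noise.
pixel = 0.3 * olive + 0.7 * soil + 0.01 * rng.normal(size=n_bands)
fractions, residual = nnls(E, pixel)
fractions /= fractions.sum()                  # normalize to sum to one
print(fractions)                              # approximately [0.3, 0.7]
```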
Hyperspectral imagery has immense potential for oceanography, as it contributes greatly to the understanding of marine ecosystems and provides valuable information about the unique characteristics of different aquatic systems. However, it also brings significant challenges: the high cost of hyperspectral sensors; the difficulty of maintaining a reasonable signal-to-noise ratio for bottom-of-atmosphere reflectance over a narrow spectral band; and large data volumes that demand substantial computational resources. Due to these limitations, multispectral missions have long been the primary source of optical remotely sensed data.
The Black Sea coastal waters in the vicinity of the Danube Delta are especially challenging for ocean colour remote sensing due to the complex properties of water of different origins: riverine water mixes with marine water, the waters are classified as Case 2, and their non-pigmented particle concentration does not covary in a predictable manner with the chlorophyll-a concentration. Standard bio-optical algorithms often fail to describe the complexity of the Black Sea.
For this type of turbid coastal water, the combination of multi- and hyperspectral imagery can contribute greatly to understanding the particularities of complex water basins and provide additional insights into processes at the sea surface. In this study, we compare available PRISMA and Sentinel-2 images in order to better identify the surface signature of riverine water in the Black Sea coastal waters near the Danube Delta. We compare the remote sensing reflectance signal from both sensors, analyze the characteristic reflectance of different coastal regions and water types, and draw conclusions about the benefits of using hyperspectral images.
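A common step in such intercomparisons is band-averaging the hyperspectral reflectance with the multispectral sensor's spectral response functions (SRFs); the sketch below uses assumed Gaussian responses and approximate S2-MSI visible band centres for illustration only (a real comparison would use the published SRFs).

```python
import numpy as np

wl = np.linspace(400.0, 900.0, 250)                    # hyperspectral grid (nm)
rrs = 0.005 + 0.003 * np.exp(-((wl - 560) / 40) ** 2)  # placeholder Rrs spectrum

def band_average(center, fwhm):
    """Average the spectrum over an assumed Gaussian spectral response."""
    sigma = fwhm / 2.355
    srf = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    return np.trapz(srf * rrs, wl) / np.trapz(srf, wl)

# Approximate centre/FWHM values for three S2-MSI visible bands (assumed).
for center, fwhm in [(490, 65), (560, 35), (665, 30)]:
    print(center, band_average(center, fwhm))
```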
PRISMA (PRecursore IperSpettrale della Missione Applicativa) is a pre-operational hyperspectral sensor developed by the Italian Space Agency (ASI). Launched in March 2019, the PRISMA mission is mainly devoted to expert users, such as scientific researchers, Earth Observation private companies and institutional organizations, interested in algorithm implementation, product and application development, and environmental mapping and monitoring. In the framework of the PRISCAV project (Scientific CAL/VAL of PRISMA mission), funded by ASI and started in 2019, ground-based and airborne Fiducial Reference Measurements (FRM) simultaneous with PRISMA overpasses over different targets (agriculture, forest, sea, inland and coastal water, snow) were gathered to assess PRISMA radiometric performance.
In this context, an evaluation of remote sensing reflectances (Rrs) derived from the PRISMA hyperspectral imager was performed within the visible and near-infrared (VNIR) range over inland and coastal sites. Sentinel-3 OLCI imagery and above-water in situ reflectance measurements from autonomous hyper- and multispectral radiometer systems were used to evaluate the performance of the PRISMA Level-2D (L2D) surface reflectance, a standard product distributed by ASI. PRISMA L2D products were also compared to Rrs data derived from the atmospheric correction tool ACOLITE, adapted for PRISMA processing.
In this study, three optically diverse Italian sites, equipped with fixed-position autonomous multispectral and hyperspectral radiometer systems, were selected for the comparison: Lake Trasimeno, a shallow and turbid lake in central Italy; the Acqua Alta Oceanographic Tower (AAOT), located 8 nautical miles off the lagoon of Venice in the Adriatic Sea and characterized by clear to moderately sediment-dominated waters; and the Oceanographic Observatory (OO), located about 3.3 nautical miles southwest of the island of Lampedusa, where oligotrophic waters and stable conditions are present.
At the time of submission, a total of 26 PRISMA images, 30 OLCI L2-Water Full Resolution (WFR) products, and the available synchronous in situ reflectance measurements had been collected for the match-up analysis. Common statistical metrics were used for the quantitative assessment, considering each single site as well as the combined dataset. The results demonstrated the good performance of PRISMA over the range of optical properties that characterize the three investigated waterbodies. Overall, ACOLITE Rrs showed lower uncertainties, better correlation and closer spectral similarity to the in situ measurements than PRISMA L2D, especially in the central part of the VNIR, between 450 and 600 nm. Compared with PRISMA L2D, the ACOLITE outputs were also more consistent with the concurrent OLCI L2-WFR data, resulting in significant improvements over the PRISMA standard products in the blue spectral region.
Besides representing a key element of the PRISCAV project, this study will also be relevant for aquatic ecosystem applications with the upcoming spaceborne hyperspectral missions, such as the Copernicus Hyperspectral Imaging Mission (CHIME), NASA Surface Biology and Geology (SBG), Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) and the DLR Environmental Mapping and Analysis Program (EnMAP), and it feeds into the pre-formulation studies of the PRISMA Second Generation (PSG).
Crop simulation models estimate yield gaps and help determine the underlying Genetic (G), Environment (E) and Management (M), i.e. G×E×M, factors impacting yield. Crop models are therefore critical components of agricultural monitoring and early warning systems. Earth observation image data track crop growth and development frequently over large areas, so they are increasingly used to drive or improve the models. These data typically consist of a few spectral broadbands over the optical range (400-2500 nm). Such bands are too coarse to distinguish many important absorption/reflectance features related to crop yield. Hyperspectral image data, on the other hand, consist of many narrowbands (< 10 nm) that are sensitive to these features, but they are in short supply. PRISMA is the first hyperspectral Earth observation platform in nearly 20 years and is a precursor to several upcoming missions.
In this study we provide a first assessment of PRISMA narrowbands for estimating end-of-season crop biomass and yield for four important food crops (corn, rice, soybean, wheat) at key stages of crop development (vegetative, reproductive, maturity). Reference data were collected in a field campaign in 2020. They consisted of 60 m × 60 m survey frames over which dry-weight crop biomass and yield samples were collected. We compared performance against Sentinel-2, which is increasingly used for agricultural monitoring due to its relatively high spatial, spectral (red-edge and near-infrared narrowbands) and temporal resolution. The evaluation was performed in two stages. First, we used partial least squares regression (PLSR) to uncover known and unexpected spectral features in the PRISMA data. Second, we used random forest to predict yield with PRISMA and Sentinel-2 data. The PLSR analysis confirmed the expected relationships between spectra and crop biomass/yield at the vegetative and reproductive stages. These relationships diminished during maturity, when photosynthesis declines. The PLSR analysis also revealed that narrowbands in the near infrared had less influence on crop yield estimation than anticipated. We suspect unusual data spikes in the near infrared may have been the cause. The PRISMA and Sentinel-2 random forest models were able to estimate end-of-season biomass (R2=0.67, 0.58) and yield (R2=0.62, 0.59) reasonably well. Predictions were strongest at the vegetative and reproductive stages of development. Shortwave infrared narrowbands and red-edge narrowbands were the most important in the PRISMA and Sentinel-2 models, respectively. PRISMA and Sentinel-2 showed clear complementarity in this study, so future work should explore integrating/fusing these two data sources. The extent of our study and the sample size were relatively small, so additional campaigns should be carried out to confirm the robustness of our results.
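The first evaluation stage can be sketched as follows with scikit-learn's PLSRegression; the data are synthetic placeholders rather than the 2020 campaign measurements, and the weight inspection simply illustrates how influential narrowbands can be identified.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(4)
n_frames, n_bands = 60, 230
spectra = rng.random((n_frames, n_bands))     # placeholder PRISMA reflectance
# Toy yield driven by one band plus noise, standing in for field measurements.
yield_t_ha = 2 + 5 * spectra[:, 120] + 0.3 * rng.normal(size=n_frames)

pls = PLSRegression(n_components=5).fit(spectra, yield_t_ha)
# First-component loading weights flag which narrowbands drive the relationship.
influence = np.abs(pls.x_weights_[:, 0])
print(influence.argsort()[-5:])               # five most influential bands
```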
The Italian Space Agency (ASI) satellite mission PRISMA (PRecursore IperSpettrale della Missione Applicativa) provides an important opportunity for the advancement of satellite hyperspectral data exploitation in a variety of scientific, commercial, and environmental applications. Within this framework, the ASI ‘PRISMA SCIENZA’ call for proposals is aimed at fostering the scientific exploitation of PRISMA data and, at the same time, improving the satellite hyperspectral remote sensing know-how.
The HYPERHEALTH project has the main objective of developing a monitoring system for outdoor human activities that leverages information extracted from PRISMA data in conjunction with images from other satellite missions as well as ground-based sensor data. The ultimate goal is to assess the environmentally induced risks to human health due to humidity, carbon dioxide, other gases or particles (e.g. pollens, allergens), and excessive ultraviolet (UV) radiation. The HYPERHEALTH system is intended to provide useful information for public safety organizations and, in general, for the authorities tasked with ensuring the protection and safety of citizens. Furthermore, by coupling HYPERHEALTH with the development of ad hoc applications, each individual citizen may be given detailed information about environmental conditions, receive safety tips, and be warned of the possible risks connected to outdoor activities.
The HYPERHEALTH project will include the following research activities:
1) Development of novel methods that leverage machine learning to estimate atmospheric constituents from PRISMA data, with specific emphasis on water vapor and carbon dioxide columnar contents.
2) Development of novel methodologies, based on machine learning and a forward modelling approach, to search PRISMA images for surfaces covered by vegetation species that may be allergenic to human beings, with the aim of monitoring their flowering status and, in turn, identifying pollen allergenic risk zones.
3) Analysis and development of techniques to derive albedo from PRISMA images in order to enable analysis of the UV radiation reflected from the surface.
4) Development of methods to fuse/integrate data and images taken at different spatial/temporal scales, exploiting PRISMA data together with data from Sentinel-2, Sentinel-5P, CAMS and SEVIRI; the goal is to augment the richness of the information that can be (jointly) extracted from the data.
5) Analysis and testing of suitable methods for validating HYPERHEALTH system performance.
By bringing methodological advancements to the arena of health-driven environment characterization, the HYPERHEALTH project will have an impact on a variety of fields of interest, such as Air Quality, Natural and Man-Made Hazards, Ecosystem Structure & Composition, and Vegetation & Forestry. Furthermore, the realization of the HYPERHEALTH project will allow for prompt exploitation of the results aimed at ensuring citizens’ health, paving the way towards innovative citizen digital services.
Preliminary results will be presented at the conference.
PRISMA (PRecursore IperSpettrale della Missione Applicativa) is a demonstration spaceborne mission, fully deployed by the Italian Space Agency (ASI). To support the calibration/validation activities of the PRISMA hyperspectral mission, ASI and the National Research Council (CNR) started the PRISCAV project (Scientific CAL/VAL of PRISMA mission) in 2019. The main objective of PRISCAV is the comprehensive characterization of the in-orbit performance of the PRISMA payload in different operational scenarios and the verification of the durability of this performance over time.
To this end, PRISCAV created a network of 12 instrumented sites covering different land uses and surface settings (snow; sea; inland and coastal water; forest and cropland) to obtain independent and traceable in-situ and airborne Fiducial Reference Measurements (FRM) simultaneous with PRISMA acquisitions, in order to assess the required performance of the sensor, data products and processors at the different levels (i.e. Top-of-Atmosphere Level 1 radiances and Bottom-of-Atmosphere Level 2 reflectance standard products).
To date, over 250 PRISMA acquisitions have been collected over the target sites. Ground teams ensured simultaneous land-use classification and appropriate atmospheric characterization. This enabled multiscale spectral matching with ground targets and the assessment of key parameters related to the spectral, spatial and radiometric performance of PRISMA over the mission duration so far, as well as their evolution across the different processor versions. The PRISCAV results obtained to date are highly promising and in line with the mission requirements over the range of surface properties that characterize the investigated sites, confirming the potential of the PRISMA mission for the development of innovative products and new applications in environmental monitoring and Earth observation in general.
The rich information content of hyperspectral satellite images can be exploited to generate a land cover/use classification of the vegetation and, in particular, to determine forest fuel. Because such a classification can be statically mapped to the so-called “fuel models” (associations between a fuel type and its physical parameters), it is the core of the process of creating a Forest Fire Fuel Map product. Such maps are highly relevant for developing fire hazard maps, running fire propagation models, computing vulnerability maps and planning fuel removal practices.
In the framework of the ASI (Italian Space Agency) project “Sviluppo di Prodotti Iperspettrali Prototipali Evoluti” (Contract ASI N. 2021-7-I.0), a prototype processor based on PRISMA (PRecursore IperSpettrale della Missione Applicativa) imagery has been developed for forest fire fuel mapping.
Currently, no training dataset is detailed and accurate enough to be considered suitable for exploiting the spectral information of satellite hyperspectral sensors and for enabling their high discrimination capability. For this reason, two different approaches have been proposed to generate such a training dataset automatically and to make a supervised machine learning approach to forest classification possible.
The first approach relies on an automatic process for refining an existing land cover product, such as the Corine Land Cover. At the European level, the Corine Land Cover, although based on satellite multispectral data, is the most complete land use/land cover classification available, and using it as a reference is a good starting point for creating a dataset to train a Machine Learning (ML) model. However, this kind of layer can be outdated in places, and the dataset must be cleaned by removing outliers. For this reason, exploiting the Corine Land Cover requires an automatic refinement step to detect and remove outliers, based on spectral matching algorithms and robust model fitting methods.
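A minimal sketch of such a spectral-matching refinement, using the spectral angle as the similarity measure and a percentile threshold (both assumptions for illustration; the actual processor combines spectral matching with robust model fitting):

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

# Candidate training pixels for one Corine class (random placeholders here).
rng = np.random.default_rng(5)
pixels = rng.random((500, 230))
mean_spectrum = pixels.mean(axis=0)

# Pixels spectrally far from the class mean are treated as outliers,
# e.g. pixels from outdated land-cover polygons.
angles = np.array([spectral_angle(p, mean_spectrum) for p in pixels])
keep = angles < np.percentile(angles, 90)      # drop the 10% most dissimilar
clean_pixels = pixels[keep]
print(clean_pixels.shape)
```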
The second approach builds a training dataset starting from few but very reliable ground truths. Indeed, reliable ground truth data on forest types are usually insufficient (few and sparse within the area of interest covered by a hyperspectral image footprint) for training machine learning models, so a procedure for exploiting PRISMA images to increase the dataset size had to be developed. In this respect, a newly proposed methodology has been adopted, which enlarges the dataset with similar pixels for each class by exploiting spectral similarity measures.
Once the training dataset has been generated, using one or both of the approaches above, different ML algorithms (random forest, SVM, etc.) are evaluated in order to design and build the model for forest classification.
All the described processes, i.e. training set generation, learning and prediction, are executed for each PRISMA scene. To this end, the whole generated training dataset is automatically split into two subsets with different ratios: the first is used for training the model and the second for testing, in order to control the generalization capability of the trained model and the quality of the prediction results.
The obtained forest classification map is used to generate the Fire Fuel Map by associating each classified pixel with the standard (Anderson) fuel model representing its proneness to fire. Some attributes directly related to the fuel model class are also provided, such as the fuel load of living and dead components, moisture of extinction, flame height, and propagation rate.
In order to assess its performance, the developed algorithm has been tested, with very promising results, on several PRISMA images acquired over Latium and Sardinia (Italy).
The PRISMA (PRecursore IperSpettrale della Missione Applicativa) satellite mission of the Italian Space Agency (ASI) provides hyperspectral Earth observation data thanks to sensors covering the visible to shortwave infrared portion of the spectrum (from 400 to 2500 nm, 240 bands in total) and a panchromatic camera with higher spatial resolution. The satellite has been operational since 2019, with an expected mission lifetime of 5 years. The PRISMA acquisition strategy is based on user requests and a mission background, providing 30 km x 30 km images with 30 m nominal spatial resolution for the hyperspectral data. Its observational capabilities are near-global and depend on the solar illumination conditions.
Imaging spectroscopy has multiple applications, ranging from agriculture and air quality to mineral exploration and ecosystem monitoring, among many others, and recent works have already highlighted the potential of PRISMA data for such purposes, especially in synergy with other missions (such as multispectral and optical missions). In this work we focus on the possibility of measuring key biophysical parameters in coastal and oceanic environments. We discuss a novel approach for the simultaneous retrieval of atmospheric and marine parameters, covering atmospheric aerosol properties as well as ocean water characteristics such as chlorophyll concentration and the presence of sediments.
The methodology relies on an inversion based on a fully coupled ocean-atmosphere radiative transfer model, with the aim of providing output maps at high spatial resolution, and it attempts to reduce computational costs by testing different spatial and spectral samplings. The procedure is designed for seamless processing of both open-ocean waters and optically complex coastal acquisitions, as the high spatial resolution of PRISMA allows it to capture fine-scale features. Since the retrieval procedure is computationally demanding, novel algorithmic methods based on non-parametric approaches are discussed. These statistical methods have already been applied successfully to the retrieval of land biophysical parameters (such as vegetation properties), achieving significant computational savings compared to more traditional procedures based on full radiative transfer models, while preserving good accuracy and offering flexibility for extrapolation.
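One way to realize such a non-parametric surrogate is a Gaussian process emulator trained on a small set of radiative transfer simulations; the sketch below uses a toy one-parameter forward model in place of the coupled ocean-atmosphere code, so all quantities are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(6)
chl = rng.uniform(0.01, 10.0, 80)[:, None]        # chlorophyll (mg m-3) samples
rho = (np.exp(-0.3 * chl) + 0.05 * chl).ravel()   # toy "simulated" TOA signal

# The emulator learns the forward mapping from a handful of simulations and
# then replaces the expensive radiative transfer model in the retrieval loop.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                              normalize_y=True).fit(chl, rho)
pred, std = gp.predict(np.array([[2.5]]), return_std=True)
print(pred, std)                                  # fast surrogate + uncertainty
```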
The PRecursore IperSpettrale della Missione Applicativa (PRISMA) is an Italian hyperspectral satellite mission launched in March 2019. Its VNIR-SWIR sensors cover the wavelength range from 400 nm to 2500 nm in ~240 spectral channels, with a spectral resolution of ~12 nm and a spatial resolution of 30 m. PRISMA data offer a fast and cost-effective way to meet industry demand for efficient prospecting and mineral exploration techniques. In this study, atmospherically corrected and orthorectified L2D VNIR-SWIR PRISMA products covering mineral deposits in the Iberian Pyrite Belt (IPB) in Spain are evaluated regarding their potential to detect variations in mineral composition based on the wavelength shift of the mineral-diagnostic absorption features. Additionally, the capability of the L2D data to identify iron hydroxides, sulphates, carbonates and phyllosilicates is investigated. Field-based hyperspectral AisaFENIX imaging data of the Los Frailes open pit in the east of the IPB are used for validation.
The IPB, located in the south of Portugal and Spain, is one of the world’s largest polymetallic massive sulphide complexes, originally containing >1700 Mt of massive sulphides. The massive sulphides are hosted by a Volcano-Sedimentary Complex (VSC) formed in a basinal facies during the Variscan orogeny. The VSC overlies the Phyllite-Quartzite Group and is overlain by the Culm Group. The hydrothermal alteration zonation associated with massive sulphide ore bodies comprises an inner chlorite-rich zone and a peripheral sericite-rich zone. Other VNIR-SWIR-active minerals, such as jarosite, calcite, gypsum, dickite/kaolinite and iron hydroxides, occur in the adjacent rocks.
The analysis of the hyperspectral data is performed with the multi-range spectral feature fit (MRSFF), to detect mineral occurrences, and with the Wavelength Mapper (ITC, Netherlands), to determine absorption feature depths and absorption feature wavelength shifts. The detection of white mica and its mineral chemistry, based on the wavelength position of the Al-OH absorption maximum, gives reliable results in accordance with the field-based analysis results of the AisaFENIX data. The identification of the Fe-OH absorption feature of chlorite and the corresponding wavelength shift of the absorption maximum due to Mg substitution in chlorite is less clear and is influenced by the higher noise levels in the longer SWIR wavelength ranges of the PRISMA data. The mapping of Fe-bearing minerals such as jarosite and goethite coincides with the field-based mineral analysis results, although the identification is hampered by the residual of the water absorption band at 940 nm. The AisaFENIX data of the Los Frailes open pit show the occurrence of dickite/kaolinite and gypsum only in small areas of the northern pit face. The medium spatial resolution of the PRISMA sensor cannot reliably capture these small-scale occurrences. However, a clear identification of these minerals is expected in areas where they occur over larger extents, as shown in Heller Pearlshtien et al. 2021. The calcite identification is also affected by the decreasing signal-to-noise level in the longer SWIR wavelength region and is therefore challenging to capture.
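The principle behind wavelength mapping, locating an absorption-feature position at sub-band precision, can be sketched by fitting a parabola through the continuum-removed minimum and its two neighbours; this illustrates the idea only and is not the ITC Wavelength Mapper implementation.

```python
import numpy as np

# Synthetic continuum-removed spectrum with an Al-OH-like feature near 2207 nm.
wl = np.linspace(2150.0, 2250.0, 21)               # band centres (nm)
cr = 1.0 - 0.2 * np.exp(-((wl - 2207) / 15) ** 2)  # continuum-removed values

i = cr.argmin()                                    # coarsest-band minimum
(x0, x1, x2), (y0, y1, y2) = wl[i-1:i+2], cr[i-1:i+2]

# Vertex of the parabola through the three points around the minimum gives
# the feature wavelength at finer-than-band-spacing precision.
denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
a = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
b = (x2**2 * (y0 - y1) + x1**2 * (y2 - y0) + x0**2 * (y1 - y2)) / denom
print(-b / (2 * a))                                # ~2207 nm
```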
This study shows the potential of PRISMA L2D data as a fast and cost-effective tool to detect characteristic minerals associated with massive sulphides in the Iberian Pyrite Belt. The results show the capability to identify VNIR-SWIR-active minerals and even subtle wavelength shifts of absorption maxima, which are indicators of changing mineral chemistry. However, the detection is influenced by residuals of atmospheric absorptions and a lower signal-to-noise ratio in the longer SWIR wavelength region.
earthbit PRISMA edition is a desktop software application aimed at the quick management and full visualization of Earth Observation data products, with a vertical specialization in the interaction with and manipulation of PRISMA hyperspectral mission products.
The user is given a simple interface enabling straightforward interaction with the data and metadata composing the HDF data files. All spectral bands can be viewed with one click, and metadata can be searched, interpreted and plotted while the complexity of the file structure remains transparent. earthbit also adds functions for data interpretation, such as signature visualization of each product from each band, on-the-fly pixel geolocation on a WGS84 map, metadata overview, and visualization of the additional datasets or plotting of vector attributes.
earthbit's next release will also include a Python API able to act as a bridge between PRISMA data and standard Python libraries. It will also allow the integration of external plug-ins (Python and C++) and the implementation of interactive processing workflows with real-time display of results.
The earthbit development environment was born as a tool able to manipulate very big EO data sources, such as SAR and hyperspectral images, together with image streams (e.g., live video from drones) in real time. It allows the user to create, configure and execute massively parallel processing tasks (specific to satellite imagery or science data) on big datasets by leveraging the power of a proprietary map/reduce framework.
Its human-machine interface enables the user to easily interact with algorithms, image data and unstructured metadata, and to exploit the power of heterogeneous computing devices such as modern multi-core CPUs, GPUs and accelerators (FPGAs and ASICs with OpenCL support). These technologies make it possible to reach the following benchmarks:
• Load ~4GB image from disk to memory in less than 15s.
• Create image pyramids on the fly, with in-memory caching of tiles.
• Maximize the use of Solid State Disks.
• Execute real-time image filtering at about 400fps on GPU.
It supports simultaneous visualization of different images, which can be navigated in co-registration mode, providing real-time graphical operations on them.
With earthbit, the user can:
• Load datasets and attributes from hierarchical and generic data files (HDF5, HDF-EOS, TIFF, JPEG); a generic sketch of this kind of access follows this list
• Visualize and process big images and datasets
• Execute processing and visualization algorithms on multi-core CPUs and discrete GPUs, thanks to a proprietary acceleration engine integrating the Khronos OpenGL and OpenCL APIs for parallel applications.
• Plug in their own algorithms for image processing, exploiting the earthbit SDK features
• Use an editor for Python scripting and product processing, with support for the creation of Python plugins
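For orientation, programmatic access to a hierarchical PRISMA-style HDF5 product of the kind listed above can be sketched with the generic h5py library; this is not the earthbit SDK or its Python API, and the file name and internal structure are placeholders depending on the product in hand.

```python
import h5py

def walk(name, obj):
    """Print every dataset found in the file with its shape and dtype."""
    if isinstance(obj, h5py.Dataset):
        print(name, obj.shape, obj.dtype)

# Placeholder file name; real PRISMA products are distributed as HDF-EOS5.
with h5py.File("PRS_L2D_example.he5", "r") as f:
    f.visititems(walk)                 # list all datasets in the hierarchy
    for key, value in f.attrs.items():
        print(key, value)              # root-level metadata attributes
```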
The earthbit SDK provides dynamic linking libraries for different operating systems (Microsoft® Windows 10 32-bit & 64-bit, Red Hat Linux, Ubuntu Linux, CentOS 7, Gentoo Linux, Apple® macOS Sierra and Mac OS X) and runs on Intel/AMD x86 and x86_64 and ARM ARMv7-A and ARMv8-A processor architectures.
One of the greatest challenges of the 21st century is climate change, and emissions are among its biggest drivers. With the ambitious aim for Europe to become the world's first carbon-neutral continent by 2050 and to make the European Green Deal a reality, it will be crucial to reduce these emissions within the next decade. Here the energy sector plays a key role, as it accounts for 75% of the emissions. With fossil fuels still supplying more than 70% of Europe's and 80% of the world's energy, and the goal of increasing the share of energy consumed in Europe from renewable sources to 40% by 2030, the energy sector must undergo significant changes. Energy system modelling looks for solutions that lead to climate-neutral and cost-effective energy systems by evaluating the current state of energy infrastructures and modelling different change scenarios. The scientific analyses needed consider a wide range of evaluation criteria. In addition to assessing land potential, identifying suitable sites, considering environmental parameters, balancing land use interests, and capturing trends and impacts on landscaping, a flexible design of the generation, transportation, redistribution and storage of energy between sectors (gas, electricity, heat and hydrogen) is key for a sustainable implementation in line with climate targets. Therefore, the availability of high-quality and up-to-date data on the existing energy infrastructure is an important component for managing the energy transition, but at the same time one of its greatest challenges, as these datasets are often not (freely) available or are of poor quality (i.e., incomplete, contradictory, inconsistent).
Against this background, satellite-based Earth Observation represents an increasingly valuable resource for closing this gap, as the satellite data available today not only have a very high spatial resolution of 10 m or finer, but also cover the Earth with very high temporal resolution. Furthermore, state-of-the-art machine learning (ML) techniques, including deep learning (DL), have proven extremely valuable for a variety of applications in different areas of remote sensing and have shown great potential for analysing large-scale challenges such as urbanization or climate. However, thus far no approaches have been proposed in the literature to automatically and effectively map energy infrastructure types on an operational basis by combining C-band SAR and optical satellite imagery. Hence, in our study we present a novel and robust system based on state-of-the-art deep neural networks (DNN) for generating accurate maps of single wind turbine (WT) installations in wind power plants by exploiting multi-temporal statistics of EO-based products from Sentinel-1 and Sentinel-2. Specifically, for each pixel we extract temporal statistics (e.g., temporal maximum, minimum, mean, standard deviation) of different S2-based spectral indices (e.g., vegetation index, built-up index, water index, etc.) derived after cloud masking, and S1 temporal statistics of the backscattering intensity for different polarizations and pass types. To reduce processing time, only those S1/S2 bands and indices are used in the prediction model generation which have been identified as the most suitable for detecting energy infrastructure types, on the basis of the reference data and an evaluation of different common separability metrics.
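The per-pixel temporal statistics used as predictors can be sketched as follows, assuming a cloud-masked NDVI stack with NaN where a pixel was screened out; the data are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)
# Cloud-masked NDVI stack: (time, rows, cols), two years of monthly values.
ndvi = rng.uniform(-0.2, 0.9, size=(24, 64, 64))
ndvi[rng.random(ndvi.shape) < 0.2] = np.nan   # simulated cloud gaps

# Temporal maximum, minimum, mean and standard deviation per pixel,
# ignoring cloud-masked observations.
features = np.stack([
    np.nanmax(ndvi, axis=0),
    np.nanmin(ndvi, axis=0),
    np.nanmean(ndvi, axis=0),
    np.nanstd(ndvi, axis=0),
])
print(features.shape)                         # (4, rows, cols) predictor layers
```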
The success of DNNs is greatly influenced by the availability and quality of training data. While there are specific DNN training databases for various applications, there is none for energy infrastructure (i.e., wind, solar, coal power plants) that would allow carrying out this task at a global scale. Therefore, existing databases on wind turbines are filtered and exploited to manually collect training and validation samples on a global scale. By collecting training data from locations all over the world, the variety of construction characteristics that exists for the different infrastructure types is covered, and hence a robust prediction model and transferability for future global analyses are assured. After training samples were identified and labelled, image chips were prepared for the predictors. For mapping the wind turbines, a convolutional neural network (CNN) in object detection mode has been employed, which has the advantage that multiple wind turbines in an image patch can also be detected. This is of particular relevance for the WT detection task, where, due to the small scale of the individual installations, it is likely that more than one turbine is encountered per image patch. The performance of the individual models has been quantitatively assessed by means of state-of-the-art scoring and evaluation metrics, specifically: accuracy, precision, recall, F1-score as well as Intersection over Union (IoU) and mean average precision (mAP).
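For reference, Intersection over Union, the core quantity behind the detection scores listed above, can be computed for axis-aligned boxes as in this small self-contained sketch:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection is commonly counted as a true positive when IoU >= 0.5
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143
```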
The proposed system holds great potential, as it makes it possible to obtain maps of energy infrastructure of higher quality and at greater scale than the state of the art (SOA), which can ideally be employed as a ready-to-use tool for energy modelers.
On April 29th 2021, the Earth Observation (EO) satellite Pléiades Neo 3 was successfully launched. On August 10th, its twin sister, Pléiades Neo 4, joined it in orbit. This marks the entry of European satellites into the 30 cm imagery market. In 2022, Pléiades Neo 5 and 6 will be launched to complete the 4-satellite constellation.
Alright, but what is Pléiades Neo? It consists of 4 EO satellites, providing 30 cm optical imagery, entirely funded and operated by Airbus Defence and Space. After more than 30 years of experience in satellite imagery services, it seemed like the logical way forward. However, Pléiades Neo is also the result of a whole new approach in terms of image quality and satellite capability. It has required rethinking the way we design satellites and exploit their services to answer the most demanding requirements in the field of Defence and Security, ensuring the safety of operators and civilians around the world.
Highest precision with massive acquisition
Firstly, Pléiades Neo provides 30 cm native resolution, meaning that the image shot by the satellite is the actual image you receive in terms of resolution. The image therefore provides an incredible amount of detail that doesn't appear on lower-resolution imagery: for instance, you can distinguish between light and armoured vehicles, see road markings, marks in the sand and gatherings of people, and tell animals and people apart thanks to their shadows. The geolocation accuracy, which measures the exact placement of an object on an image, is below 5 m CE90. In terms of acquisition capacity, the constellation is able to acquire up to 2 million square kilometres every single day: two million square kilometres at 30 cm resolution, fully dedicated to customers, every day.
Introducing intraday revisit
It is also the first time Airbus provides an intra-day revisit capability within the same constellation. Indeed, depending on the incidence angle of the satellite and the latitude of the Area Of Interest (AOI), Pléiades Neo can provide between 2 and 4 revisits per day. More particularly, tests conducted over Tripoli, Libya, have shown a minimum of 2 revisits per day and a maximum of 3, providing a total of 64 revisits over 28 days.
Ultimate reactivity tasking and image delivery
Work plans are updated every time a satellite enters into S-band contact, i.e. as often as every 25 minutes (an orbit lasts 100 minutes, or 1 h 40), or 15 times per day per satellite. This represents around 60 plans uploaded every day at the constellation level.
Work plans are also pooled. This means that when an image is to be collected by one satellite, the related acquisition request is removed from the tasking plans of the other satellites.
These multiple and synchronised work plans per day enable easy handling of last-minute tasking requests, which can be placed up to 15 minutes before S-band contact, as well as integration of the latest weather information, for an improved data collection success rate.
In addition, Airbus Defence and Space's network of ground receiving stations enables all-orbit contact, ensuring near-real-time performance worldwide and rapid data access, and thus guarantees the highest standards in terms of service reactivity.
Images are downlinked at each orbit, automatically processed and quickly delivered to the customer, allowing faster response when facing emergency situations.
New spectral bands
In terms of spectral bands, Pléiades Neo will acquire simultaneously the panchromatic channels and 6 multispectral bands, which are:
- Deep Blue
- Blue
- Green
- Red
- Red-Edge
- Near Infrared
Red-Edge and Deep Blue are two additional bands compared to the predecessor mission Pléiades, unveiling complementary information for vegetation and bathymetry applications, respectively.
Finally, the tasking of a VHR satellite orbiting 600 km above the Earth has never been easier. OneAtlas, our digital platform, allows users to draw their AOI, select Pléiades Neo as the optical sensor and choose the date of acquisition, all while accessing the whole Airbus imagery archive.
By providing more data, more detail, more rapidly and in a more accessible way, Pléiades Neo becomes the best support for numerous markets, and in particular for European and national Defence and Security missions: from strategic monitoring thanks to increased revisit capability, to time-sensitive mission preparation thanks to reactive tasking and image delivery.
The Sentinel-3 mission of ESA and the European Commission is one of the elements of the Copernicus programme in response to the requirements for operational and near-real-time monitoring of ocean, land and ice surfaces over a period of 20 years. Its main objectives are to measure sea surface topography, sea and land surface temperature, and ocean and land surface colour with high accuracy and reliability to support ocean forecasting systems, environmental monitoring and climate monitoring. With two optical instruments (SLSTR, OLCI) and the SAR Radar Altimeter (SRAL), accompanied by MWR, DORIS and LRS, these objectives are pursued. Two spacecraft have been launched, model A in 2016 and model B in 2018, and are meeting all their expectations in orbit. The Sentinel-3 mission is jointly operated by ESA and EUMETSAT.
The developments of the recurrent payload models C and D, based on the existing designs, were started in 2016, prior to the launch of Sentinel 3A. The exact launch dates of the C and D models are yet to be formally agreed with the European Commission but are currently planned to take place within the timeframe 2024 to 2028, to ensure a mission continuity of 20 years.
Lessons learned from the previous model developments, new specific requirements (e.g. compatibility with GNSS Galileo bands) as well as the in-orbit commissioning phases have been taken into consideration. Depending on the instrument, this has led to modifications at various development levels: design, manufacturing, assembly, and calibration.
In this paper we present the current status and development highlights of the payload models C&D, the main differences to the previous models, and planned additional activities for further improvements to the mission performance. In the case of SLSTR (Sea Land Surface Temperature Radiometer) and OLCI (Ocean Land Colour Instrument) an overview and comparison of the pre-launch measured performances achieved so far for all models (A, B, C&D) will be presented.
Airbus Intelligence in the UK has been providing Vision-1 optical VHR data to the commercial market since 2019, and Airbus has now signed a contract with ESA to ensure that these data can also be provided to ESA Copernicus data users via Additional (ADD) datasets within the ESA Copernicus Programme.
The ESA portfolio of commercial missions contributing to Copernicus (CCMs) is already large, covering SAR and optical data in seven resolution classes from VHR-1/VHR-2 (resolution < 1 m and < 4 m) to LR (resolution > 300 m). The global appetite for high-quality Earth observation data at very high resolution is increasing exponentially, especially in the VHR-1 class (resolution < 1 m).
Vision-1 imagery consists of 4-band multispectral and panchromatic VHR EO data at a resolution of up to 0.87m acquired by the Surrey Satellite Technology Limited (SSTL) S1-4 Imager. With its recent approval from the Earthnet Data Assessment Pilot (EDAP), and subsequent recommendation for selection as an ESA TPM, Vision-1 has also been accepted into the Copernicus family.
Airbus in the UK has a long history of providing high-quality medium resolution data to global users, commercially and otherwise (e.g. through ESA programmes) via the DMC constellation satellites.
As a VHR optical mission, Vision-1 data can be used across a number of potential applications including precision agriculture, land monitoring, maritime surveillance and infrastructure monitoring, among others.
High resolution monitoring of agricultural fields can give detailed information to farmers over and above the information provided by lower resolution satellites. This can be useful to smallholders, maintaining a large number of smaller fields, as well as providing more detail to larger scale agri-business, allowing them to achieve greater operational efficiencies.
Earth observation data is also becoming an increasingly important tool in the monitoring of ships and other maritime vessels, with a range of applications including fisheries management and monitoring of borders and shipping corridors.
We will present the Vision-1 data offering to the Copernicus Service Providers (CSPs) via the ADD datasets. We will describe the contributions that this mission has already made to the Copernicus CORE VHR_IMAGE_2021 dataset and highlight the potential of Vision-1 imagery to contribute to other Copernicus projects.
As the number of VHR satellite missions grows, so does the level of interest in potential new applications, and we are excited for Vision-1 to become a part of their development via this programme.
With increasing global temperature and growing human population, our home planet is suffering from extreme weather events such as intense rain, floods and droughts and related landslides, rising sea level, and an ever-increasing stress on freshwater availability. While there is a significant body of work on the sources and implications of climate change, analyzing and predicting the impacts and effects on water resources and localized flooding events is still non-trivial. Water resources science is multidisciplinary in nature, and it not only assesses the impact from our changing climate using measurements and modeling, but it also offers science-guided, data-driven decision support. While there have been many advances in the collection of observations, reflected in the fast increase in the Earth Observations archive, as well as in forecast modeling, there is no one measurement or method that can provide all the answers.
The idea behind Digital Twin (DT) is to establish a virtual representation of a system that spans its lifecycle, is updated from real-time data, and uses simulation, machine learning and reasoning to help decision-making. Earth System Digital Twin (ESDT) is an emerging concept that mirrors the Earth Science System to not only understand the current condition of our environment or climate, but also to be able to learn from the environment by analyzing changes and automatically acquire new data to improve its prediction and forecast (Fuller et al. 2020).
The NASA Advanced Information Systems Technology (AIST) Integrated Digital Earth Analysis System (IDEAS) project aims to establish a comprehensive science platform that offers decision makers science-driven solutions to tackle the global and local impacts of climate change. For validation and demonstration of the IDEAS architecture, the project tackles one of the most fundamental Earth Science challenges, related to water cycle science and flood detection and monitoring. As a system of systems, IDEAS brings together advanced technologies and science investments to enable big data analytics, AI/ML predictions, and numerical model simulations from three NASA centers, the Jet Propulsion Laboratory (JPL), Goddard Space Flight Center (GSFC), and Langley Research Center (LaRC), along with various observational measurements to enable comprehensive science analysis for actionable predictions. In addition to leveraging NASA technology and data assets, IDEAS is partnering with the Space Climate Observatory (SCO)'s FloodDAM effort for science-driven, federated monitoring, detection and analysis of flood events. As a multi-agency and multi-center Digital Twins effort, the project is tasked to leverage and enhance emerging DT standards to promote interoperability and encapsulation of local infrastructure and technology implementation.
Synthetic Aperture Radar (SAR), with its capability of imaging day or night, ability to penetrate dense cloud cover, and suitability for interferometry, is a robust dataset for event/change monitoring. SAR data can be used to inform decision makers dealing with natural and anthropogenic hazards such as floods, earthquakes, deforestation and glacier movement. However, EO SAR data has only recently become freely available with global coverage, and requires complex processing with specialized software to generate analysis-ready datasets. Furthermore, processing SAR is often resource-intensive, in terms of computing power and memory, and the sheer volume of data available for processing can be overwhelming. For example, ESA's Sentinel-1 has produced ~10PB of data since launch in 2014. Even subsetting the data to a small scientific area of interest can result in many thousands of scenes, which must be processed into an analysis-ready format.
The Alaska Satellite Facility (ASF) Hybrid Pluggable Processing Pipeline (HyP3) was developed to provide cloud-native processing of Sentinel-1 SAR data to the ASF user community at no cost to users. Computing is done in parallel for rapid product generation, easily producing hundreds to thousands of products an hour. HyP3 is integrated directly into Vertex, ASF's primary data discovery tool, so users can easily select an area of interest on the Earth, find available SAR products, and click a button to send them (individually, or as a batch) to HyP3 for Radiometric Terrain Correction (RTC), Interferometric SAR (InSAR), and more. Each process provides options to customize the processing and final output products, and provides metadata-rich, analysis-ready final products to users. In addition to the Vertex user interface, HyP3 provides a RESTful API and a python software developers kit (SDK) to allow programmatic access and the ability to build HyP3 into user workflows.
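As an illustration of the programmatic route, the following sketch uses the hyp3_sdk Python package to submit a Radiometric Terrain Correction job; the credentials and granule name are placeholders, and exact options should be checked against the HyP3 documentation rather than taken from this sketch.

```python
from hyp3_sdk import HyP3

# Authenticate with NASA Earthdata Login credentials (placeholders here)
hyp3 = HyP3(username="your_user", password="your_pass")

# Submit an RTC job for a Sentinel-1 granule found e.g. via a Vertex search;
# the granule name below is a placeholder, not a real product identifier
granule = "S1A_IW_GRDH_EXAMPLE_GRANULE"
job = hyp3.submit_rtc_job(granule, name="rtc-example")

# Poll until processing finishes, then fetch the analysis-ready products
job = hyp3.watch(job)
job.download_files()
```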
HyP3 is an open-source, and openly developed, processing platform developed for use in the Amazon Web Services (AWS) cloud. It has been designed to have minimal overhead costs (serverless design), to be easily deployable using CloudFormation templates (infrastructure as code), and to allow scientists and users to develop new processing plugins. Due to these features, HyP3 has increasingly been used to provide project- (grant-) specific processing capabilities not limited to SAR data. These science support projects typically utilize custom deployments into project-based AWS accounts, allowing science teams to quickly and easily develop new algorithms/products, control processing/product access, provide project-based cost accounting, and leverage AWS cloud credits provided by funding agencies, all without needing to be cloud architects/engineers.
The amount of data that must be processed in satellite missions is increasing over time, directly affecting the hardware resources and time required to carry out this processing. With more than 11 years in orbit, the SMOS mission has a lot of over-sampled data, which implies a more intensive use of the CPU and greater use of disk space if the processing is done without any type of data management. For this reason, it is increasingly necessary to optimize the resources involved in the processing of large volumes of data. Such optimizations include minimizing the processing time, achieving maximum efficiency of computational resources, and doing a good management of the generated data, both to make it more accessible and to optimize the disk space it demands.
This work presents different techniques that can be applied when designing software architectures for the particular case of the SMOS Sea Surface Salinity data processing. A study is made on how the data can be aggregated and ordered in the first stages of processing to reduce the processing time of the following stages and the disk usage of intermediate products.
The SMOS measurements can be easily divided into smaller independent processing units (such as a half-orbit or a snapshot, which is even smaller and still independent of the other snapshots). The granularity of the data allows the processing to be divided into very small pieces that can be executed in parallel, making optimal use of CPU resources and reducing the total processing time. Disk operations, such as reading and writing files, are also a big part of the processing time. Data has been arranged in a way that minimizes disk operations (avoiding multiple reads of the same file).
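A minimal sketch of this divide-and-parallelize pattern, assuming hypothetical per-snapshot files and a placeholder retrieval function (this is an illustration of the idea, not the Barcelona Expert Center chain itself):

```python
from multiprocessing import Pool

def process_snapshot(snapshot_path):
    """Placeholder for the per-snapshot salinity retrieval; each snapshot
    is independent of the others, so the workers need no coordination."""
    ...  # read the input once, compute, write an aggregated output

# Hypothetical list of independent processing units
snapshots = ["orbit_001/snap_0001.nc", "orbit_001/snap_0002.nc"]

if __name__ == "__main__":
    # One worker per core; independent units are dispatched in parallel
    with Pool() as pool:
        pool.map(process_snapshot, snapshots)
```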
Preliminary results show a 20% improvement in computational time and a 40% reduction in the required disk space with respect to the current implementation of the Barcelona Expert Center internal data processing chain.
Land surface temperature (LST) is widely recognized as an important variable for the description and understanding of surface processes. The temperature of the interface between the soil and the atmosphere is a crucial element of the surface energy balance, determining radiation loss and being closely linked to the partitioning between latent and sensible heat fluxes. As such, satellite-derived LST is being increasingly used in various applications related to the assessment of land surface conditions, including the assessment and improvement of land surface schemes in numerical weather prediction models, in the estimation of evapotranspiration, and in the monitoring of plant water stress or drought extent.
The Landsat series of satellites have the potential to provide LST estimates at high spatial resolution that are particularly appropriate for local and small-scale studies. Numerous LST algorithms for the Landsat series have been proposed. While most algorithms are simple to implement, they require users to provide the necessary input data and calibration coefficients, which are generally not readily available. Some datasets are available online, however, they generally require users to be able to handle large volumes of data. Google Earth Engine (GEE) is an online platform created to allow remote sensing users to easily perform big data analyses without the need for computation resources. All Landsat Level-1 and 2 data are directly available to GEE, including top-of-atmosphere (TOA) and surface reflectance (SR) data. However, until now high resolution LST datasets from Landsat have been unavailable in GEE.
Here we describe a methodology for deriving LST from the Landsat series of satellites (i.e. Landsat 4, 5, 7 and 8) which is fully implemented in GEE. We provide a code repository with all the GEE scripts necessary to compute LSTs from Landsat data. The repository allows users to perform any data analysis they require within GEE without the need to store data locally. The LST is computed using the Statistical Mono-Window (SMW) algorithm developed by the Climate Monitoring Satellite Application Facility (CM-SAF). Besides Landsat data, the LST production code makes use of two other datasets available within GEE: atmospheric data from re-analyses of the National Center for Environmental Prediction (NCEP) and National Center for Atmospheric Research (NCAR) and surface emissivity from the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Emissivity Database (ASTER GED) developed by the National Aeronautics and Space Administration’s (NASA) Jet Propulsion Laboratory (JPL).
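As a hedged illustration of what such a computation looks like in the GEE Python API, the sketch below applies the SMW form LST = A·Tb/ε + B/ε + C to a Landsat 8 TOA brightness-temperature band. The emissivity constant and the A, B, C coefficients are dummy placeholders (in the actual method they come from ASTER GED and from CM-SAF look-up tables keyed on total column water vapour); this is not the repository's own code.

```python
import ee
ee.Initialize()  # requires prior Earth Engine authentication

# One Landsat 8 TOA scene; band 'B10' holds brightness temperature [K]
image = (ee.ImageCollection('LANDSAT/LC08/C02/T1_TOA')
         .filterDate('2020-07-01', '2020-08-01')
         .first())

tb = image.select('B10')
emissivity = ee.Image.constant(0.97)  # placeholder; ASTER GED-based in practice

# Statistical Mono-Window: LST = A*Tb/em + B/em + C, with coefficients taken
# from look-up tables per water-vapour class (dummy values below)
A, B, C = 1.02, -5.0, 10.0  # hypothetical coefficients for one TCWV class
lst = tb.expression(
    'A * tb / em + B / em + C',
    {'tb': tb, 'em': emissivity, 'A': A, 'B': B, 'C': C})
```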
The ever-increasing amount of medium and high spatial resolution satellite data provides new opportunities for analyzing changes in land cover and land surface condition, but large data volumes also present new challenges for scientists and remote sensing analysts. There have been tremendous efforts in recent years to develop tools and infrastructure for processing big Earth Observation data. However, big data workflows are not easily adapted, which can cause a software- and/or hardware-infrastructure lock-in. Satellite data analysis is tightly coupled to both specific input data, processing back-ends and execution environments, which makes their re-use with changing inputs and on different platforms cumbersome. Furthermore, satellite data analysis workflows often include long and complex tasks with heterogeneous resource requirements regarding the computational infrastructure they are run on, which often leads to hard-wired implementations. Alternatively, workflow engines have recently emerged which address the issues of portability (not bound to specific infrastructure), adaptability (automatically adapt to varying infrastructure and data) and dependability (constraints to warrant correct execution) (Leser et al. 2021). Workflow engines are widely used in other computation-heavy sciences such as bioinformatics, but the concept is still new in remote sensing.
The overall goal of our work is to implement and test data analysis workflows for analyzing land cover changes in the workflow engine Nextflow (Di Tommaso et al. 2017). Specifically, the objectives are: (1) to map annual land cover between 2000 and 2020 across Germany using integrated Landsat and Sentinel-2 time series and the harmonized European-wide Land Use and Coverage Area frame Survey (LUCAS) (d'Andrimont et al. 2020); (2) to develop Nextflow workflows that leverage a broad range of existing, already widely used open source tools and programs; and (3) to evaluate the execution performance of Nextflow workflows. For preprocessing, we leverage the capabilities of the Framework for Operational Radiometric Correction for Environmental monitoring (FORCE) (Frantz 2019), which includes geometric matching between scenes, atmospheric and terrain correction, cloud masking and BRDF correction. For further processing, including the generation of higher-level ARD products, computation of spectral indices and spectral-temporal metrics, training of machine-learning algorithms and map prediction, we focus more closely on other popular tools such as the command-line interface of QGIS (QGIS Development Team 2021), the EnMAP-Box (EnMAP-Box Developers 2019), R/Python extensions and other open source libraries and software. These tools were chosen due to their widespread usage in the EO community, combined with the aforementioned ability of workflow engines to integrate pieces of existing analysis pipelines from various sources. Our approach generates three key results: Firstly, we develop a method to map historic land cover time series from national to continental scale. Secondly, a modular workflow tailored towards the analysis of big Earth Observation data with a low barrier for reusability is built. Thirdly, we generate a better understanding of the needs specific to diverse remote sensing analysis tasks when implemented in a workflow engine like Nextflow, and thus complement existing findings (Lehmann et al. 2021).
References
Leser, U., Hilbrich, M., Draxl, C., Eisert, P., Grunske, L., Hostert, P., Kainmüller, D., Kao, O., Kehr, B., Kehrer, T., Koch, C., Markl, V., Meyerhenke, H., Rabl, T., Reinefeld, A., Reinert, K., Ritter, K., Scheuermann, B., Schintke, F., Schweikardt, N., and Weidlich, M. (2021). The Collaborative Research Center FONDA. Datenbank-Spektrum 1610-1995. doi: 10.1007/s13222-021-00397-5.
Lehmann, F., Frantz, D., Becker, S., Leser, U., and Hostert, P. (2021). “FORCE on Nextflow: Scalable Analysis of Earth Observation data on Commodity Clusters”. In: Proceedings of the CIKM 2021 Workshops. Online.
Di Tommaso, P., Chatzou, M., Floden, E. W., Barja, P. P., Palumbo, E., and Notredame, C. (2017). Nextflow enables reproducible computational workflows. Nat Biotechnol, 35, 316-319. doi: 10.1038/nbt.3820
d’Andrimont, R., Yordanov, M., Martinez-Sanchez, L., Eiselt, B., Palmieri, A., Dominici, P., Gallego, J., Reuter, H. I., Joebges, C., Lemoine, G., and van der Velde, M. (2020). Harmonised LUCAS in-situ land cover and use database for field surveys from 2006 to 2018 in the European Union. Scientific Data 7.1, p. 352. issn: 2052-4463. doi: 10.1038/s41597-020-00675-z
Frantz, D. (2019). FORCE—Landsat + Sentinel-2 Analysis Ready Data and Beyond. Remote Sensing 11.9, p. 1124. doi: 10.3390/rs11091124.
QGIS Development Team (2021). QGIS Geographic Information System. QGIS Association. https://www.qgis.org .
EnMAP-Box Developers (2019). EnMAP-Box 3 - A QGIS Plugin to process and visualize hyperspectral remote sensing data. https://enmap-box.readthedocs.io
The continuously increasing amount of long-term and historic data in EO facilities, in the form of online datasets and archives, makes it necessary to address technologies for the long-term management of these data sets, including their consolidation, preservation, and continuation across multiple missions. The management of long EO data time series of continuing or historic missions, with more than 20 years of data available already today, requires technical solutions and technologies which differ considerably from those exploited by existing systems.
The ESA project LOOSE (Technologies for the Management of LOng EO Data Time Series) enables investigating, testing and implementing new technologies to support long time series processing.
For specific tasks (such as ingestion, discovery, access, processing and analysis of EO data), a multitude of completely different mature open source components is usually available. LOOSE aims at combining functionally similar solutions from different heritages into one comprehensive framework. LOOSE even supports parallelism, in the sense that multiple solutions for the identical task are available and the application developer is invited to choose between these different components during implementation (e.g. "GeoServer" versus "EOXServer").
In addition, LOOSE partners extended well-known existing components with new capabilities (i.e., interfaces) to support efficient ingestion, discovery, exploitation-optimized access, processing and optimized analysis of EO data time series. For example, GeoServer was extended with the capability to handle STAC metadata.
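For readers unfamiliar with STAC, the snippet below shows the kind of per-scene metadata record such an extension serves; it uses the pystac library with purely illustrative identifiers, geometries and URLs, and does not reflect the actual LOOSE GeoServer implementation.

```python
from datetime import datetime
import pystac

# A minimal STAC Item describing one scene of a long time series;
# all values are illustrative placeholders
item = pystac.Item(
    id="S2A_example_scene",
    geometry={"type": "Polygon",
              "coordinates": [[[11.0, 48.0], [11.1, 48.0], [11.1, 48.1],
                               [11.0, 48.1], [11.0, 48.0]]]},
    bbox=[11.0, 48.0, 11.1, 48.1],
    datetime=datetime(2021, 6, 1),
    properties={},
)
item.add_asset("data", pystac.Asset(href="https://example.com/scene.tif",
                                    media_type=pystac.MediaType.COG))
print(item.to_dict())  # the JSON served to STAC-aware clients
```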
The overall outcome of the project is a "blueprint architecture concept" which focuses on the interfaces between components and takes innovative concepts into consideration, such as bulk data retrieval from dedicated archives, OGC's Data Analysis and Processing API, and Data Cubes offering Discrete Global Grid Systems (see enclosed viewgraph).
LOOSE partners are DLR (Oberpfaffenhofen), EOX (Vienna), Terrasigna (Bucharest) and Mundialis (Bonn).
The LOOSE system architecture is inspired by the EO Exploitation Platform Common Architecture (EOEPCA) and focuses on the technological evolution of selected services that enable the end-to-end workflow from retrieving long-term archived EO products to the extraction of high-level information based on processed value-added datasets. The architecture and its interoperability are evaluated within LOOSE by using different implementations of these services (e.g. EOxServer and GeoServer) and deploying the whole system on two different infrastructures (DLR/LRZ and Mundi/OTC). The complete LOOSE infrastructure is built on Kubernetes and is therefore readily transferable between different cloud providers.
One of the major goals of the system design (see enclosed figure) is to define services (indicated in blue) as functional components with their internal (purple) and external interfaces (yellow).
The validity of the LOOSE blueprint architecture is demonstrated in three different real-world application pilots.
These applications cover entirely different thematic areas:
- Agricultural monitoring (based on Sentinel-1 and -2 data),
- monitoring urbanization globally (also based on Sentinel-1 and -2) and
- supporting fishery in the Black Sea (multi sensor approach, including in situ-data).
The agriculture use case applies Sentinel-1 and -2 time series in combination with land parcel information so that agricultural practices can be monitored and verified, e.g. the presence/absence of mowing in grassland, the occurrence of ploughing during a specific seasonal time window, or the rapid growth of vegetative cover during a certain time period. In the context of the European Common Agricultural Policy (CAP), subsidy claims from farmers require an in-depth check of their eligibility.
This use case specifically requires:
- Handling of and operations on very large vector datasets (filtering, buffering, grouping, merging);
- SAR and optical time series profile extraction through aggregation at land parcel level (see the sketch after this list);
- Implementation of specific eligibility checks according to national CAP and LPIS requirements.
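A minimal sketch of the second requirement, parcel-level time-series extraction, using the rasterstats package; the file names, dates and parcel layer are hypothetical, and a production system would operate on much larger vector datasets.

```python
from rasterstats import zonal_stats

# Hypothetical inputs: one cloud-masked NDVI GeoTIFF per acquisition date
# and a vector file of land parcels (e.g. LPIS polygons)
dates = ["2021-05-01", "2021-05-11", "2021-05-21"]
rasters = [f"ndvi_{d}.tif" for d in dates]  # placeholder paths

# Parcel-level aggregation: one mean NDVI per parcel and per date
profiles = {
    d: zonal_stats("parcels.gpkg", r, stats=["mean"])
    for d, r in zip(dates, rasters)
}
# profiles["2021-05-01"][i]["mean"] is the time-series sample of parcel i
```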
DLR's World Settlement Footprint (WSF) suite determines built-up areas on the basis of Earth Observation data derived from Sentinel-1 GRD strip-map datasets, Sentinel-2 and Landsat 4/5/7/8. Determination of the WSF is performed by evaluating high-resolution backscatter ratios between different channels. Time series analysis of the obtained results is necessary to smooth the computed index values with respect to time and to yield reliable values. This enables worldwide detection of urban growth.
This pilot application aims at user-driven
- production of the required backscatter indices (for selected periods) performed via eWPS ("black box processing") and
- analysis via Data Analysis and Processing API (DAPA).
The LOOSE Marine Pilot will process EO data as well as in-situ data and numerical model outputs to accurately identify Potential Fishing Zones (PFZ) around Romanian and Bulgarian coastal areas to support efficient fishery.
In the LOOSE blueprint architecture, user-driven "black box" processing is evaluated against user-defined "white box" analyses with respect to usability and performance. "Black box" processing refers to applying a pre-defined retrieval algorithm (processor) to the EO raw data, where the user has only limited possibilities to influence the processing settings (such as selecting only the processing time period). "Black box" processing is investigated by using the eWPS, which is provided by the partner Terrasigna. In contrast, "white box" processing provides users with the ability to supply an algorithm graph to the LOOSE system by using the openEO / Actinia / GRASS GIS interface provided by LOOSE partner mundialis.
The LOOSE blueprint architecture concept provides all relevant functionalities (ingestion, discovery, processing, analysis) and can therefore be considered as a blueprint for Kubernetes-based operational processing systems.
The JRC Big Data Analytics Platform aims at linking data, data scientists, thematic and policy experts to generate policy relevant insights and foresight. A common denominator of most data analysed in this context is that they refer to a location both in time and space. When it comes to data volumes, the largest share consists of geospatial data in the form of raster images or vector files. Indeed, geospatial data play a fundamental role to answer key societal questions related to our environment at local and global scale when it comes to climate change, biodiversity, deforestation, agriculture, pandemics, etc. In addition, an integrated approach to data analytics is needed to tackle these complex questions in view of determining causal effects between mutually dependent variables. The need for frequent satellite image acquisitions in different spatio-temporal and spectral resolutions to answer these key environmental questions motivated the European Union to launch the Copernicus programme that is now delivering a continuous stream of free, full, and open data and largely contributed to Earth Observation entering the big data era.
In this contribution, we will present how the JRC Earth Observation Data and Processing Platform (JEODPP) evolved into a multi-purpose data infrastructure (called the Big Data Analytics Platform, BDAP) serving the needs of the Joint Research Centre across all its knowledge production and knowledge management activities. This evolution was enabled thanks to the versatility of the services of the JEODPP. The presentation will focus on the following key ingredients required for addressing the challenges posed by the effective and efficient extraction of insights and foresight from geospatial data and data associated with a location both in space and time:
- Advanced data cataloging and data access following FAIR data principles including their application to analysis pipelines/workflows;
- MLOps for Machine Learning Operations;
- Interactive data science for fast prototyping;
- Scalability and flexibility for analysis at any scale;
- Exploratory visualisation for both data discovery and dissemination of insights to non-experts;
- Open-source coding for transparency, reproducibility, and accountability;
- Collaborative tools to ensure that not only data but also scientists are linked.
The successful combination of these ingredients will be illustrated with a series of actual use cases, from continental to global scale, combining heterogeneous data from multiple sources, as well as generic services such as interactive dashboards for exploratory and agile visualisation based on the Voilà extension of JupyterLab and the pyjeo open source library for geospatial data analysis.
The talk will conclude on future perspectives in the framework of ScienceMesh for the European Open Science Cloud, currently developed in the CS3MESH4EOSC project coordinated by CERN with the participation of numerous partners including JRC.
In recent decades, the increasing availability of orbiting satellites and Earth Observation data has favored the development of a large number of applications in a wide range of fields, from monitoring environmental changes to the identification of pollutants, from the study of the interaction between ecosystems to the prevention of and response to natural disasters. Many applications use Earth Observation data from different satellite missions, exploiting their potential and synergy. In this context, in order to meet the often stringent requirements, Earth Observation instruments must be able to ensure the most accurate, reliable and consistent measurements throughout the mission.
Moreover, the combined and synergistic use of data from various missions becomes essential, thus requiring precise co-registration and inter-calibration operations between instruments to normalise the response of the different sensors on the basis of a common reference.
Planetek provides the possibility to correct residual geometrical deformation and radiometric inaccuracies in optical, multi-spectral images by means of a knowledge base with worldwide coverage. This base is built thanks to the combined usage and fusion of satellite data from different missions and relies on accurate information that is automatically extracted, and regularly updated, from specific "ground control truths". The control points are characterized by precise knowledge of their geometrical position or radiometric response.
The service exploits the information extracted from long time series to increase geometric precision and radiometric stability. It is provided on a cloud infrastructure and can be integrated into any standard mission payload data ground segment workflow.
In September 2020, ESA launched a new Virtual Lab focusing on Agriculture (AVL). Virtual labs are platform services for scientists to share data resources and create an enhanced research environment. AVL is designed to be an online community open science tool to share results, knowledge and resources. Agriculture scientists can access and share Earth Observation (EO) data, high-level products, in-situ data, as well as open-source code (algorithms, models, tools) to carry out scientific studies and projects.
The technical system behind the AVL comprises two main building blocks, namely the “Thematic processing subsystem” powered by TAO (Tool Augmentation by user enhancements and Orchestration), which is an orchestration and integration framework for remote sensing processing, and the “Exploitation Subsystem” powered by xcube and Sentinel Hub, a software for generation, management, exploitation, and service provisioning of analysis-ready data cubes.
The “Thematic processing subsystem” is a collection of self-contained (i.e., packed in Docker containers) applications or systems, that produce value-added EO products such as biophysical variables, crop masks, crop types, etc. It integrates commonly used toolboxes (e.g., SNAP, Orfeo Toolbox, GDAL, Sen2-Agri, Sen4CAP, etc.) into a single environment enabling end-users to define by themselves processing workflows and to easily integrate additional processing modules.
The “Exploitation subsystem” ingests data streams including the ones provided by the Thematic processing subsystem and makes them available as analysis-ready data cubes. Data streams may be gridded, like EO sensor data or model data, or feature data, like time series of points or shapes. The latter are stored in geoDB, a database for various data types with geographical context. The “Exploitation subsystem” provides users with individual workspaces and offers different interfaces, specifically the data cube toolbox Cate, a Jupyter Lab environment, and the interface to the thematic processing subsystem.
The implementation of the AVL system is following an agile approach, prepared to account for new requirements, particularly from relevant users from the agriculture science community. With respect to the onboarding of users, the project is structured into three phases. First, a couple of well-defined user stories provide the requirements for the implementation of the first use cases via iterative development cycles. These use cases are executed in partnership with Champion Users who are leading scientists belonging to the community and/or international stakeholders (JECAM, GEOGLAM, CGIAR, GEWEX, FAO, GEO).
The first use case is about the portability of classification models in space (i.e. from one region to another) and over time (i.e. from one year to another), which would certainly be one of the best options to deal with the in-situ data scarcity. Different methodologies to transfer the classification models exist: identifying and using invariant features that are valid in the source and target domains, aligning the time series between the two domains (using for instance time warping), training the classification model by using (i) data from source and target domains together or (ii) data only from the source and then adapt the model to the target domain by fine-tuning it on the available target train data. These options are evaluated over two test sites in Belgium and France and two years 2019 and 2020, based on Sentinel-1 and Sentinel-2 sensors and in situ data coming from the French and Belgian Land Parcel Identification System datasets. This first use case can then be expanded over more sites, involving the JECAM community.
The second use case is about the monitoring of sustainable agricultural practices supporting the necessary evolution of agriculture to become more compatible with the expectations of society at large and with the Green Deal ambitions at the European level. Within this use case, crop-specific monitoring at field-level throughout the year is carried out to monitor a selection of sustainable agricultural practices: winter cover crop and biomass indication, harvest/destruction detection, bare soil period detection, evapotranspiration retrieval as an indicator of water stress.
The third use case will be either about the estimation and forecast of crop yield or an inter-comparison exercise of crop maps within the GEOGLAM initiative.
The second development phase will involve Early Adopters as the first external scientists using and testing the AVL. While the advanced science use cases cover hot topics in agriculture science to maximize the impact of the AVL in the community, the Early Adopters studies will demonstrate that the AVL can be useful for a variety of applications, offering a large diversity and huge amount of input EO and non-EO data and providing a unique and innovative collaborative framework to access, process and visualize these data. Their feedback will support the transition to the third, operational phase, which will open the AVL to the wider scientific community.
One of the keys to the AVL's success will be the data offer: satellite data, in-situ data, thematic products and auxiliary data. A comprehensive user survey was conducted in the first months of the project to identify the users' priorities, and maximizing the offer of relevant data for the agriculture science community will remain a focus throughout the project. Furthermore, as an Open Science project, the AVL will promote and foster collaboration between scientists and the sharing of data, products, results and source code (joint publications, inter-comparison exercises, benchmarking, etc.). Specific activities will also be carried out to build a strong AVL user community and facilitate Open Science, such as regular webinars, a dedicated forum, and the organization of hackathons or competitions.
Sentinel-3 (S3) will be flying in tandem with the photosynthesis mission FLEX, and therefore the information this satellite captures on the status of vegetation will be crucial. Given the opportunities that cloud computing platforms offer for processing Earth observation data, we chose Google Earth Engine (GEE) to develop a workflow for spatiotemporal mapping of vegetation over Europe. GEE hosts a multi-petabyte data catalog with a parallel computation service, manageable from an easy front-end interface. We used the machine learning method Gaussian process regression (GPR) to train and implement hybrid retrieval models in GEE. By default, GPR is not part of GEE, and thus adaptations have been implemented according to the procedure described in Pipia et al. (2021). GPR has proven to be an outstanding method for prediction tasks, excelling in its ability to provide uncertainties on the predictions. In this way, an assessment of quality can be implicitly performed.
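To make the uncertainty idea concrete, here is a minimal NumPy sketch of GPR prediction with a squared-exponential kernel; the kernel choice, hyperparameters and toy data are illustrative assumptions, not the trained models described below.

```python
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gpr_predict(x_train, y_train, x_test, noise=1e-2):
    """GPR predictive mean and variance; the variance is what provides the
    per-prediction uncertainty mentioned above."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Toy example: smooth interpolation with uncertainty on held-out points
x = np.linspace(0, 10, 20)
y = np.sin(x)
mean, var = gpr_predict(x, y, np.linspace(0, 10, 100))
```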
GPR retrieval models for the following key variables were developed: leaf chlorophyll content (LCC), leaf area index (LAI), fraction of absorbed photosynthetically active radiation (FAPAR) and fractional vegetation cover (FVC). For training the models, we used a simulated top-of-atmosphere (TOA) radiance dataset, upscaled from top-of-canopy (TOC) reflectance with the 6SV radiative transfer model (RTM). Simulations at TOC were performed with SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) over a wide range of vegetation properties, leading to hybrid models that combine the physical principles of RTMs with the efficient computational performance of machine learning. The TOA radiance simulations were then resampled to the band settings of S3-OLCI. After tuning and improvement, the final models (v1.1) were evaluated with R² values ranging between 0.60 (LAI) and 0.99 (FVC).
By implementing the final models in GEE for processing S3-OLCI TOA data, monthly averaged maps and spatially averaged time series were generated and compared against matching vegetation products from the CGLS (Copernicus Global Land Service) and LPDAAC-NASA (Land Processes Distributed Active Archive Center). Our retrieved outputs present spatial and temporal patterns close to the official products. FVC and FAPAR were retrieved most robustly when analyzing the associated uncertainties (absolute deviations constrained to the range 0 to 0.3 on the 0-1 scale).
The obtained maps show the different ecoregions of Europe: natural areas are distinguished with the expected value ranges. Time series were calculated as spatial averages over land covers delimited according to the Corine classification, for the time window from April 2016 to November 2020. They also display consistent vegetation temporal patterns, with peaks reached during spring and summer depending on the land cover type. In order to generate continuous time series from discontinuous data, e.g. due to cloud cover, we applied the same GPR principles in the temporal domain, making predictions based on the existing data time series.
A challenge in this work was to define the optimal dataset for training a smooth model while avoiding memory problems during large-dimension matrix operations, given the large variability that needs to be covered when working at continental scale. This question opens a future line of work on optimization problems for large-scale mapping, particularly when using computational resources with a limited quota on GEE. At the same time, this work opens opportunities for global monitoring and facilitates correct interpretation of the fluorescence signal acquired by the upcoming FLEX mission.
In the new mega-constellation paradigm for Earth Observation, the objective is to maximize payload downlink while maintaining consistent communications with the entire constellation. This is a challenging landscape as the number of satellites far exceeds the number of antennas. Planet’s Dove constellation is a clear example of this, with more than 180 imaging satellites and a network of around 15 downlink capable antennas. In this context, and given the fact that satellite downlinks cannot happen in parallel if using the same antenna, deciding on which contacts get allocated requires intelligent problem formulations. Of the multiple objectives that can be optimized when configuring the schedule of the fleet, data down is a complex metric, because the amount of data that each satellite has stored on board at the time of the potential downlink depends on multiple things: its previous imaging and downlink activities, its storage capabilities and the data rates that it can achieve. In this presentation, we will describe the model used by Planet to take all these aspects into account and optimize the schedules of the satellites to maximize the amount of data that is downlinked, making the most of the ground network.
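To make the allocation problem concrete, here is a deliberately simplified greedy sketch in Python: it maximizes expected data down under the single constraint that an antenna can serve one contact at a time. All names and numbers are invented, and Planet's production model, which also accounts for on-board storage, data rates and prior imaging activities, is considerably richer (e.g. formulated as an integer program).

```python
# Hypothetical contact windows: (satellite, antenna, start, end, expected_GB)
contacts = [
    ("dove-01", "ant-A", 0, 10, 4.0),
    ("dove-02", "ant-A", 5, 15, 6.5),
    ("dove-01", "ant-B", 8, 18, 3.0),
]

def greedy_schedule(contacts):
    """Allocate contacts by descending expected data down, skipping any
    contact whose antenna is already busy in an overlapping window."""
    busy = {}  # antenna -> list of allocated (start, end) windows
    schedule = []
    for sat, ant, start, end, gb in sorted(contacts, key=lambda c: -c[4]):
        if all(end <= s or start >= e for s, e in busy.get(ant, [])):
            busy.setdefault(ant, []).append((start, end))
            schedule.append((sat, ant, start, end, gb))
    return schedule

print(greedy_schedule(contacts))  # dove-02 wins ant-A; dove-01 falls to ant-B
```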
The Food Security Thematic Exploitation Platform (FS-TEP) provides a platform for the extraction of information from EO data for services in the food security and agriculture sector to support sustainable food production from space. The platform addresses a wide-ranging community, including industry, service providers, developers, scientists and public sector or governmental organizations. The technical infrastructure is a web-based Platform-as-a-Service (foodsecurity-tep.net/app), developed by CGI Italy, which leverages cloud computing technologies on top of CREODIAS. Acting as an interface between Copernicus (and complementary) EO data and the user community, and providing the technical solutions for developing and operating services via an Open Expert Interface, the Food Security TEP is attractive for new enhancements and developments from science and data analytics.
Within the last year, evolutions of platform functionalities have been carried out, connecting with international projects, initiatives, and companies. As one major milestone, the integration of the Deep Learning platform Hopsworks (https://www.hopsworks.ai/) to the Food Security TEP was achieved. This open-source platform for Data-Intensive AI provides an environment to create, reuse, share, and govern features for AI, as well as manage the entire AI data lifecycle in the development and operation of machine learning models and includes a Feature store. Via federation between the Food Security TEP and Hopsworks, the full breadth of Copernicus EO data and Copernicus Services products available on CREODIAS, GPUs for the training phase of machine learning, and a scalable computational environment for running operational algorithms - after they have been trained through machine learning - are offered to the EO and science community.
Stimulated by the Horizon 2020 Extreme Earth project and supported by the CCN evolutions of the ESA contract, capabilities of the Food Security TEP and Hopsworks have been successfully developed and demonstrated, using e.g. Sentinel-2 time series and machine learning for applications concerning crop type mapping and crop monitoring. The aim was, not only in this pilot, to enable data scientists and service providers working with Earth Observation data to make use of the full spectrum of big data processing, machine learning tools and deep learning architectures, to provide information of high relevance and usability in the agricultural / food-security sector.
As a pilot application in the context of the Extreme Earth project, the team set up a service chain for larger parts of the Danube catchment, applying multi-year crop type mapping, using pre-trained deep LSTM models and Sentinel-2 image time series, to provide crop type information at a level of detail consistent with farming practice. Those datasets have been used for the precise assessment of water stress and irrigation demands, based on LAI time series and crop growth simulations.
More than 5,500 Sentinel-2 L1B datasets were run through automatic pre-processing on the Food Security TEP, including atmospheric correction and cloud and cloud-shadow masking, and provided as input to (a) the training steps and later, to a larger extent, to (b) the crop type mapping for the 2018, 2019 and 2020 seasons. In preparation for model inference, i.e. the application of the Hopsworks-trained classification algorithm (using GPUs and INVEKOS / LPIS crop information), the single time series of Sentinel-2 data were converted to monthly composites and stored in a collection. The classification (using CPU processing on the FS-TEP) was accomplished by retrieving the classification model from Hopsworks (at a CREODIAS installation), and the final classification process is done on the FS-TEP again. Results were re-processed to crop map raster data (as TIFF). The produced crop information, derived for 16 classes, e.g. differentiating maize, soy, barley, rye, rapeseed, spring cereals, winter wheat, etc., was additionally validated against LUCAS (Land Use and Coverage Area frame Survey) database information.
The steps of retrieving models from Hopsworks and running the model inference as Food Security TEP processors have been designed as universal components, to be integrated and applied in future service implementations. Having designed the integration between EO processing and AI-trained model inference in a universal way, the transfer to other applications using Extreme Analytics on the Food Security TEP is now enabled and will be promoted. Developments and services on and from the Food Security TEP can be performed on a commercial basis or via sponsorship through ESA's Network of Resources (NoR).
The presentation will cover the concept and design of the Hopsworks federation with Food Security TEP services, the example application of crop type mapping, and the general new capabilities in Extreme Analytics for food security topics.
The Food Security-TEP is funded by ESA under contract nr. 4000120074/17/I-EF; Extreme Earth was funded by European Union’s Horizon 2020 research and innovation programme under grant agreement No. 825258
Platforms for the Exploitation of Earth Observation data have been developed by public and private companies to foster the usage of EO data and expand the market of Earth Observation-derived information. All platforms have their user communities, but we are still in the pioneering phase rather than at the mainstream usage level. This presentation will discuss which obstacles need to be tackled to boost platform usage at the federation level, across platforms. The federation perspective is crucial, as many major challenges require the integration of data or services from several platforms. As an example, more and more disasters are linked to climate change; climate change impacts infrastructure; infrastructure in turn is linked to land use; and land use is linked to public health. Currently, the data related to all these topics are fragmented over several infrastructures, and it is very likely that they will never be available on a single one. A federation of platforms will enable not only technical interoperability, but also the multidisciplinary cooperation that is so critical to managing climate change impacts, for instance.
This is particularly true in Europe where the resources are quite fragmented and approaches such as the Network of Resources and common design principles such as Earth Observation Exploitation Platform Common Architecture (EOEPCA) have great potential to help growing user communities, as they promise relevant resources at hand, interoperability between platforms, and hidden complexity to allow existing and new users to focus on their challenges rather than technology. So how do we make the most of our platforms? How do we grow them towards mainstream use?
This presentation will explore what progress has been made over the last four years within the OGC geospatial community and describe how these experiences need to be further aligned with developments in neighbouring disciplines such as climate change or disaster management. It further analyzes how platform developments and international efforts such as DestinE, Gaia-X, Copernicus and Data Spaces need to evolve in order to create an efficient and functional platform environment.
Cloud-based services introduce a paradigm shift in the way users will access, process and analyse Big Earth data in the future. A key challenge is to align the current state of how users work with the data with the general trend by the data providers to guide users towards cloud-based services. Due to the increased availability of Big Earth data, a more diverse user base wants to take advantage of it leading to a diversity of new user requirements.
In order to get a better insight in the requirements and challenges of current and new users of Big Earth data regarding cloud services and to better sense the motivation of those to migrate to it, we conducted a comprehensive web-based user survey. Our results, with a focus on users of Big Earth data in Europe and North America, reveal that a majority of survey respondents still download data onto their local machine as well as handle and process data locally with a combination of programming and desktop-based software. In this context, survey respondents are facing severe problems related to the growing data volumes, the data heterogeneity and the limited processing capacities to cope with their demanding applications. Even though survey respondents show a specific interest in using cloud-based data services in the near future, survey outcomes reveal a low literacy in cloud systems and a lack of trust due to security concerns as well as an opacity of costs incurred.
Based on the survey findings, we see a strong need to establish an international consortium among Earth data organisations and cloud providers to make the current Big Earth data landscape more FAIR (findable, accessible, interoperable and re-usable). We specifically propose four key areas of activities: (i) to bring together Big Earth data and cloud-service provider to foster collaboration towards interoperability of cloud-based services, (ii) to define best practices and identify existing gaps on interoperability of cloud-based services, (iii) to develop and implement a quality certification for cloud-based services to build up trust in cloud service use and (iv) to coordinate capacity-building with the aim to build up cloud literacy, technical competencies and to foster adoption.
The Food Security Thematic Exploitation Platform (TEP) provides a platform for the extraction of information from EO data for services in the food security sector, mainly in Europe and Africa. It went operational in 2019, offering a range of data sets and tools to foster smart, data-intensive agricultural and aquacultural applications in the scientific, private and public domain. The initial focus was to offer satellite data archives fitting agricultural needs, complementary datasets relevant for agriculture, standard tools for EO processing and analysis (e.g. toolboxes and environments), dedicated services, pre-processed satellite products such as leaf area index and crop chlorophyll content, and, last but not least, a simplified Docker developer interface for working with one's own algorithms and scripts. Options to share and publish data, as well as accounting options (TEP coins), provide capabilities for interaction with colleagues, users and customers. The services of the Food Security TEP have also been made available through the Network of Resources (NoR), so sponsorship for scientists and developers can be requested.
In the past year, the Food Security TEP team implemented several additional evolutions, which bring new relevant tools for agricultural analyses to the platform:
• Integration of Sen4CAP: The Sentinels for Common Agricultural Policy (Sen4CAP) project aims at providing to the European and national stakeholders of the CAP validated algorithms, products, workflows and best practices for agriculture monitoring relevant for the management of the CAP. The integration of the Sen4CAP framework into the Food Security TEP platform makes it possible to exploit its capability of processing data at large scale with a data-driven approach, which, in combination with its processing and analytics, enables the community to produce well-tested large-scale agricultural analyses without having to implement their own algorithms. Food Security TEP now offers the ability to discover and invoke the following integrated Sen4CAP processors: Sentinel-1 and Sentinel-2 pre-processing, LPIS (parcel) data preparation, L4A crop type, and L4B grassland mowing.
• Federation with PROBA-V MEP, example crop calendar service: During the GEOGLAM Executive Board meeting in October 2019, the EAV (Essential Agricultural Variable) crop calendar was identified as a suitable showcase, given its relevance in many monitoring systems, to provide improved indicators for agricultural monitoring in an operational setting. The crop calendar was developed in the frame of the VITO E-SHAPE project, demonstrating its capability to provide reliable crop calendar information at high spatial resolution using Sentinel-1 and Sentinel-2 imagery. The crop calendar is deployed as a service on the PROBA-V MEP platform and since 2021 is also discoverable from the Food Security TEP. Food Security TEP users can access the service from the TEP and execute it as a standard WPS process (see the sketch after this list). Users can visualize and exploit processing results provided by the PROBA-V MEP service directly from the Food Security TEP web interface, download the processing results produced by the PROBA-V MEP service for offline post-processing, and share their processing results with other users of the platform. Other services offered on the PROBA-V MEP are now discoverable in the same fashion on the Food Security TEP.
• Integration of AI tools: New methods of information extraction and processing of EO data are becoming ever more popular in agricultural analyses. With the increasing capabilities of Deep Learning (DL), Machine Learning (ML) and other advanced AI methods, new options for applications in EO analysis arise. Computer-based image recognition (using convolutional neural networks) has exceeded human performance on some benchmarks since about 2015, arguably one of the fastest paradigm shifts in the history of technology. Hence, Food Security TEP has integrated a state-of-the-art enterprise platform, Hopsworks, which in turn integrates popular data processing platforms such as Apache Spark, TensorFlow, Hops Hadoop, Kafka and many others, and is used by numerous experts and developers in Europe. Users of Hopsworks can design, run and implement their AI training, analysis and service models in a scalable way, but Hopsworks itself does not offer fast satellite data access. Through the explicit interfacing between Hopsworks and the Food Security TEP, the best possible effectiveness of EO data exploitation in agriculture with ML/DL techniques can be achieved.
• Possibility to monitor events: Many of the Food Security TEP's functionalities focus on algorithm deployment, with little emphasis on visualisation in the platform. The Event Monitor changes this. With this new feature, users can quickly discover and browse events and their contents, easily visualize time series of products, and combine different service outputs into a single operational view. Food Security TEP already provided the possibility to define systematic processing templates which combine different services running in a data-driven or time-driven logic. Based on these templates, users can now define a new monitoring activity by selecting the template, an AOI and a TOI, and customizing the template's inputs (e.g. changing the cloud coverage). The Monitor supports the visualization of a monitored event by collecting all the relevant information associated with the instance of the systematic processing, allowing for a quick and easy time-series overview.
• Integration of thermal data: Fast-track access to data and products of NASA's ECOSTRESS mission (ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station) has been implemented. This data access is part of the European Ecostress HUB (EEH), a project funded by ESA in support of the Copernicus Land Surface Temperature Monitoring (LSTM) High Priority Candidate Mission and implemented by the Luxembourg Institute of Science and Technology. At EEH, ECOSTRESS data is searchable for a user-defined area of interest and start/end dates. Datasets cover seven products: level 1 (RAD, GEO), level 2 (LSTE and cloud mask), level 3 (ET) and level 4 (WUE, ESI). The EEH data repository forms the basis to test and develop dedicated land surface temperature (LST) and evapotranspiration (ET) retrievals with user-exchangeable algorithms and auxiliary data on a cloud platform in preparation for the LSTM mission. Within the EEH project, the Temperature Emissivity Separation (TES) algorithm and the Generalized Split Window (GSW) for LST, and three different models (STIC, TSEB and SEBS) for ET, will be implemented and made available through the Food Security TEP.
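As referenced in the PROBA-V MEP federation item above, a federated service exposed as a standard WPS process can be invoked from any generic WPS client. The following Python sketch uses the OWSLib library; the endpoint URL, process identifier and input names are hypothetical placeholders, not the actual Food Security TEP values.

```python
# Minimal sketch of invoking a federated service as a standard WPS process.
# The endpoint, process identifier and inputs below are hypothetical; the
# actual Food Security TEP / PROBA-V MEP values will differ.
from owslib.wps import WebProcessingService, monitorExecution

wps = WebProcessingService("https://example-tep.org/wps")  # hypothetical endpoint

# Inspect what the remote platform offers.
for process in wps.processes:
    print(process.identifier, "-", process.title)

# Execute a process with simple literal inputs and poll until it completes.
execution = wps.execute(
    "crop_calendar",  # hypothetical process identifier
    inputs=[("aoi", "POLYGON((5 50, 6 50, 6 51, 5 51, 5 50))"),
            ("start_date", "2021-01-01"),
            ("end_date", "2021-12-31")],
)
monitorExecution(execution, sleepSecs=10)
print(execution.status, [out.reference for out in execution.processOutputs])
```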
In 2022, further enhancements to the platform services are planned. While the final decision on individual enhancements has not been made yet, these evolutions will also focus on bringing new functionalities (e.g. an enhanced API, the ability to use GPUs) and new data sets (e.g. hyperspectral data, agricultural training data sets for AI, scientific results like global water use efficiency) to the platform, to open up even more agricultural applications and allow users to build their own services from a unique set of tools and data sets related to food security.
The Food Security TEP evolutions are funded by ESA under Contract No. 4000120074/17/I-EF. Integration of the ECOSTRESS data was sponsored by ESA through the NoR.
Global Earth Monitor (GEM; funded by H2020) takes advantage of the large volumes of available EO and non-EO data to establish economically viable continuous monitoring of the Earth, driven by the dynamic transition between "strip mode" and "spot mode" monitoring. GEM’s approach is based on the drill down mechanism: fast (and cheap) global processing at low spatial resolution, finding the areas of interest (AOI) where it triggers spot monitoring with (appropriately) high spatial resolution data and more elaborate machine learning (ML) models. Such processes can run continuously on a monthly, weekly, or even daily basis provided they work in a sustainable way - adding more value than their cost - at least on a continental if not global scale, able to automatically improve accuracy and detect changes as they occur.
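The control logic of the drill-down mechanism can be summarised in a few lines. The following Python sketch is purely illustrative: the detector functions, threshold and tile records are hypothetical stand-ins for GEM's actual models and data structures.

```python
# Illustrative control flow of the GEM drill-down mechanism: cheap global
# screening at coarse resolution, expensive models only where changes appear.
# detect_change_120m / classify_10m are hypothetical stand-ins for real models.

def detect_change_120m(tile):
    """Fast, cheap screening on a 120 m global mosaic tile (placeholder)."""
    return tile["coarse_change_score"] > 0.5  # hypothetical threshold

def classify_10m(tile):
    """Expensive 10 m classification, run only on flagged tiles (placeholder)."""
    return {"tile": tile["id"], "result": "artificial-surface map"}

def drill_down(global_tiles):
    results = []
    for tile in global_tiles:          # "strip mode": scan everything cheaply
        if detect_change_120m(tile):   # "spot mode": zoom in where flagged
            results.append(classify_10m(tile))
    return results

tiles = [{"id": "T33UVP", "coarse_change_score": 0.7},
         {"id": "T33UVQ", "coarse_change_score": 0.1}]
print(drill_down(tiles))  # only T33UVP is processed at full resolution
```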
The GEM consortium is formed by Sinergise (the coordinator), one of the key enablers of the uptake of Copernicus data through its well-known Sentinel Hub services; the European Union Satellite Centre (SatCen), one of the main European institutions in the space and security domain; meteoblue, a first-class weather services provider offering global weather predictions at spatial and temporal scales previously unavailable from other weather services; TomTom, a well-recognized industry leader in location technologies; and the Technische Universität München (TUM), a research institution playing a vital role in Europe's technological leadership. Each partner is in charge of implementing one use case, while TUM has a transversal role supporting the development of AI tools relevant to the different use cases.
Long temporal series over very small areas (e.g. agricultural fields) and large-scale (global) mosaics over shorter time stacks are two orthogonal use cases when striving for efficient EO data retrieval. The concept of adjustable Data Cubes (aDC) addresses both: a service capable of preparing the data the way users need it in their downstream pipelines and applications, in a scalable and cost-effective way. In GEM, Sentinel Hub services aim to address precisely that: to cover both (corner) cases of data retrieval from the perspective of a scalable and cost-optimised infrastructure. When coupled with the available data collections, the advantage of adjustable data cubes and analysis-ready data (ARD) processing chains is enormous. Users can delegate the heavy machinery and processing of complex calculations (see e.g. the custom scripts repository [1]) for large-scale (mosaic) processing to the Batch API and feed the results into their own pipelines. The Statistical API, and the upcoming Batch Statistical API, prepare the aDC ARD for the other extreme: fast retrieval of long time series of statistical variables (mean, min, max, std, percentiles, histograms) over AOIs, allowing for, e.g., the development of a vegetation index time series for an agricultural parcel.
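As an illustration, a Statistical API request for a parcel-level NDVI time series might look as follows. This is a minimal sketch assuming the publicly documented https://services.sentinel-hub.com/api/v1/statistics endpoint and a valid OAuth token; the payload is reduced to its essential fields and may not match the full schema required by the live service.

```python
# Simplified Statistical API request for an NDVI time series over one parcel.
# Assumes a valid OAuth token; the payload is reduced to its essentials and
# may omit fields (e.g. output size) required by the live service.
import requests

TOKEN = "..."  # OAuth2 access token obtained separately
EVALSCRIPT = """
//VERSION=3
function setup() {
  return {input: ["B04", "B08", "dataMask"],
          output: [{id: "ndvi", bands: 1}, {id: "dataMask", bands: 1}]};
}
function evaluatePixel(s) {
  return {ndvi: [(s.B08 - s.B04) / (s.B08 + s.B04)], dataMask: [s.dataMask]};
}
"""

payload = {
    "input": {
        "bounds": {"bbox": [14.5, 46.0, 14.6, 46.1]},
        "data": [{"type": "sentinel-2-l2a"}],
    },
    "aggregation": {
        "timeRange": {"from": "2021-04-01T00:00:00Z", "to": "2021-09-30T23:59:59Z"},
        "aggregationInterval": {"of": "P10D"},  # 10-day statistics
        "evalscript": EVALSCRIPT,
    },
}

resp = requests.post("https://services.sentinel-hub.com/api/v1/statistics",
                     headers={"Authorization": f"Bearer {TOKEN}"}, json=payload)
for interval in resp.json().get("data", []):
    print(interval["interval"]["from"], interval["outputs"]["ndvi"])
```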
The EO industry seems to be evolving into two distinct branches: “we provide/sell data and data products” or “we provide a platform where users can build their own bespoke products”. The GEM project tries to balance the two, leveraging access to the data through the services while providing users with open-source ways to build their own products using eo-learn [2].
The development of scalable and cost-effective solutions is being tested on several use cases. The built-up area use case identifies new built-up areas using the drill-down method. It exploits the global mosaic ARD cube of Sentinel-2 data at 120 m resolution as a starting point and, after fast detection of built-up areas at that resolution, runs the process at 10 m resolution to classify artificial surfaces and detect changes. At that point, very high-resolution imagery is used to detect buildings. The Conflict Pre-Warning (CPW) Map use case will provide a new security product to support decision-making. It analyses correlations between global climate change and environmental issues on the one hand and human activity patterns on the other, in support of guaranteeing the security of citizens. The automatic crop identification use case combines EO and weather data to enable automatic identification of crop growth stages. It supports operational decisions when managing crops and the quantitative monitoring of actual versus planned or reported land use (production forecast). The Map-Making Support use case will integrate land cover services to perform fully automated and repeatable global land cover mapping at small-to-mid scale and optimised land cover mapping at large scale (change detection functionality).
Within all use cases we make use of the big data functionalities of Sentinel Hub for two purposes: firstly, to showcase the increased performance, cost-effectiveness and scalability of the services and framework for continuous monitoring using the drill-down mechanism; and secondly, to demonstrate the adoption of use-case results for decision-making in industrial (e.g. map making), societal (e.g. conflict pre-warning maps) and other domains (e.g. crop identification for the Common Agricultural Policy).
In this talk we will provide an overview of the tools and use cases developed within GEM, showcasing the big data capabilities of the services and their integration with eo-learn.
Earth observation (EO) data cubes have removed many obstacles to accessing data and deploying algorithms at scale. Still, developing algorithms or training machine-learning models is time-consuming, usually limited in application scope, and requires users to have many skills, including EO- and technology-related ones. By integrating semantically enriched EO data cubes (i.e., semantic EO data cubes) and graphical semantic querying, we aim to remove this burden from users.
A semantic EO data cube is an EO data cube where, for each observation, at least one nominal (i.e., categorical) interpretation is available and can be queried in the same instance. Such an interpretation can be a general-purpose, user- and application-independent information layer derived as spectral categories from the reflectance values of optical EO images. The spectral categories are considered colour properties of land cover classes (i.e., semantic entities) and are only the first, initial step towards a scene classification map, whose classes require adding more information, e.g. temporal variation, texture, or topographic features. A graphical Web application, specifically designed for semantic analysis of EO data, therefore allows encoding a-priori knowledge using transferrable, replicable, and explainable semantic models to produce information in a convergence-of-evidence approach.
At the core of our approach is semantic enrichment to generate data-derived spectral categories, scalable cloud-based technology for data management, graphical semantic models to formulate rule-based queries as close to domain language as possible, and an inference language capable of processing semantic queries by translating semantic models into execution steps.
We implemented our architecture as a scalable infrastructure for spatio-temporal semantic analyses of Sentinel-2 data for Austria. In our implementation, the semantic enrichment is conducted using the SIAM software (Satellite Image Automated Mapper). SIAM outputs information layers based on a knowledge-based, physical-model-based decision tree that can be executed fully automatically, is applicable worldwide, and does not require any samples. The input is any multispectral EO image that has been calibrated at least to top-of-atmosphere reflectance. Users can formulate semantic concepts using the spectral categories as colour information in a graphical convergence-of-evidence approach. The semantic EO data cube provides access to the images and information layers and is based on a scalable, containerised instantiation of the Open Data Cube (ODC). The ODC provides the Python application programming interface (API) that the inference engine uses to obtain the data and conduct inferences by applying the user-defined semantic models.
Our infrastructure facilitates analyses and queries, including semantic content-based image retrieval (a method to filter images based on their content), on-demand parcel- and location-based analysis using semantic categories with optional cloud exclusion, and composites with custom best-pixel selection (e.g. cloud-free, (non-)vegetated, (non-)flooded) in user-defined areas of interest and time intervals.
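A minimal sketch of such a query through the ODC Python API is shown below; the product names, measurement names and category codes are hypothetical placeholders, as the Austrian instance uses its own naming.

```python
# Minimal sketch of a semantic query against an Open Data Cube instance.
# Product names, measurement names and category codes are hypothetical.
import datacube

dc = datacube.Datacube(app="semantic-query")

# Load reflectance and the co-registered semantic enrichment on the same grid.
reflectance = dc.load(product="s2_l2a",                    # hypothetical product
                      x=(13.0, 13.2), y=(47.7, 47.9),
                      time=("2021-06-01", "2021-08-31"),
                      measurements=["red", "nir"])
semantics = dc.load(product="s2_siam_categories", like=reflectance)

# Convergence of evidence: keep vegetated, cloud-free observations only.
veg_mask = semantics.category == 3    # hypothetical "vegetation" code
cloud_mask = semantics.category == 9  # hypothetical "cloud" code
masked = reflectance.where(veg_mask & ~cloud_mask)

ndvi = (masked.nir - masked.red) / (masked.nir + masked.red)
print(ndvi.mean(dim=["x", "y"]).values)  # one value per time step
```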
The amount of available EO data is constantly growing. Considering the Copernicus missions alone, the volume of data reached 30+ PB and the number of individual products 45+ million in 2021 (source: https://scihub.copernicus.eu). Processing this volume of EO data requires well-architected software platforms with a primary focus on scalability. Cloud-based environments and technologies are enablers for this kind of platform.
EOPaaS is a Platform as a Service for Earth Observation, enabling its users to process data at scale. It is built on a cloud-native microservices architecture that can run on a variety of public and private cloud infrastructures, such as AWS, GCP, Azure, OVH, CREODIAS and ONDA DIAS. The main strengths and distinguishing features of EOPaaS are:
- Cloud-agnostic: designed to rely on a well-defined set of standard cloud APIs, EOPaaS can run on any cloud provider exposing standard interfaces
- Data-agnostic: able to process and publish raster and vector data in different formats from different satellites (e.g. imagery and video)
- Algorithm-agnostic: able to integrate any processor, provided its container logic and APIs are available
These features have made the platform able to serve a continuously growing demand for processing capacity. Thanks to its cloud-native microservices architecture, the platform also optimises costs and resource usage when demand slows down, leveraging well-known products such as Argo Workflows as workflow orchestrator and the Cluster Autoscaler.
EOPaaS currently supports several ESA-funded initiatives addressing challenges that have been identified by different communities.
EOPaaS was initially developed for the Food Security TEP; it builds on the heritage of the Forestry TEP and enhances it with additional functions such as auto-scaling, as well as the capability of performing orchestrated workflows for large data processing. It also provides further development of the user interface, in particular for users accessing via mobile devices. Recently the platform was enhanced by a federation with the AI platform Hopsworks to address AI challenges in the food security domain, and the platform's dataset offering was enlarged with the ECOSTRESS dataset for the scientific community.
In the context of OGEO-ReP (Oil and Gas Industry Earth Observation Response Portal), EOPaaS provides a new concept of event-driven processing, enabling bundles of processing services configured together with a few common parameters in response to an event, to support the needs of both onshore and offshore oil spill response users. EOPaaS was enhanced with a new component that provides end-users with a common operating picture by jointly analyzing heterogeneous data sources coming from different services.
In the context of the Vantage (Video Exploitation Platform) project, the platform was offered for exchanging data, tools and algorithms for the exploitation and visualization of EO video data generated by Earth-i.
In the BULPP (Parallel Computing using GPUs) project EOPaaS aims to demonstrate a bulk processing framework based on parallel computing, considering the scalability and portability of the parallel processing infrastructure, the computational cost (focusing on the computationally most demanding processing steps), and variety: different sensor types (optical, SAR), but also different processing levels (low-level processing vs. complex analytics).
EOPaaS also supports projects for private companies in the energy and manufacturing sectors and keeps evolving towards the EO Exploitation Platform Common Architecture guidelines (https://eoepca.github.io/), already supporting standard OGC APIs (OpenSearch, WPS 2.0).
The objective of the presentation will be to describe the different challenges that the platform has faced during the different projects and how these have been addressed in order to achieve end-user goals.
Every two years the Satellite Needs Working Group (SNWG, an initiative of the U.S. Group on Earth Observations, USGEO, mandated by the White House’s Office of Management and Budget) performs a survey among the US federal agencies to identify the most needed remote sensing observations to support their highest-priority activities. NASA supports the SNWG by developing remote sensing products that address data gaps identified by the SNWG. In response to the requirements identified by the SNWG 2018 cycle, the JPL OPERA (Observational Products for End-Users from Remote Sensing Analysis) project has been funded to develop and implement three products: (1) a near-Global Dynamic Surface Water Extent Product from optical and Synthetic Aperture Radar (SAR) data; (2) a near-Global Land Surface Change Product from optical data; and (3) a North America Land Surface Deformation Product from SAR data. The source of the optical data is the harmonized Landsat-8 and Sentinel-2A/B (HLS) satellite products. The source of the radar data comprises Sentinel-1 A/B, NISAR, and SWOT data products. In addition to the three output products identified by SNWG, two intermediate products will also be produced: (1) a North America Land coregistered Single Look Complex (CSLC) stack product for all interferometric radar data and (2) a near-Global land surface Radiometric Terrain Corrected (RTC) product derived from the SAR data. OPERA’s current scope of work provides operational funding until the end of FY 2025, with the various products delivered to and distributed by three NASA Distributed Active Archive Centers (DAACs).
In this presentation we will discuss the planned characteristics of all OPERA products. We will also provide information on processing and product calibration/validation activities, introduce the OPERA Stakeholder Engagement Program, and summarize the timeline of product development, cal/val, and release. Special focus will be put on the optical water and optical disturbance products aimed to be released to the community by March 2023 and September 2023, respectively.
The CODE-DE (Copernicus Data and Exploitation Platform – Germany) cloud is designed specifically to fulfil the usage demands of public authorities. This means, explicitly, a well-structured web presence with a low-complexity entry level, high usability and user friendliness. This involves data browse and view services, but also pre-implemented on-demand web-based processing chains that registered users can trigger simply by mouse click, such as Sentinel-1 interferometry or a biophysical processing flow for Sentinel-2 (Leaf Area Index, Fractional Vegetation Cover and FAPAR) from SNAP. Other ready-to-use data products are available for the user's convenience, such as monthly composites of Sentinel-1 and Sentinel-2 imagery. All Copernicus data for Germany are available locally on the servers in Frankfurt (Germany), with global data access via the CREODIAS data catalog and access to all Copernicus services and the Copernicus Contributing Missions (CCM).
CODE-DE is a hybrid cloud that allows web access to the data for viewing and download, but at the same time offers data processing facilities via virtual working environments and JupyterLab. Processing data locally in the cloud is a great benefit for a large number of CODE-DE users (“bringing the user to the data”), as a number of public authorities generally lack in-house computing power. The IT security standards ISO 27001 and C5 are met by CODE-DE as a prerequisite for this user group. The data services include two different data cube concepts, and lately a suite of Graphics Processing Units (GPUs) for Artificial Intelligence (AI) applications, deep learning and computer vision was implemented (Infrastructure-as-a-Service).
CODE-DE is free to use for national public authorities and is part of the national Copernicus strategy. The platform also aims to increase Copernicus user uptake, foster the use of Copernicus data and enable downstream services. The usage of the processing facilities is quota-based and also open to other users through ESA's Network of Resources (NoR) via cloud elasticity and dynamic resource allocation.
CGI has recognized that Oil Spill Detection and Monitoring is an essential service for the Oil and Gas industry, and has worked with ESA to develop an easy-to-use cloud-based near-real-time service, with the capability of big EO data analysis.
In recent years, the need for real-time incident satellite imagery has grown within the oil and gas industry. Following the Gulf of Mexico (Deepwater Horizon) oil spill, it became clear that access to near-real-time satellite data, tasked within hours of an incident and used to inform critical decision-making, could have far-reaching impacts for oil & gas operators in their response to oil spill incidents.
However, effective use of satellite imagery to help drive response decisions has faced a unique challenge in the time it takes from collecting the first image of an area of interest, to a revisit pass of the same area of interest (16 days in the case of Landsat). This was found to be exacerbated in locations that are closer to the equator than to the poles.
The solution is therefore access to more satellites, resulting in non-reliance on a single provider to meet standard needs. This demand for better access to mission-critical Earth Observation (EO) data has led to the ‘Expand Demand Oil and Gas’ project, created by CGI and funded by the European Space Agency (ESA).
This project, which started with ESA in 2018, has had two key high-level requirements:
1. To meet specific operational requirements of the oil and gas industry, established by a steering board of leading Oil and Gas companies;
2. To establish generic EO capabilities within the oil and gas industry and showcase the capabilities of the Sentinel satellite missions and the European EO service industry.
The benefit of this project to its end-users is a service delivering relevant near-real-time EO data. This is provided by a dedicated portal, where information relating to a specific spill incident is gathered and presented in a clear and systematic way to provide a common operating picture to the different stakeholders and support the decision-making process. The information provided covers:
a) A timeline of available products from a range of providers.
b) Predictions of future availability of products.
c) Actual products wherever possible.
d) Derived services such as oil spill extent mapping.
To meet this challenge, the Oil and Gas Industry Earth Observation Response Platform (OGEO-ReP) has been developed. This platform assists oil spill responses by gathering, processing and displaying a wide range of relevant EO data, including:
a) Satellite data products from a wide range of sources (free and commercial)
b) Predicted acquisitions relevant to the incident
c) Derived products, e.g. Oil spill extent mapping, processed as a hosted service
d) Contextual background information, such as asset locations
The platform solves a clear and present need by providing a one-stop shop for all EO data relevant to a spill incident. It provides access to satellite data products (including predictions of timelines for future acquisitions) from a range of providers via an online portal. This data can then be ordered from the platform as standalone images or with the attributed metadata.
The scalability of the platform allows it to process large amounts of data in a spill event, allowing for the inclusion of swath prediction (to identify potential acquisitions of interest), the mapping of a spill event, and running of oil spill drift models to forecast the behaviour of the spill. This is all presented via an intuitive graphical user interface (GUI) or via an API (based on OGC standards), for integration into customer business processes.
Framed in the improvement of our understanding of aboveground terrestrial carbon dynamics from EO data, the development of the Multi-Mission Algorithm and Analysis Platform (MAAP) aims to foster scientific research. The MAAP provides a framework to facilitate the exploitation, analysis, sharing and visualization of massive Earth Observation (EO) datasets and high-level products.
The MAAP is an ESA-funded project, built by Capgemini with Sistema and CGI Italy as sub-contractors.
The MAAP offers a common platform with computing capabilities co-located with data, as well as a set of tools and algorithms developed to support this specific field of research. In addition, the MAAP maximises the exploitation of EO data from different missions: ESA's BIOMASS and NASA's GEDI and NISAR. Supporting scientific research and collaboration, the MAAP addresses crucial issues related to increased data rates and reinforces open data policies.
The MAAP presents a set of functions to deal with EO science missions and to meet the requirements of their scientific communities. This platform:
• Facilitates the discovery, access, visualization and analysis of various sets of EO data, from both ESA and NASA, through a catalogue that offers centralized and standardized access to various EO datasets, such as ESA and NASA EO missions, in situ measurements or airborne campaign data, whether hosted on the platform or not. Data access is complemented by an advanced front-end for visualizing and analyzing 1D, 2D and 3D datasets.
• Provides a communal code/algorithm development platform with processing resources for algorithm developers and scientists working with ESA and NASA. The MAAP provides registered users with a complete cloud-native Eclipse/Jupyter environment with a GitLab code repository and continuous integration capabilities. Working on the MAAP, users are provided with cloud storage and computing resources, allowing rapid benchmarking of processing algorithms, as well as the creation of standardized and customizable development environments, making it easy to set up a set of workspaces for a dedicated event such as a training course. The MAAP cloud-native solution is scalable and cost-effective, adapting to the number of users.
• Offers a processing function dedicated to computing at scale, with a fully automated data processing framework for product generation able to deal with the processing of huge amounts of heterogeneous data. COPA, an open-source solution developed by Capgemini, is a generic platform allowing scientific or operational communities to easily integrate and run algorithm workflows with enough performance to carry out global and real-time studies. COPA manages sequential or parallel processing and orchestrates it in a distributed and scalable environment. Being an open and generic platform where processing chains can easily be exchanged, the flexibility of COPA allows public and private stakeholders to develop applications for a wide range of space observation topics: forest, biomass, agriculture, natural disasters, emissions, and more.
COPA integrates algorithms as Docker images, making it independent of the technologies used to implement the algorithms (see the sketch after this list).
• Federates the scientific community by fostering a spirit of resource and knowledge sharing on common themes, thanks to a set of collaborative tools such as a forum and a collaborative help section.
The MAAP will contribute to improving science by exposing the official processing algorithms to every user and making them fully transparent. Relying on open data policies, the MAAP enables and eases data and algorithm sharing between users of the platform from both agencies and with external users.
• Enables interoperability of data and services between ESA and NASA, relying on OGC standards and innovations, with an ongoing roadmap.
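The container-based integration pattern mentioned above can be illustrated with the Docker SDK for Python: the orchestrator needs only an image name, a command and mounted data paths, so the algorithm's implementation language is irrelevant. All names and paths below are hypothetical; this is not COPA's actual interface.

```python
# Illustration of container-based algorithm integration: the orchestrator
# needs the image name, a command and mounted data paths, nothing more.
# Image name, command and paths are hypothetical placeholders.
import docker

client = docker.from_env()
logs = client.containers.run(
    image="registry.example.org/biomass-retrieval:1.0",  # hypothetical image
    command=["--input", "/data/in/scene.tif", "--output", "/data/out"],
    volumes={"/srv/jobs/1234/in": {"bind": "/data/in", "mode": "ro"},
             "/srv/jobs/1234/out": {"bind": "/data/out", "mode": "rw"}},
    remove=True,      # clean up the container after the run
)
print(logs.decode())  # algorithm stdout, as captured by the orchestrator
```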
The MAAP platform is based on open-source libraries and components and hosts data and algorithms with open policies.
The main functions which compose the MAAP are:
• Algorithm Development Environment, which enables the scientific community to develop algorithms
• Catalogue for data discovery and access including visualization tool
• Processing platform for data processing at scale
• Collaboration and information sharing functions which deal with all aspects related to data, information sharing based on the access rights that could be defined at user or community level
• Monitoring of the platform's health, system resources, egress traffic, and user activity. The monitoring contributes to keeping a cost-effective approach for scalable ICT resources, capitalizing on economies of scale through infrastructure pooling.
These functions rely on a dedicated scalable cloud infrastructure and are complemented by others, such as interoperability of data and services between NASA and ESA users, and security, which guarantees that data and services are accessible only to authorized users.
The MAAP architecture, described below, fulfills the users' requirements.
The MAAP architecture is composed of the following different levels:
• Client side, or front-end, hosting the HMI (Human Machine Interface) and the data discovery and visualization system. This architectural layer oversees the presentation layer, giving access both to microservices deployed on the service layer and to the algorithm development platform based on Eclipse Che and Jupyter.
• Back-end, hosting the MAAP services, including the data access component. This architectural layer implements the MAAP microservices exposed as REST APIs and consumed by the presentation layer.
• IaaS (Infrastructure as a Service): this layer provides a standardized way to use IT resources for data storage and computing.
• Security: this level integrates the user management system, which manages access to the portlets according to the defined access rights, and the API management service.
• Governance: this level addresses tools and methods for MAAP operations including monitoring and supervision.
Based on open-source and cloud-native technologies, the MAAP is deployed on the Orange Public Cloud and could be deployed on any cloud infrastructure that provides a Kubernetes cluster. This choice enables platform auto-scaling.
The DYDAS project aimed at developing a collaborative platform offering data, algorithms, processing and analysis services to a large number of users from different public and private user communities. The platform acts as an e-marketplace enabling transactions for accessing data and added-value services enabled by HPC and based on Big Data technologies, machine learning, AI and advanced data analytics, with the purpose of matching demand and offer between those who own intellectual property rights on data or methods and those who need or want to exploit them.
In line with the objectives of the CEF 2018 work programme and the CEF-T-5 call, the project contributes to the European data infrastructure by improving the sharing and re-use of public and private data: by enabling the use of dynamic data sets such as Earth observation satellite data, in situ data from environmental monitoring networks and vehicle data; by promoting HPC-based R&D through an integrated research laboratory and a scientific knowledge and collaboration system; and by offering easy-to-use HPC-based services and tools, through specialised interfaces, designed to provide different user experiences to a wide range of users. A key and differentiating element of the project is the implementation of a geospatial data architecture connected with a dedicated data lake and an HPC processing framework. Through the adoption of a geospatial data model and interoperability rules, these components allow seamless integration and processing of extremely large data sets for innovative use modes. Furthermore, a large ensemble of dataset connectors is available to facilitate the machine-to-machine (M2M) acquisition of several datasets, such as Copernicus satellite data and Copernicus Services, as well as other kinds of satellite data. In addition, DYDAS promotes the sharing and re-use of public and private data in a secure environment and through innovative monetisation mechanisms. This collaborative platform acts as an e-marketplace for data access but, as added value, is equipped with HPC-enabled services based on Big Data technologies, machine learning, AI and advanced analytics. The project has tested the data analysis capabilities of the platform through the integration and operation of various use cases, whose relevant results will be presented.
With the ongoing proliferation of open-access and commercial satellite imagery, from both optical and synthetic aperture radar (SAR) sensors, the number of downstream applications is rapidly growing. Developers of Earth Observation (EO)-based products and services, as well as expert and non-expert users of such tools, thus need access to a cloud computing infrastructure offering interoperable analysis functionality. Here, we present the versatility of such a cloud-based infrastructure called WASDI. WASDI, the Web Advanced Space Development Interface, is an online platform where EO experts can develop and deploy applications (apps) and end-users can employ them to process satellite images on demand to generate value-added products, services, and solutions.
The idea is very simple: turn EO data into actionable information for as many end-user segments as possible, while leaving the end-user in control of execution. This is implemented in WASDI by integrating a robust online cloud computing infrastructure, interoperable machine-to-machine use, and scientifically complex EO algorithms with an easy-to-use online developer and user environment.
This setup is powerful since it allows EO experts and EO application users to be close to each other by using the same platform while operating in an environment of their own choosing. On the one hand, experts can develop an EO application using their development environment of choice and the programming language they prefer, control the cloud behavior with their code, and then simply drag and drop it onto WASDI to deploy it in the cloud for free, with the aim of scaling up using the offered cloud computing and marketplace services. On the other hand, end-users can use these technically sophisticated applications to transform EO data into actionable products, services, and solutions, with the click of a button and in a few very simple steps.
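For machine-to-machine use, a WASDI app can also be launched programmatically. The sketch below assumes the documented entry points of the wasdi Python package (init, openWorkspace, asynchExecuteProcessor, waitProcess); the workspace, application name and parameters are hypothetical placeholders.

```python
# Hedged sketch of machine-to-machine use of a WASDI app, assuming the
# wasdi Python package's documented init/openWorkspace/execute calls.
# Workspace, application name and parameters are hypothetical.
import wasdi

wasdi.init("./config.json")         # credentials and base URL from a config file
wasdi.openWorkspace("demo-floods")  # hypothetical workspace

# Launch a marketplace application in the cloud, next to the data.
process_id = wasdi.asynchExecuteProcessor(
    "flood_mapper",                                  # hypothetical app name
    {"bbox": [45.0, 10.0, 46.0, 11.0], "date": "2021-07-15"},
)
status = wasdi.waitProcess(process_id)  # block until the remote run finishes
print("remote run finished with status:", status)
```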
The WASDI marketplace is directly comparable to the very popular smartphone app stores and offers a growing body of free-to-use and paid applications, ranging from basic remote sensing indices, such as the Normalized Difference Vegetation Index (NDVI), to more complex applications, such as burnt area mapping or flood hazard mapping. The marketplace is being enriched with new applications to address as many end-user needs as possible, so that the true power of EO can be unlocked as quickly as possible and in a democratized manner.
INTRODUCTION
The digital revolution is expanding the added value of Earth Observation applications. This offers unprecedented opportunities to research, industry, and institutions to tackle diverse global and regional challenges.
Despite this, data preparation, processing, analysis, and visualization tasks are not free from challenges. These tasks require time, expertise, and IT resources to be managed. The EO community increasingly needs flexible EO solutions that spare users the complexity of building and maintaining their own data infrastructure. This would allow them to focus on quality research and achieve their research goals faster. In this regard, the ESA Research and Service Support (RSS) service [1] paved the way by successfully implementing the “bring user to data” paradigm as well as demonstrating the concepts of a Virtual Lab for Education [2], a Virtual Research Environment [3] and a Data Valorisation Environment [4].
To overcome the increasingly complex challenges emerging in the EO domain, we present EarthConsole® (www.earthconsole.eu), a cloud-based platform inspired by the strengths of the RSS model, whose objective is to facilitate the exploitation of EO data. To achieve this goal, EarthConsole® provides a unified solution to access data, develop and test algorithms, run scalable processing campaigns and analyze their results.
EARTHCONSOLE®
EarthConsole® is a set of three complementary support services: G-BOX (Integrated Algorithm Development and Execution Environment), P-PRO (Parallel Processing Service) and I-APP (Application Integration Service).
G-BOX offers a cloud-deployed virtual machine (VM) suitable for algorithm development and testing, based on templates for two Linux distributions. The VM allows fast access to the datasets offered by the Data and Information Access Services (DIAS), relieving users of the costly remote download of data.
The main goal of G-BOX is to provide EO data users with the needed resource flexibility to easily perform their own processing. Therefore, the cloud virtual machine can be accessed either via command line, through a remote desktop client, or via web browser to create and edit Jupyter Notebooks. The virtual machine comes with pre-installed packages and software supporting EO data exploitation to reduce the configuration burden on users: SNAP, QGIS, R, BRAT, and JupyterHub for quick data analysis and visualization. Additional software can be installed on request.
In addition, G-BOX offers a flexible amount of CPUs, RAM and dedicated storage tailored to users' requirements, which can be upgraded within the constraints of the cloud infrastructure. The VMs are also available in a multi-user mode to share code and data in a common development environment while keeping individual workspaces. The VM infrastructure also enables the configuration of tens of virtual machines with the same settings, which makes it an ideal solution for training purposes.
P-PRO enables the users to perform scalable processing campaigns on huge datasets. It offers a High-Performance Computing environment optimized for the execution of EO data-intensive applications, based on cloud computing and distributed systems technologies. Following a set of guidelines, custom EO algorithms can be integrated into the platform which will automatically parallelize and distribute the application batch processing operations among the computing cluster resources.
P-PRO relies on a centralized orchestrator, the parallelizer engine, to partition the application input data and operations into smaller tasks and to distribute them over a set of computing nodes where they will be executed in parallel. Once all the tasks have been completed, the orchestrator gathers the results of the parallel computation and makes them available to the user. The technology adopted for the Computing Cluster management is based on the SLURM Workload Manager [5], which is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters.
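The partition-and-distribute pattern behind P-PRO can be pictured with a plain SLURM job array submitted from Python. The wrapper script and file names below are hypothetical; P-PRO's parallelizer engine automates this step for integrated applications.

```python
# Toy illustration of the partition/distribute pattern behind P-PRO:
# split an input list into N chunks and submit them as one SLURM job array.
# The wrapper script and manifest name are hypothetical; P-PRO automates this.
import subprocess

N_TASKS = 10  # number of parallel chunks

# process_chunk.sh is assumed to read $SLURM_ARRAY_TASK_ID and pick its own
# chunk, e.g. line N of a manifest listing the products to process.
result = subprocess.run(
    ["sbatch", f"--array=0-{N_TASKS - 1}", "process_chunk.sh", "manifest.txt"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "Submitted batch job 4242"
```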
The cluster resources are managed by a Cloud Scaling Engine capable of automatically adapting the amount of computing resources based on the workload of the cluster. In this way the platform ensures that the parallelization effort is always matched with an adequate amount of computing power.
The computing cluster can be deployed on any OpenStack-based cloud platform. To lower the barrier towards EO data access and storage, P-PRO resources are deployed on the DIAS infrastructure, which provides access to their EO data catalogue locally from their cloud resources.
With the parallelization of the computation, the flexible amount of cloud resources, local access to EO data and the ease of integrating custom applications into the platform, P-PRO brings its users close to the optimized computing power required to execute their processing operations and to the features needed to speed up their work. The benefits are felt on both the user side and the operations side, since the automatic scalability of the cluster allows for optimal resource utilization, resulting in time and cost savings from the platform administration perspective as well [6].
Work is ongoing to finalize the implementation of the on-demand version of the P-PRO service. With this new flexible delivery model, users will be able to configure, launch and monitor small processing tasks through a user-friendly web interface. The P-PRO on-demand portal has already been opened to a selected group of beta testers and it is planned to be launched officially by the end of 2021.
The I-APP service offers expert support to users to integrate their own EO application or a third-party application into the P-PRO environment for processing. EarthConsole® operators will support users who do not have the necessary time, expertise, or IT knowledge to adapt their applications for integration into the EarthConsole® Parallel Processing (P-PRO) environment.
Currently, several processors are already available in the P-PRO Catalogue of Processors: SARvatore for CryoSat-2, SARvatore for Sentinel-3, SARINvatore for CryoSat-2, the ALES+ SAR Retracker (TUM), TUDaBO SAR-RDSAR (U. Bonn), Fully Focused SAR for CryoSat-2 (Aresys), Sentinel-1 Amplitude Change (SNAC), Coherence and Intensity change for Sentinel-1 (COIN), and Sen2Cor for Sentinel-2. Additional processors are being integrated and will be available shortly [7].
EarthConsole® is also available via the ESA EO Network of Resources (NoR) as a Platform Service: customised VMs (G-BOX) and ad-hoc processing services (P-PRO) are offered to scientific, educational, and pre-commercial users.
CONCLUSIONS AND FUTURE DEVELOPMENTS
The need for flexible EO data exploitation solutions able to reduce the burden of infrastructure management on users is a shared issue within the EO community nowadays. Progressive Systems has implemented EarthConsole®, a cloud-based platform characterized by a high degree of flexibility, high-speed access to EO data, strong computing power and ready-to-use tools for EO data analysis and visualization, enabling users to shorten, and in some cases eliminate, the time dedicated to data preparation, processing, visualization, and infrastructure management.
Feedback for the improvement of EarthConsole® is currently being collected from selected groups of stakeholders to deliver services which are more and more centered on users’ needs.
In the frame of the Quality Assurance Framework for EO project, a user group is currently assessing the validity of EarthConsole® to fully exploit its potential in response to the Cal/Val community's needs.
P-PRO ON DEMAND is in its beta testing phase with a group of stakeholders from the ESA altimetry community.
In addition, EarthConsole® operators are currently developing EarthConsole® Virtual Labs: virtual spaces designed around the needs of specific communities of EO data users, offering customized EarthConsole® services and tools to network and share information with colleagues working in the same domain, all from a single environment. The first of these labs will be dedicated to the altimetry community and is currently being developed under a programme of, and funded by, ESA.
The work presented is just the beginning of the evolution of EarthConsole®, whose services will be enriched over time based on customers’ emerging requirements.
REFERENCES
[1] P. G. Marchetti, G. Rivolta, S. D'Elia, J. Farres, G. Mason, N. Gobron (2012). “A Model for the Scientific Exploitation of Earth Observation Missions: The ESA Research and Service Support”, IEEE Geoscience and Remote Sensing (162): 10-18.
[2] F. S. Marzano, M. Montopoli, S. Leonardi, G. Rivolta (2016). “Data Science Master's Degree at Sapienza University of Rome: Tightening the Links to Space Science”. Oral presentation, BiDS'16, Santa Cruz de Tenerife.
[3] P. Sacramento, G. Rivolta, J. Van Bemmelen (2018). “ESA's Research and Service Support as a Virtual Research Environment for Heritage Mission data valorisation”. PV2018: Proceedings of the conference.
[4] P. Sacramento, G. Rivolta, J. Van Bemmelen (2019). “Towards a Heritage Mission Valorisation Environment”. Poster 26, BiDS'19, Munich.
[5] https://slurm.schedmd.com
[6] R. Pascale, R. Cuccu, G. Sabatino, G. Rivolta, M. Iesué (2020). “P-PRO – The EarthConsole Parallel Processing Service”. Poster presentation, ESA EO Φ-WEEK 2020.
[7] M. Iesué, C. Orrù, G. Sabatino, R. Pascale, R. Cuccu, G. Rivolta (2021). “EarthConsole: a cloud-based platform for earth observation data analysis”. Poster presentation, ESA EO Φ-WEEK 2021.
Cloud solutions for Earth Observation are gaining adoption, successfully exploiting the scaling potential of solutions based on standards such as those provided by the Open Geospatial Consortium (OGC). These solutions exist as standalone implementations and as federated architectures. Explored as federations, we now experience growing efficiency in integrating data products shared by various data providers. Maturing solutions not only streamline daily routines but also allow secure data exchange for new experiments and scenarios. With more and more technologies moving to the cloud, access, replication, and handling of geospatial data are experienced at a new level. Yet we are still at the beginning of this remarkable transition from heavy SOAP-based web services towards lightweight and cost-efficient modern service portfolios. Various research aspects need to be addressed to further enhance and establish interoperability within the growing landscape of data offerings and processing capacities. Collaborative research efforts are best suited to address these interoperability challenges, as they natively include a variety of players. The OGC Testbed series belongs to this type of research and development effort. Conducted yearly, these testbeds bring a broad community of geospatial experts together with sponsoring organizations to address current needs and to boost infrastructure developments that allow for excellent thematic research and business opportunities. OGC Testbed-17 is the latest Testbed; it addressed several essential research topics in the context of EO data downstream and processing standardisation, including data security, new data formats, and unified data cube APIs, which were effectively exercised to propose technological advancements.
Firstly, organizations have invested significant resources in Geospatial Data Cubes (GDCs): infrastructure that supports the storage and use of multidimensional geospatial data in a structured way. GDCs already address specific needs with the solutions built upon them. However, challenges remain in enabling broad access, limiting their ability to support widespread use. OGC Testbed-17 explored Web APIs that can support GDCs in a uniform, standardized way from multiple environments (e.g., other GDCs, platforms, various file systems). Support for discovery, access, sharing, and use of GDC data should enable workflows involving distributed computing resources and algorithms.
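For example, a uniform GDC Web API in the style of the emerging OGC API family might be exercised as follows; the host, collection identifier and query parameters are placeholders, as Testbed-17 exercised several concrete API variants.

```python
# Sketch of uniform, standards-based access to a Geospatial Data Cube through
# an OGC API style interface. Host, collection id and query parameters are
# placeholders; Testbed-17 exercised several concrete API variants.
import requests

BASE = "https://gdc.example.org"  # hypothetical GDC endpoint

# Discovery: list the collections (cubes) the server offers.
collections = requests.get(f"{BASE}/collections", timeout=30).json()
for c in collections.get("collections", []):
    print(c["id"], "-", c.get("title", ""))

# Access: request a spatio-temporal subset of one cube as a coverage.
resp = requests.get(
    f"{BASE}/collections/sentinel2-l2a/coverage",
    params={"bbox": "13.0,47.7,13.2,47.9",
            "datetime": "2021-06-01/2021-08-31"},
    timeout=120,
)
open("subset.tif", "wb").write(resp.content)  # output format depends on server
```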
Secondly, as organizations move to the cloud and data sharing between clouds and cloud services grows, it is essential to incorporate Data Centric Security (DCS) into the design of emerging cloud infrastructures. This enables the use of cloud computing even for sensitive geospatial data sets. The applicability of Zero Trust through a DCS approach was tested on vector and binary geospatial data sets (maps, tiles, GeoPackage containers) and OGC APIs. The results have shown the potential of the standards and their extensions.
To boost performance, reducing data transfer between processing hubs is essential. In this context, Testbed-17 explored the usage and value of Cloud Optimized GeoTIFF (COG) and Zarr for raster data. For remote sensing big data, slicing the data and serving chunks efficiently is key. For the planned OGC standardisation process, the work focused on the relevance of Zarr for EO data, taking into account other OGC standards, business value, and synergy with easy-to-use remote sensing data catalogues such as SpatioTemporal Asset Catalogs (STAC, https://stacspec.org/) and the OGC APIs (Features, Tiles, Maps).
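The practical effect of both formats can be shown in a few lines of Python: a COG serves windowed reads over HTTP range requests, and a Zarr store serves only the chunks a slice touches. The URLs and variable names below are placeholders.

```python
# Why COG and Zarr matter: clients fetch only the chunks they need instead of
# downloading whole products. URLs are placeholders; the COG is assumed to be
# in geographic coordinates so the window can be given in lon/lat.
import rasterio
from rasterio.windows import from_bounds
import xarray as xr

# COG: HTTP range requests fetch just the internal tiles covering the window.
with rasterio.open("https://example.org/scene_cog.tif") as src:
    window = from_bounds(13.0, 47.7, 13.2, 47.9, transform=src.transform)
    chunk = src.read(1, window=window)  # only these bytes travel the network
    print(chunk.shape)

# Zarr: the store is chunked by design; slicing reads only the touched chunks
# (HTTP access requires fsspec).
ds = xr.open_zarr("https://example.org/cube.zarr", consolidated=True)
monthly_mean = ds["reflectance"].sel(time="2021-07").mean(dim="time")
print(monthly_mean.shape)  # evaluated lazily from the required chunks only
```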
This presentation will provide an overview of key results from Testbed-17, show the paths towards cloud-native processing, and outline important experiences and lessons learned.
The International Charter Space and Major Disasters is a worldwide collaboration through which satellite data are made available for the benefit of disaster management. By combining Earth observation assets from different space agencies, the Charter allows resources and expertise to be coordinated for rapid response to major disaster situations, thereby helping civil protection authorities and the international humanitarian community.
Terradue was selected to design and operate a new online service to visualize and manipulate the satellite acquisitions at full resolution. After several months of development, a new portal named ESA Charter Mapper was officially opened in September 2021 to support Charter operations, in particular the product screening activities. Behind the portal, a cloud-native platform is deployed, integrating the latest state-of-the-art technologies for seamless visualization and manipulation of satellite imagery directly from a web browser.
Product screening requires the production of high-resolution RGB composites from a constellation of some forty satellites operated by fifteen space agencies, with many different metadata and data formats. This first challenge is tackled with metadata harmonization for all missions. During the development phase, a promising common metadata language dedicated to the SpatioTemporal Asset Catalog (STAC) was emerging, and the chance was taken to use it extensively. STAC provides an abstraction layer and reduces EO data heterogeneity, defining a synthetic interface to data that highlights the concept of assets supporting cloud media types (e.g. GeoTIFF, TIFF, binary, JPEG 2000) in single- or multi-band enclosures. The Charter Mapper development contributed directly to the new standard by providing useful extensions to manage processing lineage and raster information.
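For illustration, a harmonised acquisition can be described as a STAC Item with typed assets using the pystac library; the identifier, geometry and hrefs below are invented.

```python
# Minimal STAC Item describing one harmonised acquisition, with a COG asset.
# Identifier, geometry, properties and hrefs are invented for illustration.
from datetime import datetime, timezone
import pystac

item = pystac.Item(
    id="example-acquisition-20210915",
    geometry={"type": "Polygon",
              "coordinates": [[[5, 50], [6, 50], [6, 51], [5, 51], [5, 50]]]},
    bbox=[5.0, 50.0, 6.0, 51.0],
    datetime=datetime(2021, 9, 15, 10, 30, tzinfo=timezone.utc),
    properties={"platform": "example-sat", "gsd": 10},
)
item.add_asset(
    "B04",
    pystac.Asset(href="https://example.org/acq/B04.tif",
                 media_type=pystac.MediaType.COG,  # cloud-optimised GeoTIFF
                 roles=["data"]),
)
item.validate()  # checks against the STAC schema (needs the jsonschema extra)
print(item.to_dict()["assets"]["B04"]["type"])
```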
At this stage, multi-mission product screening is the main use case of the Charter Mapper and thus requires pre-processing of the satellite acquisitions into Analysis Ready Data, allowing homogeneous visualization and ready-to-process data across the different satellite imagery, as well as the usage of downstream and value-adding services such as flood or burned area delineation and intensity, active fire detection, lava flow identification, ground change detection, and interferometry. Each remote sensing scene is calibrated to a common processing level: optical datasets are transformed to top-of-atmosphere or surface reflectance values for each spectral band, and SAR datasets are converted to Sigma0 backscatter values in all available polarizations. The resulting rasters are then saved as Cloud Optimised GeoTIFFs (COGs), a format that eases remote access to data chunks. Combined with S3 object storage, COGs offer fully ranged data access, allowing COG-aware software to stream only the portion of data that it needs, improving processing times and enabling real-time workflows previously not possible.
Disasters are unfortunately unpredictable, and Charter activations occur according to natural hazards. Subsequent satellite acquisitions therefore flow into the system in a very fluctuating way, as does the usage of the service by project managers. All system components must be able to scale according to the processing load at every stage: download, harvesting, pre-processing and synthetic/systematic product generation. Every software unit is packaged as a container and deployed on a Kubernetes infrastructure managed by Helm charts and supervised by a continuous deployment tool. These combined technologies ensure reliable and efficient maintenance and operations of the whole system; software upgrades are performed without downtime and transparently for the user.
This project demonstrates the maturity of the previously listed technologies to implement a comprehensive cloud native platform with optimized data access and processing for an operational usage of satellite imagery. This is a fundamental basis for rapid response in the context of the Disasters Charter.
Abstract
Pursuing Copernicus data and service exploitation requires innovative application of ICT solutions to address the related “Big Data” issues involved in processing both EO and non-EO data (such as meteorological data and data stemming from social media), characterized by the four Vs of Big Data: volume, velocity, variety, veracity.
The last five years have seen a rapid succession of technologies supporting the development of cloud-based solutions, numerous “something-as-a-service” offerings, and architectures intended to guide and structure our approaches, with the goal of achieving consensus on how platforms are implemented.
In this paper we present the evolution of the use of state-of-the-art architectures and technologies applied to the Automated Service Builder (ASB) framework, an application-agnostic infrastructure-as-a-service for implementing complex processing chains over globally distributed processing and data resources, designed to meet the EO paradigm change (“bring the user/algorithm to the data”). The evolution of ASB has been driven by three key projects: the Proba-V Mission Exploitation Platform - Third Party Services (MEP-TPS) project, completed in 2019; the H2020 project EOPEN (Open interoperable platform for unified access and analysis of Earth Observation data, https://eopen-project.eu/), completed in 2020; and the ongoing ESA project EOEPCA (EO Exploitation Platform Common Architecture).
The ASB framework (https://www.spaceapplications.com/products/automated-service-builder-asb/) provides a set of functionalities needed to develop a service supporting the systematic execution of complex processes. It is fully Web-based, providing tools to import and containerize user-defined algorithms, graphically edit workflow definitions by integrating built-in and imported processes, execute workflows with user-defined parameters, and access the results in personal user datastores. Generic workflow tasks are available to ingest the data into various kinds of databases and services. The workflow editor uses state-of-the-art solutions such as customizable ontology-based validation of workflow definitions.
Generic and flexible orchestration
The platform's generic and flexible orchestration capabilities mean that workflow tasks are orchestrated by the built-in Workflow Engine independently of the location of the actual executable files and of the underlying programming languages and related technologies. Within a given workflow, each task may be deployed and executed on any of the platforms supported by ASB, selected by the user in the workflow editor.
This brings the convenience of executing specific processes on the platform where the data is located.
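To make the idea concrete, the sketch below shows a purely hypothetical workflow structure (not ASB's actual format) in which each task names its container image and target platform, so the engine can place execution where the data resides.

```python
# An illustrative, invented workflow structure: each task declares the
# container to run and the platform to run it on.
workflow = {
    "name": "ndvi-timeseries",
    "tasks": [
        {"id": "ingest",  "image": "acme/ingest:1.0",  "platform": "dias-a"},
        {"id": "compute", "image": "acme/ndvi:2.1",    "platform": "dias-a",
         "depends_on": ["ingest"]},
        {"id": "publish", "image": "acme/publish:1.3", "platform": "local-cluster",
         "depends_on": ["compute"]},
    ],
}

def execution_order(wf):
    """Resolve a simple topological order of the tasks (no cycle handling)."""
    done, order = set(), []
    while len(order) < len(wf["tasks"]):
        for t in wf["tasks"]:
            if t["id"] not in done and all(d in done for d in t.get("depends_on", [])):
                order.append(t)
                done.add(t["id"])
    return order

for task in execution_order(workflow):
    print(f"run {task['image']} on {task['platform']}")
```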
Collaborative working
In the MEP-TPS project, a need for collaborative working was introduced: the actions users may perform on their resources (including processes and workflows) are assigned through workspaces. A decentralised management concept was developed, allowing individual users to decide what they share and with whom. A shared process may be integrated by other users into their own workflows. A shared workflow may be selected and executed by users that have the appropriate role in one of the workflow's workspaces.
Scalable solutions
EOPEN required a scalable solution exploiting back offices that provide services to downstream service providers and consultancy organizations performing Big Data analytics. EOPEN (https://eopen-project.eu/) includes an easy-to-use environment to implement user services and capabilities to fuse Sentinel data with multiple, heterogeneous and Big Data sources; additionally, the involvement of mature ICT solutions in the Earth Observation sector addresses major challenges in effectively handling and disseminating Copernicus-related information to the wider user community.
Cloud native features
In EOEPCA, use of the common building blocks requires Cloud-native features and services such as S3 services, the dynamic allocation of processing resources, and the ability to execute user-defined functions in an environment controlled by the infrastructure provider. Such services are typically imagined and realized by one cloud provider; if their popularity increases and they become key features, variations appear in the competition. The ideal situation for customers is when the specification of a service is made public and adopted by other providers, as this makes it possible to build portable applications that are not locked into a single environment.
A feature supported by only a single cloud provider will not be added to ASB unless there is a strong need for it. For EOEPCA, ASB now uses S3 buckets to provide both personal and shared user datastores, because this technology has been standardised and widely adopted.
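Because the S3 API is a de facto standard, the same client code can target any compliant provider via its endpoint URL; a minimal sketch with boto3 follows, with the endpoint, bucket and key names as hypothetical placeholders.

```python
# A sketch of a user datastore backed by an S3-compatible bucket.
import boto3

# endpoint_url lets the same code run against non-AWS, S3-compatible stores.
s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")  # hypothetical

# Store a processing result in the user's personal datastore ...
s3.upload_file("result.tif", "user-datastore", "alice/results/result.tif")

# ... and list what is shared with the team.
for obj in s3.list_objects_v2(Bucket="shared-datastore", Prefix="team-a/").get("Contents", []):
    print(obj["Key"], obj["Size"])
```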
Container orchestration tools
ASB had not been designed to be deployed in a Kubernetes cluster and thus does not have this dependency. However, because Kubernetes is now widespread, support for it is being added to the framework as an alternative to the originally selected technologies based on Mesos and Marathon. To illustrate the decisions and trade-offs involved in adopting new technologies: since Kubernetes is not able to deploy and execute processes in remote environments, it cannot fully replace Mesos and Marathon.
The container orchestration tools (Mesos with Marathon, and now Kubernetes) allow ASB, and thus also ASB-based platforms such as EOPEN, to integrate seamlessly into any cloud-based environment, including the DIAS platforms. These tools are aware of the available compute nodes and are thus able to automatically balance the load when new process executions are requested. They are also capable of automatically discovering changes in the processing environments, thus providing scalability to the platform. For example, as soon as a new compute node is detected, it is added to the pool of available resources and becomes available for running subsequent processes.
Conclusion
A framework such as ASB designed to support the development of service platforms needs to stay abreast of technology developments. Existing and new technologies will continue to be investigated and assessed in order to keep the ASB framework at the forefront of state-of-the-art solutions.
A full demonstration of the ASB solution for EOPEN can be provided.
Keywords: Development Platform, Workflow Orchestration, Distributed Data Processing, Visual Analytics, Infrastructure-as-a-Service
Unleash the power of geospatial Data with OneAtlas.
Airbus is committed to supporting value creation using satellite-based EO Data and Analytics.
OneAtlas platform helps innovators get started faster by giving them quick and easy access to premium imagery in both streaming and download formats. It is designed to lower risk and encourage experimentation through an intuitive experience.
Access to the latest Pléiades and SPOT satellite imagery offers accurate, up-to-date ‘true’ views of any ground activity across the globe, helping researchers and developers gain verifiable insights and make better-informed decisions.
OneAtlas Data and Analytics has been developed for the future. Today, it takes advantage of the latest imagery and technology available, and for tomorrow, the roadmap in place incorporates groundbreaking engineering, like new satellites, high altitude pseudo-satellites and drones.
Are you ready to unleash the power of geospatial data?
Fresh water is an essential resource that requires close monitoring and a constant preservation effort. The evolution of hydrological bodies' water levels constitutes a key indicator of the available quantity of fresh water in a given region. The limited extent of the in-situ networks currently deployed has generated a growing interest in using spaceborne altimetry, originally designed to precisely track ocean elevations, as a complementary data source to increase the coverage of emerged fresh water stocks and ensure more global and continuous monitoring of their water surface height (WSH). That is why a great effort has been made over the past decade to improve altimeters' capability to acquire quality measurements over inland waters at global scale (Biancamaria et al. 2017).
The Open Loop Tracking Command (OLTC) mode, which consists in calibrating the altimeter signal acquisition window with prior information on the overflown hydrological surface height, represents a major evolution of the tracking function. The accuracy of the command directly determines the quality of the received waveforms. This tracking mode is so efficient that it is now the operational mode for the current Sentinel-3 (S3) and Jason-3 (J3) missions as well as the recently launched Sentinel-6A (S6A) mission.
Over continental surfaces, the commands are derived from a worldwide database of hydrological targets overflown by the altimeters. To ensure smooth signal acquisition, a 10 m command precision is needed. Therefore, target location and elevation data from hydrology users are key to improving the on-board elevation values and consequently optimizing altimeter performance over continental surfaces (Le Gac et al., 2019). The higher the number of precisely defined targets, the more global and efficient the monitoring of worldwide emerged fresh water stocks via altimeters will be.
In this context, ESA, CNES and NOVELTIS jointly developed the https://www.altimetry-hydro.eu/ web portal to further optimize the exploitation of altimeters for hydrology applications. This free online platform has three main goals: communicate on the OLTC capabilities, share its current contents with the hydrology community, and offer users a convenient way to submit improvement requests. The web portal lets visitors explore directly, smoothly and interactively the on-board tracking command elevation values of the operational Sentinel-3A&B and forthcoming Sentinel-3C&D missions. Two categories of users may access the platform: visitors and contributors. Visitors may display the current elevation commands and find relevant information on the OLTC. They can set the display configuration that best fits their needs: OLTC version for a given altimetry mission, vertical reference of the data (ellipsoid or geoid), and ground track (number and direction) characteristics. Figure 1 shows the multiple layers a visitor can activate (zoom over western France).
Users may choose to register on the platform and eventually become active contributors to the database by submitting their own target elevation and location information under the satellite ground tracks. These inputs may either update existing targets or create new ones. Contributors can choose to either submit their data interactively on the map or send a CSV file. Once their data is submitted, an email lets them know that their request was correctly processed and will be further analysed before operational integration in the next OLTC tables update.
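A hedged sketch of preparing such a contribution file in Python follows; the exact column layout expected by the portal is not specified in this abstract, so the field names below are hypothetical and should be checked against the portal documentation.

```python
# Prepare a CSV of candidate hydrological targets (hypothetical columns).
import csv

targets = [
    # name, latitude (deg), longitude (deg), water surface height (m)
    ("Lake Example", 46.512, 6.589, 372.1),
    ("River Example", 44.837, -0.579, 4.8),
]

with open("oltc_targets.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "lat", "lon", "wsh_m"])
    writer.writerows(targets)
```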
The objective of this service is to reach the largest possible audience in order to collect accurate data from worldwide users working on different hydrological bodies. Since its creation in February 2018, more than 750 visitors and 80 contributors have registered, and the number of registrations has been increasing rapidly in the past few months. Contributions are still scarce, but updating an existing target or requesting the creation of a new one significantly impacts the altimeters' acquisitions and contributes to the altimetry products value chain for continental hydrology, which ultimately ensures better management of fresh water stocks.
Finally, this presentation will include a demo of the website and we will present some of the future evolutions expected for this web service.
References:
Biancamaria, S., F. Frappart, A.-S. Leleu, V. Marieu, D. Blumstein, J.-D. Desjonquères, F. Boy, A. Sottolichio, A. Valle-Levinson, Satellite radar altimetry water elevation performance over a 200m wide river: Evaluation over the Garonne river, Adv. Sp. Res. (59), 128–146, January 2017. https://doi.org/10.1016/j.asr.2016.10.008
Le Gac, S., et al., Benefits of the Open-Loop Tracking Command (OLTC): Extending conventional nadir altimetry to inland waters monitoring, Advances in Space Research, 2019, In Press, https://doi.org/10.1016/j.asr.2019.10.031
Australia’s national science and geoscience agencies, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) and Geoscience Australia, are supporting the growth and implementation of Earth observation-based products and services in South-East Asia.
The ‘Earth Observation for Climate Smart Innovation’ initiative (EOCSI) has built a new regional Earth observation analysis platform powered by CSIRO’s Earth Analytics Science and Innovation hub and Open Data Cube technology. The platform leverages the wealth of open-access Earth observation data and Amazon Web Services Singapore cloud infrastructure. Access is via Jupyter notebooks, with users having their own workspace where they can develop, save and share notebooks and create their own computing clusters with Dask (a Python library for distributed processing) for scalable, flexible and cost-effective analysis, from local to regional scales. Users can easily access pre-indexed data, allowing them to focus on the analysis instead of sourcing and assembling Earth observation data (semi-)manually. The platform is also pre-loaded with data applications developed by Australian scientists and tailored for South-East Asian environments, including, for instance, inland and coastal water quality assessments, land cover classification, and water body mapping. These applications build upon the tools available via Geoscience Australia’s Digital Earth Australia platform.
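As a hedged illustration of this Dask-backed access pattern, the following sketch uses the Open Data Cube Python API with a lazy, chunked load; the product name, band names and extents are hypothetical placeholders rather than EOCSI's actual catalogue.

```python
# A minimal sketch of a lazy, Dask-backed load with the Open Data Cube API.
import datacube

dc = datacube.Datacube(app="eocsi-example")
ds = dc.load(
    product="s2_l2a",                      # hypothetical indexed product
    x=(103.6, 104.1), y=(1.2, 1.5),        # lon/lat bounding box
    time=("2021-01-01", "2021-12-31"),
    output_crs="EPSG:32648",
    resolution=(-10, 10),
    dask_chunks={"time": 1, "x": 2048, "y": 2048},  # deferred, chunked compute
)

# NDVI computed lazily over the whole year, executed by the Dask cluster;
# band names depend on the product definition and are assumptions here.
ndvi = (ds.nir - ds.red) / (ds.nir + ds.red)
ndvi_mean = ndvi.mean(dim="time").compute()
```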
Regional Earth observation collaboration allows us to share infrastructure, data, knowledge, expertise and ideas to address shared challenges. This new platform is being used to engage local government, business and education institutions throughout South-East Asia to take advantage of Earth observation for the development of ‘climate smart’ applications. Through training and business opportunities, we are building new and closer relationships between Australian Earth observation practitioners and South-East Asian counterparts to strengthen regional science relations, support climate resilience, and promote sustainable growth and development.
This presentation will showcase the platform, early adopters and their case studies, plus provide information on how others can engage with this initiative.
Geo Engine is a cloud-ready geospatial analysis platform that provides easy access to geospatial data, processing, interfaces, and visualization. Users can perform interactive analyses in Geo Engine. For this, they access the system in a browser-based user interface as well as with Jupyter notebooks in Python. In this presentation, we will show the fundamentals of the system and its characteristics. We illustrate this with examples from previous scientific projects. In addition, we offer an outlook on the integration of Deep Learning into the Geo Engine that utilizes its preprocessing capabilities for remote sensing data and show possible applications for a wide range of projects.
Geo Engine GmbH is a start-up of computer science and geography researchers at the University of Marburg. They develop Geo Engine as an open-source project that is well-suited for research projects. In addition, several advanced features are provided under a commercial license. Geo Engine is used in various research and infrastructure projects like DFG’s NFDI4BioDiversity, and is already running in production in commercial applications.
The recent hype around data cubes has one underlying goal: harmonized access to remote sensing data of various kinds and from various sources. Researchers want to incorporate multiple datasets in their analyses because such combinations give a broader view of real-world phenomena and thus lead to new insights that would otherwise not be revealed. The major challenge here is that different data formats, sensor resolutions, coordinate reference systems, and data types make it hard to focus on the actual task without re-engineering data access and data harmonization. Moreover, those engineering problems arise again for every single task. Data cubes solve this by converting all data into a harmonized format and resolution, similar to creating data marts in the domain of data warehousing. However, the downside of this approach is that the created format is only well-suited to a specific set of use cases. For instance, a cube specified with a 10 m resolution would degrade data from new sensors with a 1 m resolution, and new use case scenarios could not benefit from those sensors without changing the cube.
Our approach is to harmonize the data ad-hoc within the analysis workflow rather than upfront. The advantage is that we can address data in their original form, i.e. without losing information or precision. At the same time, Geo Engine automatically harmonizes multiple data sources if required. Moreover, users work with temporal datasets rather than files. In more detail, temporal datasets are an abstraction of the individual files and their location. Geo Engine takes care of loading the correct data for individual points in time and space. Thus, users define workflows by incorporating geospatial time series. For instance, when querying an infrared satellite imagery band in 100 m resolution in North America in 2019, the system knows which files to load or which S3 bucket to connect to. Data harmonization takes place whenever multiple datasets are included in a workflow. A possible downside of this approach is that preparing and indexing data combinations beforehand usually exhibits better performance afterward. We tackle this by employing caching strategies and reusing partial computation results. In practice, this provides performance similar to preprocessed data for subsequent queries or interactive workflows while offering much more flexibility.
Geo Engine provides more than pure data access. First, we implemented a data provider API that allows adding data from either local or remote sources. Second, we provide an extensible processing toolbox that, for instance, contains means for filtering and data combination. Third, Geo Engine supports parametrized and reusable workflows. Once a workflow is defined, it can be reused for different spatial regions or different points in time. In more detail, queries to a workflow define a spatial as well as a temporal extent as a first-class citizen. Workflows are suited for short- and long-running tasks alike and are defined independently of the data size. For realizing this, Geo Engine employs asynchronous and chunk-based computations, e.g., on a tile basis for raster data, to process small as well as huge datasets inside workflows. This, for instance, makes AI workflows possible where the data flows from preprocessing into model training. The resulting machine learning models are afterwards reused as Geo Engine operators. Finally, all this functionality can be accessed via a UI for exploratory, visual data handling or via Python for fine-grained, programmatic control. By providing the described functionality via OGC-compliant interfaces, Geo Engine is a well-suited service for operating as part of any geo-related process.
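The chunk-based execution model can be illustrated generically; the following sketch is not Geo Engine's internal API, but uses Dask to apply the same per-tile function regardless of raster size, computing only the chunks a query touches.

```python
# An illustrative sketch of chunk-based raster processing with Dask.
import dask.array as da
import numpy as np

# A large raster, represented lazily as 1000x1000 tiles (~3 GB if realized).
raster = da.random.random((20_000, 20_000), chunks=(1_000, 1_000))

def per_tile(tile: np.ndarray) -> np.ndarray:
    """Toy per-tile operation, e.g. a threshold mask."""
    return (tile > 0.5).astype(np.uint8)

mask = raster.map_blocks(per_tile, dtype=np.uint8)

# Querying a small spatial extent only materializes the chunks it touches.
print(mask[:256, :256].sum().compute())
```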
Datacubes are acknowledged as a cornerstone for analysis-ready data. Following the pioneering work of the rasdaman team in coining actionable datacubes, a series of epigones is emerging, with varying degrees of functionality, performance and standards support, as reviews like [7] and [8] show in detail. However, each of these gives access to only a single service, whereas many services with different offerings are available; hence, accessing them, e.g. for combining data from different services, is again the burden of the user, requiring download, homogenization, and effectively local Big Data processing.
The EarthServer initiative is working towards the vision of a single integrated, homogenized, location-transparent datacube pool. In analogy to the term “server-less”, such a federation might be called “data-center-less”, as users no longer need to know the concrete data location.
Meantime, EarthServer has established the first such datacube federation of Earth data centers [5]. Users are provided with a single, uniform information space acting like a local data holding, thereby establishing full location transparency. Underneath, EarthServer uses Array DBMS technology for its datacube services. We present the federation and show a broad range of real-life distributed data fusion examples.
• All raster data uniformly appear as ISO/OGC/INSPIRE coverages [4], regardless of their storage representation.
• All datacubes are uniformly offered through the OGC/INSPIRE Web Coverage Service (WCS) suite, including the Web Coverage Processing Service (WCPS) datacube analytics language [2] (see the query sketch after this list). These services can be extended server-side with arbitrary custom code for bespoke functionality.
• All datacubes can be combined in any data fusion, regardless of the participating datacubes' locations; effectively, this establishes transparent distributed data fusion.
• Functionality is available through a wide spectrum of clients, ranging from zero-coding point-and-click clients and WCPS queries to Python and R access. For example, processing results can be returned directly as Python xarray or NumPy arrays. Likewise, OpenLayers, Leaflet, NASA WebWorldWind, Cesium, QGIS, and others are supported.
• There is no single point of failure in the federation; it is strictly peer-to-peer.
• The service infrastructure allows a seamless offering of both free and paid data and services, thereby integrating public data like DIASs with commercial offerings.
• Maintenance and continuous update of datacubes is done administrator-less, allowing data centers to join the service without assigning staff resources.
• Fine-grain access control allows data centers to decide what data to offer, and to whom. Not only can complete datacubes be protected this way, but also regions within a datacube down to single-pixel granularity.
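As referenced in the list above, a WCPS query can be sent to a federation endpoint with any HTTP client. The sketch below follows the conventions of the public rasdaman demo service; the endpoint URL and coverage name are assumptions that will differ per federation member.

```python
# A sketch of a WCPS query against a rasdaman-backed WCS endpoint.
import requests

endpoint = "https://ows.rasdaman.org/rasdaman/ows"  # assumed demo endpoint
query = """
for $c in (AvgLandTemp)
return encode($c[Lat(53.08), Long(8.80), ansi("2014-01":"2014-12")], "csv")
"""

resp = requests.get(endpoint, params={
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": query,
})
resp.raise_for_status()
print(resp.text)  # twelve monthly temperature values for the given point
```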
The large, continuously growing EarthServer federation is boosted by rasdaman, the pioneer datacube engine [1] and de-facto gold standard for datacube services. For example, in 2019 the rasdaman query language was adopted in the SQL datacube extension [3]. Implemented in C++, it offers particular performance and scalability, including distributed query processing. Metadata can be added, maintained, and retrieved freely, in particular INSPIRE metadata. In fact, rasdaman constitutes the acknowledged INSPIRE Coverages Good Practice [6]. This effectively integrates Copernicus and INSPIRE data seamlessly. Its concept of Virtual Coverages allows users to see single datacubes even where the underlying data are heterogeneous; an example is the Sentinel data coming in different UTM zones: datacube queries access a single virtual coverage, with the server internally performing all necessary mapping to the base data and their coordinate systems.
Ingested data can be stored in a number of formats through an ETL layer based on the OGC WCS-T standard, which homogenizes data and metadata, provides defaults, and applies the target tiling strategy. Further tuning parameters include compression, indexing, cache sizing, etc. The resulting OGC-compliant coverages represent analysis-ready space-time EO objects.
As of today, EarthServer offers a critical mass of dozens of petabytes of multi-dimensional raster data, including 2-D DEMs, 3-D satellite image timeseries, and 4-D atmospheric data. Members include several DIAS European Copernicus archives, leading supercomputing research centers, as well as a series of specialized services offering high-level marine, land use, and atmospheric products. All these data are accessible with zero coding, in particular without the need to know Python, and strictly standards compliant.
Aside from continuously advancing rasdaman technically, an aggressive growth of the EarthServer federation is ongoing; a line-up of datacenters has expressed interest, and the charter for governance is being finalized.
ACKNOWLEDGEMENT
Research supported by EU EarthServer-1/-2, LandSupport, CopHub.AC, PARSEC, CENTURION.
REFERENCES
[1] P. Baumann: Language Support for Raster Image Manipulation in Databases. Intl. Workshop on Graphics Modeling, Visualization in Science & Technology, Darmstadt, Germany, 1992, pp. 236–245.
[2] P. Baumann: The OGC Web Coverage Processing Service (WCPS) Standard. Geoinformatica, 14(4), 2010, pp. 447–479.
[3] ISO: 9075-15:2019 SQL/MDA (Multi-Dimensional Arrays). https://www.iso.org/standard/67382.html
[4] OGC: OGC Spatio-Temporal Coverage / Datacube Standards. http://myogc.org/go/coveragesDWG
[5] n.n.: The EarthServer Datacube Federation. https://earthserver.eu
[6] n.n.: INSPIRE Coverage Good Practice. https://inspire-wcs.eu
[7] P. Baumann, D. Misev, V. Merticariu, B.H. Pham: Array databases: concepts, standards, implementations. Springer Journal of Big Data, 8, 28 (2021). https://doi.org/10.1186/s40537-020-00399-2
[8] H. Kristen: Comparison of Rasdaman CE & AGDCv2. https://gitlab.inf.unibz.it/SInCohMap/datacubes/-/blob/master/datacube_comparison/datacube_comparison.md
The success of tools like Google Earth Engine demonstrates the power of readily available data. Unlike the `traditional' route of manual scene selection, downloading and pre-processing, such services have all data directly available for the user to work with. In addition, the data archive and computational facilities are conveniently co-located and taken care of. This combination was revolutionary: (i) direct access to global, full time series of satellite remote sensing data; (ii) co-location of data and computational resources; and (iii) fast large-scale analysis via an image pyramid.
However, the use of external tools is not always favourable, for example in an educational context where novice users are introduced to satellite data analysis in a simplified environment. Furthermore, there may be legal constraints on the data or algorithms used, and external tools may not support non-standard variables, such as complex radar imagery, local coordinate systems, or regional analysis at full resolution. In these scenarios, a custom data cube can be a welcome alternative.
Like the aforementioned services, data cubes provide users with standardised access to vast amounts of data and are well suited to the spatial-temporal properties of satellite remote sensing data. They offer seamless access to the data in space and time. Time series analysis in particular is simplified, as the time dimension was previously typically split across different data products. However, pre-processing is required to generate a data cube from the individual products downloaded from a space agency like ESA.
We developed a simple yet effective data cube generation script for Sentinel-2 imagery, based on Python and the Zarr storage format. Generation of the data cube is a three-step process. First, the imagery of the selected granule (tile) and orbit is downloaded in bulk. Second, atmospheric corrections are applied via `sen2cor' where necessary. Third, the imagery is reprojected to the desired coordinate reference system and the cube is filled or updated.
We focused on usability under standard office conditions in educational or development settings, rather than on factors relevant to production systems such as efficient storage or bandwidth cost. These data cubes should fit into storage structures typically found in office environments, and should not require complex (cloud) computing infrastructure, but may still be published on any simple web server. We demonstrate our concept on Sentinel-2 data over the Netherlands. Our data cube covers 300×340 km in 400 time steps with all bands at full (10m) resolution and occupies around 4 TB of storage per orbit for all acquisitions till the end of 2021. The data cube is publicly available (https://geotiles.nl), but may also be used offline as it fits on any larger external hard drive.
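For illustration, such a published cube can be consumed with xarray; the store path and variable name below are hypothetical (see https://geotiles.nl for the actual layout), and reading over HTTP assumes fsspec and aiohttp are installed.

```python
# A hedged sketch of reading a published Zarr cube with xarray.
import xarray as xr

ds = xr.open_zarr("https://geotiles.nl/cubes/31UFT.zarr")  # hypothetical path

# Full 10 m resolution time series for a single location, no file juggling;
# "B04" is an assumed band/variable name.
pixel = ds["B04"].sel(x=155_000, y=463_000, method="nearest")
print(pixel.to_series().head())
```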
The authors would like to acknowledge the support of the Netherlands Centre for Geodesy and Geoinformatics (NCG) via a NCG Talent Program grant.
A semantic Earth observation (EO) data cube refers to a spatio-temporal EO data cube where, for each observation, at least one nominal (i.e. categorical) interpretation is available and can be queried in the same instance (Augustin et al., 2019). Until now, Advanced Very High Resolution Radiometer (AVHRR) imagery and derived information products have only been accessible via file-based access, requiring a significant time investment and expert knowledge to find relevant data for analysis. The 2-year project SemantiX has implemented a semantic EO data cube using AVHRR imagery, Copernicus Sentinel-3 imagery and derived information, complementing and expanding the heritage AVHRR time-series. The project is a collaboration between academia and the private sector with the Austrian companies Spatial Services and SPOTTERON. The geographic focus of this prototypical implementation is the European Alpine region (ca. COSMO-1 extent), the AVHRR time-series spans ~40 years from both NOAA and Metop satellites, and all imagery is calibrated to top-of-atmosphere reflectance. Curated analysis results (e.g. maps, time-series curves, single values) based on this implementation are integrated into the existing citizen science application, Naturkalender, by the Viennese company SPOTTERON, opening insights to an already engaged and interested public audience. To the best of our knowledge, this work has established the first EO data cube based on semantically-enriched AVHRR and Sentinel-3 imagery and is able to share these archives and derived information beyond the scientific domain (https://www.semantixcube.net).
Two categories of information are derived from AVHRR and Sentinel-3 imagery and provided in the semantic EO data cube implementation: three essential climate variables (ECVs) and a sub-symbolic semantic enrichment. ECVs critically contribute to characterising the state, interactions and development of Earth's climate system. Remote sensing scientists at the University of Bern derived vegetation dynamics using NDVI, snow cover extent and lake surface water temperature, and integrated them into the data cube, resulting in three climate-relevant time-series. In addition to the ECVs, automated semantic enrichment has been applied to all imagery, resulting in generic, pixel-based spectral categories. These multi-spectral “colors” (i.e. stable, sensor-independent regions of a multi-spectral feature space) are not land cover classes, but can be considered a property of an object or land cover type. Paired with the temporal analysis that data cubes make possible, this generic semantic enrichment can be used in a convergence-of-evidence approach as the basis for building a diversity of land cover classes, because the categories are independent of any defined ontology, application or sensor.
Multiple technologies and research developments were leveraged in order to build a semantic EO data cube using AVHRR and Sentinel-3 imagery. The semantic EO data cube implementation is based on a dockerised architecture, uses the Open Data Cube software, and serves as a single access point with several interfaces to facilitate various services. An existing Web-based front-end developed in a previous project, Sen2Cube.at, provides a GUI-based interface for semantic querying and analysis geared towards non-expert EO users. Jupyter notebook instances provide an interactive programmatic interface for analysis. The company SPOTTERON utilises their own citizen science application framework for showcasing climate-relevant results over ~40 years to give historical context to the observations app users are recording, particularly related to vegetation dynamics and snow cover extent.
This contribution stems from the 2-year research project SemantiX, which hopes to help close the gap between scientific inquiry into one of the longest imagery time-series of Europe and public understanding of the information it contains about our climate.
EUMETSAT is charged with supporting users in climate services, academia, and elsewhere, including the provision of information and training, and with operating and managing its Data Centre (the historical archive of EUMETSAT's satellite data).
EUMETSAT and the network of Satellite Application Facilities (SAFs) provide time series of satellite-derived geophysical variables relevant for atmospheric monitoring. However, currently the data formats and dissemination of these data are not homogeneous, which probably presents a barrier for potential users.
A prototype Data Cube containing time series of several geophysical variables in a homogeneous format could help reduce such barriers and explore the interest in analysis-ready data cubes among EUMETSAT users. Tools to generate analysis-ready data are also at the heart of the user needs identified within the European Copernicus programme.
We present a prototype, with a live demonstration, of the generation of a Data Cube addressing satellite datasets for air quality and atmospheric composition applications, with 15 products from 4 missions and 5 sensors. Input datasets are served by different providers, and the demonstration focuses on processing the longest possible data series with daily to monthly resolution.
The solution for the generation of the Data Cube enables users to select only the geographic region and bands of interest, and retrieves the products on-the-fly from the providers to generate the Cube as a single NetCDF4 file conformant to a common data model. The principle is inspired by the Earth Observation “Data Cube on Demand” (Giuliani et al. 2019). The solution is intended to handle the rapid evolution of the datasets composing the Cube, all of which are updated frequently (e.g. daily to monthly) by providers. Moreover, a stepwise growth of data volume is expected with the release of new datasets and products.
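A minimal sketch of the on-demand principle (not the Data Tailor API itself): subset previously retrieved products to the requested region and bands, then write a single NetCDF4 file. File names, variable names and the region are hypothetical.

```python
# Illustrative on-demand cube assembly with xarray; inputs are assumed to
# have been retrieved from the providers beforehand.
import xarray as xr

sources = ["no2_monthly.nc", "aod_monthly.nc"]  # hypothetical product files
region = {"lat": slice(30, 60), "lon": slice(-10, 30)}  # assumes ascending lat
bands = ["no2_tropospheric_column", "aerosol_optical_depth"]

subsets = []
for path in sources:
    ds = xr.open_dataset(path)
    # Keep only the requested variables present in this product.
    subsets.append(ds[[v for v in bands if v in ds]].sel(**region))

cube = xr.merge(subsets)  # align on the common lat/lon/time grid
cube.to_netcdf("air_quality_cube.nc", format="NETCDF4")
```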
All software used to generate the Data Cube on demand is part of the EUMETSAT Data Tailor, and is delivered both as software with the Data Tailor to be deployed on users' personal platforms and as a demonstrator on cloud-based solutions and virtual machines.
The Earth is undergoing changes that humans have never seen before. At the same time, open-source software is making data availability and analysis accessible to an increasingly large audience, including researchers and citizen scientists. The Open Data Cube (ODC) is an open-source software platform designed to be highly adaptable to user needs in a variety of scenarios. This paper investigates the use of ODC as a tool for managing data and analyzing large-scale phenomena at multiple resolutions, ranging from space-based to microscopic. We present an open-source architecture for multiple-scale data analysis and examine the use case of investigating Harmful Algal Blooms (HABs). Specifically, we present an architecture designed to handle data at different scales: Earth Observation data from satellites (Landsat 8 and Sentinel 2), high-resolution data from Unmanned Aerial Vehicle (UAS) systems, Internet of Things (IoT) data from ground-based environmental sensors and water-deployable buoys, as well as data from buoy-mounted high-throughput microscopy systems designed to image and identify the individual algal cells.
Data at scale has been an increasingly discussed and utilized method for calibrating remote sensor data, performing data fusion for increased spatial and temporal resolution, and enabling automated data collection, processing and interpretation. The Open Data Cube initiative community has been seeking such resources and the work presented seeks to establish an open source pipeline for processing data at multiple scales.
The efforts presented in this work encompass data collected from a range of sensors deployed by the authors. For ground-based measurements, IoT sensor systems crafted in-house include land-based measurements of barometric pressure, temperature, and humidity, as well as water-based buoys that integrate a range of sensors measuring incident solar activity, water temperature, GPS location, turbidity and chlorophyll fluorescence, plus an automated onboard microfluidic microscope for counting and classifying plankton. Data is also collected by RGB-camera-equipped UASs, which are deployed on an as-needed basis. This data can be fused with available satellite data from platforms such as Landsat 8 and Sentinel 2, as well as higher-resolution satellite providers.
We also present ODC-based software tools to enable the indexing of imagery and correlation of geospatially tagged data from both satellite and UAS sources. Specifically, we demonstrate a containerized server for storing geospatially tagged environmental data that can be queried by the ODC, as well as open-source reference designs for hardware that collects ground-based environmental parameters.
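A purely hypothetical sketch of how a field sensor might push a geotagged measurement to such a containerized server over REST; the endpoint and JSON schema are invented for illustration and do not describe the actual server interface.

```python
# Push one geotagged buoy measurement to a (hypothetical) REST endpoint.
import requests

measurement = {
    "sensor_id": "buoy-07",
    "lat": 43.605, "lon": -70.245,
    "timestamp": "2022-05-01T12:00:00Z",
    "water_temp_c": 14.2,
    "chlorophyll_fluorescence": 1.8,
    "turbidity_ntu": 3.4,
}

resp = requests.post("http://localhost:8080/api/measurements", json=measurement)
resp.raise_for_status()
```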
Coastal areas are becoming increasingly vulnerable due to economic overexploitation and pollution. The Italian Space Agency (ASI) supports the research and development of technologies aimed at the use of multi-mission EO data, in particular of the national COSMO-SkyMed Synthetic Aperture Radar and PRISMA hyperspectral missions, as well as the Copernicus Sentinels, through the development of algorithms and processing methodologies in order to generate products and services for coastal risk management.
In this context, ASI has promoted the development of the thematic platform costeLAB as a tool dedicated to the monitoring, management and study of coastal areas (sea and land). The platform was developed in the frame of the “Progetto Premiale Rischi Naturali Indotti dalle Attività Umana - COSTE”, n. 2017-I-E.0 (https://www.costelab.it/en/homepage-en/), funded by the Italian Ministry of University and Research (MUR), coordinated by ASI and developed by e-GEOS and Planetek Italia with the participation of the National Research Council of Italy (CNR), Meteorological Environmental Earth Observation (MEEO) and Geophysical Applications Processing (G.A.P.) s.r.l. The aim of the project was to define, develop and run, in a pre-operational context, an integrated system that exploits Earth Observation data to support the management of coastal area environmental processes and risks. The platform is addressed to institutional, scientific and industrial users and allows the study, experimentation and demonstration of new downstream pre-operational services for monitoring the coastal environment.
To address the main scope of the ESA Living Planet Session “C5.05 Earth System & EO Data Cube Services and Tools for Scientific Exploitation”, in this paper we focus on the Researcher User, and how the costeLAB platform and its collaborative virtual environment allow scientific exploitation of EO data for coastal studies and downstream applications, and the scientific output to be maximized on real use cases.
The costeLAB platform provides a common entry point for several web-based EO data processing services in the field of coastal zone monitoring and emergency management, to generate and visualize products by means of consolidated algorithms that users can apply in their operational tasks. In addition, costeLAB embeds a “collaborative virtual laboratory” (the “Virtual Lab”) for researchers and developers to share, test and demonstrate innovative algorithms in order to build new processing chains.
With regard to the “entry-point” function, the platform relies on an architecture that is based, as much as possible, on free-of-charge and open source solutions and standard protocols (Liferay portal CE 6.x, APEREO CAS 5.x, JupyterLab 1.x, YAWL, Geonetwork /Geoserver, Python, Java Spring Boot; Pellegrino et al., 2021), integrates a large set of processors and algorithms, and allows sourcing of multi-mission and multi-sensor EO data (ESA Sentinels, ASI’s COSMO-SkyMed and, in future, PRISMA) from image catalogues. The rationale is to “keep applications close to the data”, i.e. allowing users to access huge amount of EO data relieving them of demanding tasks for big data download and processing in local computers. Users are therefore able to generate reliable products by means of validated algorithms with reduced processing times. An exhaustive overview of the costeLAB consolidated products is provided by Candela et al. (2021) and at the project website (https://www.costelab.it/en/products/). Acting as the only interface for a wide spectrum of data source and products, costeLAB enables the integration of different processing routines and computing technologies, and aims to maximize the cost-benefit ratio through scalable cloud systems.
The user can interact with the platform in several ways. Through the product request interface and under several operational scenarios (see Pellegrino et al., 2021, for further details), the expert user may access the list of processors available in costeLAB and select the desired product for generation, the operational scenario and the input parameters. Upon completion of the generation process, the product is added to the catalogue and made available to users for reference and analysis. Once the product is in the catalogue, it can be searched and displayed at any time by the authorized user. This means that any product generated in costeLAB can also be accessed by “Researcher Users” with skills in data processing, algorithm and product development, who exploit the platform to test their own algorithms and code.
This is indeed one of the main functionalities of the other facility provided by the platform for scientific exploitation, i.e. the “Virtual Lab”. This virtual environment exploits Docker containers, offers a web interface based on Jupyter Notebook with an IPython development environment, and allows the use of Python, R and Fortran as programming languages. Therein, researchers can access satellite data, exploit computing resources, run predefined image processing routines, and share or develop their own code (using open-source packages such as GDAL or ESA SNAP), e.g. to search, download and process Sentinel-2 data. In practice, using specific custom-developed notebooks, researchers can access the Sentinel archive of the DHuS directly from the Virtual Lab. Users can therefore launch operations, scripts and routines in the cloud, maintaining the concept of proximity of data to the processors, in the same way as is provided for consolidated products.
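As a hedged example of the kind of query a Virtual Lab notebook can run against a DHuS instance, the sketch below uses the open-source sentinelsat library; the credentials, hub URL and area of interest are placeholders.

```python
# A sketch of a Sentinel-2 search against a DHuS-compatible hub.
from sentinelsat import SentinelAPI

api = SentinelAPI("user", "password", "https://apihub.copernicus.eu/apihub")  # placeholders
products = api.query(
    area="POLYGON((12.1 45.2, 12.6 45.2, 12.6 45.6, 12.1 45.6, 12.1 45.2))",  # hypothetical AOI
    date=("20210601", "20210630"),
    platformname="Sentinel-2",
    processinglevel="Level-1C",
    cloudcoverpercentage=(0, 20),
)
print(f"{len(products)} products found")
# api.download_all(products)  # retrieve into the workspace for processing
```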
To share results and ideas among different actors of the scientific community according to their role on the platform, full integration with the central authentication system (CAS) is achieved through the standard OpenID protocol. An example of what a researcher user can do with the SaaS (Software as a Service) tools and resources made available in the Virtual Lab is discussed with regard to the use of Sentinel-2 images to generate a map of the morphological evolution of terrestrial coastal ecosystems in different years. In particular, Sentinel-2 near-infrared red-green-blue (NIR RGB) images collected over the Venice Lagoon were processed into the matching Water Adjusted Vegetation Index (WAVI) map in a Jupyter Notebook of the costeLAB Virtual Lab, wherein SEN2COR was used to obtain the Sentinel-2 L2A product (Villa et al., 2021). This experience demonstrates how code developed by researchers can be run on the platform to generate new products and, in future, be transformed into consolidated processors.
During the costeLAB project, a wide portfolio of research activities was carried out (https://www.costelab.it/en/the-scientific-research/ and related references). These activities focused on the various components of the marine-coastal environment (land-sea interface), as follows:
• coastal erosion vulnerability (Bresciani et al.)
• beach & dunes volume changes (Fornaro et al.)
• extension and characterization of riverine and coastal plumes (Falcini et al.)
• morphological evolution of terrestrial coastal ecosystems (Villa et al.)
• algorithms and products for the coastal areas dynamic (Braga et al.)
• estimation and characterization of beaching in oil spills (Santini et al.)
• algorithms and products for land use/land cover changes (Pasquariello)
• algorithms and products for extracting weather marine forcing from EO data in near coastal water (Zecchetto)
• use of EO data for testing numerical models of sea state forecasts (De Carolis et al.).
Examples will be shown during the talk in order to demonstrate the breadth of novel approaches that were developed.
References
Candela, L., Coletta, A., Daraio, M.G., Guarini, R., Lopinto, E., Tapete, D., Palandri, M., Pellegrino, D., Zavagli, M., Amodio, A., Ceriola, A., Vecoli, A., Mantovani, S., Nutricato, R., Giardino, G. (2021) The Italian Thematic Platform costeLAB: from Earth Observation Big Data to Products in support to Coastal Applications and Downstream. Proceedings of the 2021 conference on Big Data from Space, Soille, P., Loekken, S. and Albani, S., eds., EUR 30697 EN, Publications Office of the European Union, Luxembourg, 2021, ISBN 978-92-76-37661-3, doi:10.2760/125905, JRC125131.
Pellegrino, D., Palandri, M., Zavagli, M., Avolio, C., Di Donna, M., Falco, S., Candela, L., Daraio, M.G., Tapete, D., Lopinto, E., Coletta, A., Amodio, A. (2021) costeLAB: a cloud platform for monitoring activities and elements of coastal zones using satellite data. Proc. SPIE 11863, Earth Resources and Environmental Remote Sensing/GIS Applications XII, 118630Z (12 September 2021); https://doi.org/10.1117/12.2599612
Villa, P., Giardino, C., Mantovani, S., Tapete, D., Vecoli, A., and Braga, F. (2021) Mapping coastal and wetland vegetation communities using multi-temporal Sentinel-2 data. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B3-2021, 639–644, https://doi.org/10.5194/isprs-archives-XLIII-B3-2021-639-2021
The Danube Data Cube (DDC) is a regional data exploitation platform built on the Euro Data Cube (EDC) infrastructure and following its logic; the EDC is a computational environment reflecting the European Space Agency's Digital Twin Earth concept in support of sustainable development.
DDC is a cloud-based platform with data and analysis tools focusing on the Danube Basin. As a regional platform service, it demonstrates the data cube technology's data storage and analysis capabilities, maximizing the benefit of the synergy of satellite and ancillary data with dedicated analysis tools.
The DDC concept includes extensive Machine Learning capabilities, including analytical tasks and decision support algorithms. One of the key themes of the platform is water management, from regional strategy and public information to field-level irrigation management.
Currently, DDC works on a regional and a local (field-level) showcase. Both are related to water management.
Water scarcity is an increasing problem globally, yet the efficiency of irrigation water usage is around 35%. With increasing uncertainties in weather conditions, irrigation strategies must be flexible. Companies must be prepared for many scenarios while increasing resource efficiency to maintain production and contribute to food security and sustainable development.
The regional showcase of DDC is designed to create a shared understanding and facilitate cooperation between authorities and research institutions about the region's hydrological and water management issues.
For research purposes, analysis-ready data is provided in datacube format for the whole region. The datacubes contain satellite data together with meteorological and soil data, with the possibility to enhance the content further with proprietary user data. A library of algorithms for DDC regional analysis is already available on the platform.
The platform supports JupyterLab, which means that proprietary algorithms can also be implemented and tested easily in the cloud.
The local showcase aims to improve irrigation efficiency in agricultural fields significantly.
DDC offers a sandbox with a graphical user interface, where users can try out different irrigation strategies under specific weather conditions. Irrigation strategies can be tested on historical and simulated data and even on real-time forecasts. Given the increasing unpredictability of weather conditions and climate in general, such a tool significantly impacts water usage efficiency.
While this online tool is already making an impact, the offerings of DDC do not stop at manual experimentation. A training environment has been created for AI agents to find the best irrigation strategies and even to carry out the chosen strategy in real time, making optimal decisions under uncertain conditions; a toy sketch of such an environment interface follows below.
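The sketch below illustrates the reset/step interface such agents typically need; the state variables, weather model and reward function are invented for illustration and do not reflect the actual DDC simulator.

```python
# A toy, hypothetical irrigation training environment (not the DDC simulator).
import random

class IrrigationEnv:
    def reset(self):
        self.soil_moisture = 0.5          # fraction of field capacity
        self.day = 0
        return self.soil_moisture

    def step(self, irrigation_mm: float):
        rain_mm = random.choice([0, 0, 0, 5, 15])                     # uncertain weather
        self.soil_moisture += (rain_mm + irrigation_mm) / 100 - 0.05  # daily drying
        self.soil_moisture = min(max(self.soil_moisture, 0.0), 1.0)
        # Reward keeps moisture near an optimum while penalizing water use.
        reward = -abs(self.soil_moisture - 0.6) - 0.01 * irrigation_mm
        self.day += 1
        return self.soil_moisture, reward, self.day >= 120            # 120-day season

env = IrrigationEnv()
state, done = env.reset(), False
while not done:
    action = 10.0 if state < 0.5 else 0.0   # naive baseline strategy
    state, reward, done = env.step(action)
```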
For research purposes, DDC provides:
• direct access to datacube service, which contains satellite data with meteorological and soil data in an analysis-ready format
• access to the irrigation sandbox for interactive experimentation
• a training environment for AI agents
• already trained AI agents
Title: Application of PRISMA satellite hyperspectral imagery for man-made materials classification in urban areas: a case study in Tuscany Region (Italy)
Keywords: PRISMA satellite imagery, spectral data acquisition, fieldwork, radiometric corrections, urban artificial areas classification
The Italian Space Agency (ASI) launched the new PRISMA mission (Hyperspectral Precursor of the Application Mission) in 2019, which integrates the hyperspectral sensor with an additional sensor capable of acquiring not only panchromatic images but also VNIR (Visible and Near-InfraRed) and SWIR (Short-Wave InfraRed) data [1]. A possible application of such data is urban area classification, using spectral data from fieldwork acquisitions to train the algorithms and validate the results.
Considering the novelty of this mission and of the data collected by the PRISMA sensors, this research focused on the comparison between spectral data taken by a portable spectroradiometer and that obtained from PRISMA satellite reflectance imagery. The main purpose of this analysis is to classify the hyperspectral imagery so as to evaluate the reliability of spectral data from the PRISMA mission for such a purpose.
The pilot area considered for the collection of hyperspectral data is mainly represented by the city of Prato and surrounding areas (Montemurlo, PT; Calenzano, FI; Campi Bisenzio, FI).
Materials chosen to be part of the samples list are common man-made objects used for roof covering and for paving public and private buildings or properties. The materials that were studied during the spectral data collection missions were solar cells, bitumen, asphalt (parking lots and highway), plastic (air-supported structures), metal roof covering, wood paving, clay roof tiles, clay paving and concrete (paving and roof tiles). The test site locations were defined considering various elements: areas with large covering, owners’ availability, security conditions and ease of access, material status and quality, presence of different materials within the same site when possible.
During the data collection, spectral signatures of several man-made materials in different locations were sampled. The collection was acquired using a portable spectroradiometer, namely the ASD FieldSpec® 3, and then post-processed with the ViewSpec® Pro software.
At each site, a white reference sample was measured in order to compute the reflectance by ratioing the raw DN data collected from the man-made materials to it.
In order to compare the fieldwork data with that from the PRISMA mission, Erdas® Imagine 2020, L3Harris Technologies ENVI®, Google® Earth Engine and Esri® ArcMap were used to apply radiometric and geometric corrections to the satellite imagery. The Empirical Line Correction method was used to calibrate the PRISMA reflectance imagery, while a pure translation shift was applied to the panchromatic, VNIR and SWIR images to obtain satisfactory georeferencing. Two pansharpened VNIR and SWIR images, with 5 m spatial resolution, were then produced by fusion with the panchromatic image.
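A minimal sketch of the Empirical Line Correction idea, assuming paired field-measured reflectance and image values for a few calibration targets; the arrays are illustrative, single-band examples, not the study's data.

```python
# Fit a per-band linear relation: reflectance = gain * image_value + offset.
import numpy as np

# Image values and field-measured reflectance for the same targets (one band).
image_values = np.array([0.08, 0.21, 0.35, 0.52])       # dark to bright targets
field_reflectance = np.array([0.05, 0.18, 0.33, 0.50])  # spectroradiometer truth

# np.polyfit with deg=1 returns [slope, intercept], i.e. [gain, offset].
gain, offset = np.polyfit(image_values, field_reflectance, deg=1)

def correct(band: np.ndarray) -> np.ndarray:
    """Apply the fitted empirical line to an image band."""
    return gain * band + offset

print(correct(np.array([0.30])))  # corrected reflectance for a new pixel value
```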
The available PRISMA image was classified, and the materials of the urban area were mapped, differentiating roofs and paving characterised by asphalt, concrete, clay tiles, bitumen, plastic, metal, solar cells and wood. The classification accuracy was also assessed through ground truth activities and photointerpretation of the imagery available on Google® Earth Pro.
Reference
[1] https://www.asi.it/en/earth-science/prisma/
DLR’s Earth Observation Center (EOC) operates a burnt area monitoring service for Europe. It is based on mid-resolution Sentinel-3 OLCI (Ocean and Land Color Instrument) satellite imagery and on methodological research and processing chain development (Nolde et al. 2020), and provides burnt area information twice a day in near-real time.
The service is fully automated and targeted at supporting both rapid mapping activities and timely post-fire damage assessment. It is designed to work incrementally, so that generated results are refined and optimized as soon as new satellite data become available. Besides the burn perimeter and detection date, the output data also contain detailed information on the burn severity of each detected burnt area.
While the service is primarily intended for continental-scale monitoring of wildfire occurrence, the accumulated results additionally allow the analysis of multi-year development trends in the mentioned parameters.
This study firstly demonstrates the capabilities of the wildfire monitoring service, and secondly analyses trends in fire extent, seasonality, and burn severity for Europe over recent years. The results are set in relation to findings derived for study areas outside Europe, namely California (USA) and New South Wales (Australia).
The study focuses on fire severity, since this information is absent from most common large-scale burnt area datasets. Yet fire severity is a critical aspect of fire regimes, determining fire impacts on ecosystem attributes and associated post-fire recovery.
In addition to the analysis of large-scale wildfire activity, the results of the burnt area monitoring service can be utilized to monitor the spatio-temporal evolution of large lava flow events in near-real time, as for example the 2018 Lower East Rift Zone eruption at Kīlauea Volcano, Hawaiʻi or the 2021 eruption on La Palma.
Reference:
Nolde, M., Plank, S., & Riedlinger, T. (2020). An Adaptive and Extensible System for Satellite-Based, Large Scale Burnt Area Monitoring in Near-Real Time. Remote Sensing, 12(13), 2162.
"In the spring of 2014, armed conflict broke out in the Luhansk and Donetsk provinces of Ukraine. The use of ballistic and rocket artillery in this conflict has inflicted severe damages to wide areas of the provinces, with far-reaching impacts on migration, on public health, economy, agriculture, and on the environment of the region. The accurate mapping of artillery craters in the conflict is a crucial step in addressing these impacts, as it has the power to provide estimations of potential unexploded ordnance hot-spots on a conflict-wide scale. In turn, these estimations provide a valuable tool for post-conflict policies of restoration of the natural
and human landscapes. This includes the delineation of safe and unsafe zones, the production of information for returning civilian populations, and the establishment of policies of de-mining and aid distribution. The problem remains, however: how accurately can the effects of artillery - specifically its connection to the explosive remnants of war - be detected and mapped?
Utilizing very high resolution (VHR) multispectral satellite imagery combined with artificial intelligence, an automated artillery and rocket crater detection methodology was produced. The UNet semantic segmentation convolutional neural network was chosen as the classifier because it has demonstrated a robust ability to detect objects in medical applications as well as, more recently, in remote sensing tasks such as celestial crater detection and the detection of terrestrial objects such as trees. In this project, we assessed the UNet CNN's ability to detect contemporary artillery craters from VHR multispectral imagery. The UNet CNN was trained on rocket and artillery craters from the 2014 conflict. Success in detecting artillery and rocket craters was assessed using geographically independent model application and a stratified random sampling technique to obtain the binary machine learning metrics of sensitivity, precision, and F1 score. Size characteristics were also assessed to ascertain the changes in CNN classification proficiency and sensitivity for different crater sizes from varying weapon sources. The trained CNN model developed for this dissertation was able to find 89% of craters when compared with a human marker, indicating its initial proficiency at the task of crater detection. Crater size was found to have a positive correlation with all performance metrics, indicating that the model improved at the task of crater detection as crater size increased.
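For reference, the reported binary metrics follow directly from confusion counts estimated via the stratified sample; the counts below are illustrative placeholders, not the study's actual numbers.

```python
# Compute sensitivity (recall), precision and F1 from confusion counts.
true_pos, false_pos, false_neg = 890, 120, 110   # hypothetical sample counts

sensitivity = true_pos / (true_pos + false_neg)  # fraction of real craters found
precision = true_pos / (true_pos + false_pos)    # fraction of detections that are craters
f1 = 2 * precision * sensitivity / (precision + sensitivity)

print(f"sensitivity={sensitivity:.2f} precision={precision:.2f} f1={f1:.2f}")
```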
On a global basis, disasters strike regularly, in both developed and developing countries. Occasionally however, disasters take on catastrophic proportions, either because of particularly vulnerable populations, a dramatic natural event, or exceptionally unfortunate circumstances. Hurricane Katrina, the Haiti 2010 Earthquake, Typhoon Haiyan or the Great East Japan tsunami are examples of catastrophes that hold a special place in our collective memories as mega-disasters from which populations and governments take years to recover and rebuild. Since 2014, the Committee on Earth Observation Satellites (CEOS) has been working on means to increase the contribution of satellite data to recovery from such major events.
These efforts led to the creation of an ad hoc team on the use of satellites for recovery, co-chaired by the French Space Agency CNES and the World Bank/GFDRR, which has published an Advocacy Paper on the topic, as well as a four-year pilot following Hurricane Matthew which struck Haiti in 2016, causing catastrophic and long-lasting damage.
Following the successful demonstration of the technical merits of the Recovery Observatory Concept, CEOS together with the World Bank, UNDP, and the European Union created a RO Demonstrator Team and approved in 2020 a three-year demonstrator which aims to create a series of 3 to 6 ROs after major events between now and late 2023. The Demonstrator works on best efforts basis, with partners (satellite agencies, value adding companies, universities, government agencies) providing data, products, and services performed on a no exchange of funds basis.
A first test case on a small scale was implemented late 2020 after the Beirut Explosion of August 2020. A first full-scale RO activation was undertaken after the Eta and Iota Hurricanes of October and November 2020, covering areas in Honduras, Guatemala, Nicaragua and El Salvador, and a second RO activation took place in the days following the August 14th 2021 earthquake in Haiti.
As the first activations wrap up and new activations are considered, some success is already evident. Satellite-based products have been used to support efforts such as Post-Disaster Damage and Needs Assessment (PDNA) reporting, providing faster and more accurate overall impact assessment for key sectors such as housing, infrastructure, agriculture and environment. A few conclusions can already be drawn:
• Satellite data is a useful resource for many stakeholders, including those in charge of rebuilding after major disasters.
• Satellites offer unmatched range and reach, often speeding up and complementing field surveys over large areas.
• Satellites can provide regular monitoring of reconstruction and rehabilitation progress that is less time-consuming than field surveys.
• The use of satellite data in post-disaster assessments is not yet well understood in the reconstruction and recovery world.
• More capacity development is required for local and international users to better understand which products could be useful.
The RO Demonstrator will continue to generate Recovery Observatories on a best effort basis between now and late 2023, before reporting back to CEOS and other partners on recommendations for increasing the use of satellite data for recovery.
Mt. Etna is one of the most active volcanoes on Earth and has erupted virtually every year over the past few decades. Mt. Etna has been designated a Permanent Supersite since 2014, a status renewed every two years by the Scientific Advisory Committee of the GSNL (Geohazard Supersites and Natural Laboratories) initiative of the Group on Earth Observations (GEO) and the Committee on Earth Observation Satellites (CEOS) Disasters Working Group. The Mt. Etna Supersite is managed by the INGV Catania Section - Etna Observatory. Its implementation was largely based on the results of the EC FP7 MED-SUV (MEDiterranean SUpersite Volcanoes) project, which aimed at supporting the implementation of the Supersite concept on both Mt. Etna and the Vesuvius / Campi Flegrei volcanoes. The MED-SUV Data Portal, the main operational result of the project, shares a broad set of data and products. The Portal is currently moving to a new e-infrastructure in the framework of the European Plate Observing System European Research Infrastructure Consortium (EPOS-ERIC) in order to guarantee compliance across different domains within the European scientific community. Details about the new infrastructure will be presented in this contribution; in particular, we will discuss how the new infrastructure will approach and manage the achievement of the FAIR data principles relevant to access to the Supersite data and products.
The recent volcanic activity of Mt. Etna offered the opportunity to test, improve and implement new data and products associated with the Mt. Etna Supersite activity. Indeed, since the end of 2020, Mt. Etna has been in a new period of activity with frequent effusive and explosive episodes at the summit craters. At the time of writing, more than fifty episodes have been counted in less than one year, each characterized by strong strombolian explosions evolving into lava fountains, accompanied by lava flow emission.
The main outcomes related to this recent volcanic activity concern (i) the upgrade of a WEB-GIS service for the interactive visualization of mean LOS (Line Of Sight) velocity maps and the related time series, obtained by processing Sentinel-1A/1B SAR data, and (ii) the exploitation of Pléiades imagery to monitor the recent volcanic activity.
The implemented WEB-GIS service is based on a client-server architecture and was designed to offer quick, simplified access to ground deformation analysis without requiring desktop GIS software.
The WEB-GIS interface also supports base maps and auxiliary layers (for example, the Mt. Etna geological structures in vector format) that can be toggled on and off.
The interface provides basic options for querying and interactively displaying the time series; a tool for analyzing and comparing time series is also available.
The processing of Pléiades imagery, acquired in stereo or tri-stereo configuration, allowed for the calculation of 1 m spatial resolution Digital Surface Models (DSMs) of the volcano. Moreover, by differencing successive DSMs obtained from the Pléiades data, the emplaced lava flow fields were mapped and the volume of the lava flows formed during the 2021 eruptive activity was estimated, as were the changes in the morphology of the summit craters, including the growth of the South East Crater, which became the new top of the volcano.
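The volume estimation step can be illustrated with a short sketch (not the authors' actual processing chain): two co-registered DSM GeoTIFFs are differenced and the positive elevation change is integrated over the pixel area. The file names and the 0.5 m noise threshold are assumptions.

import numpy as np
import rasterio

with rasterio.open("dsm_pre.tif") as pre, rasterio.open("dsm_post.tif") as post:
    dh = post.read(1).astype("float64") - pre.read(1).astype("float64")
    pixel_area = abs(pre.transform.a * pre.transform.e)  # m^2 per pixel

dh[np.abs(dh) < 0.5] = 0.0           # suppress sub-threshold elevation noise
lava_mask = dh > 0                   # positive change = emplaced material
volume_m3 = float(dh[lava_mask].sum() * pixel_area)
area_m2 = float(lava_mask.sum() * pixel_area)
print(f"lava area: {area_m2/1e6:.2f} km^2, volume: {volume_m3/1e6:.2f} Mm^3")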
Both SAR- and optical-based products were integrated into the multidisciplinary observing system managed by INGV to monitor this intense period of volcanic activity and to support Disaster Risk Management (DRM) actors, e.g. Civil Protection and local authorities.
Figure Caption: (A) (upper part) Single Time Series Visualization; (lower part) Snapshot of Comparing Time Series tool. (B) (upper part) Triplet of Pléiades data processed to obtain the 2021 Mt. Etna DSM (lower part).
Between 12 and 15 July 2021, heavy rain brought by low-pressure system Bernd caused catastrophic flooding across Western Europe. The affected countries include Germany, the United Kingdom, Austria, Belgium, Croatia, Italy, Luxembourg, the Netherlands, and Switzerland. Total reinsurance losses could reach $3 billion, and the economic damage was expected to be around $6 billion. More than 240 people died, 196 of them in Germany. On 16 July, EFTAS was contracted by Nordrhein-Westfalen (NRW) to detect the flooded areas in the state using remote sensing.
In this presentation, we propose an in-time flood monitoring system based on SAR data for future flood disaster management; our work to detect the floods in NRW is presented as a case study. SAR plays the major role in our monitoring system for two reasons. First, an active imaging SAR is independent of illumination and only slightly disturbed by weather. Second, the techniques for flood detection have long been refined and standardized in national and international institutes. The whole process can be automated and accelerated by optimizing algorithms, enhancing soft- and hardware, and simplifying manual operation. However, the temporal gap between event occurrence and image acquisition varies opportunistically, from minutes to more than one day for Sentinel-1. This uncertainty hampers its use in rescue operations or in a monitoring service, e.g., the Copernicus Emergency Management Service and the DLR Flood Service.
Based on our experience and knowledge, we believe that (near) real-time satellite-based flood monitoring does not yet exist for civilian use. True real-time monitoring would require a geosynchronous SAR standing by 24/7, like a reconnaissance satellite. Our aim is therefore to launch an "in-time" monitoring service in cooperation with the Capella Space constellation. Capella is capable of delivering VHR SAR imagery over a venue within 6 hours, on average, of an order request being placed. This means an average of 4 images per 24 hours is available for flood monitoring, which is so far unprecedented. The delivery time will shorten further with improved processing capabilities and the expansion of the satellite constellation, and in the near future this advantage will be exclusive to our service on the international market. The key is to integrate and automate the image supply and the flood detection procedure into an efficient, customer-oriented service. Last but not least, we will also propose strategies to tackle the difficulty of flood detection in built-up and vegetated areas.
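One standard ingredient of such SAR flood mapping chains can be sketched as follows: histogram thresholding of Sentinel-1 backscatter, here with Otsu's method from scikit-image. The operational chains mentioned above are more elaborate; the file name and the choice of Otsu are assumptions of this illustration.

import numpy as np
import rasterio
from skimage.filters import threshold_otsu

with rasterio.open("s1_vv_db.tif") as src:
    vv_db = src.read(1).astype("float64")

valid = np.isfinite(vv_db)
t = threshold_otsu(vv_db[valid])     # open water appears dark in VV backscatter
flood = valid & (vv_db < t)
print(f"threshold {t:.1f} dB, flooded fraction {flood.mean():.1%}")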
The need to optimise productivity in agricultural and forestry resources has led to a progressive increase in the development, evolution and uptake of EO-based products by the agriculture and forestry sectors. These efforts are key to achieving the objectives set out by several of the UN Sustainable Development Goals (SDGs): SDG 2 - Zero Hunger; SDG 12 - Responsible Consumption and Production; SDG 6 - Clean Water and Sanitation; SDG 13 - Climate Action; and SDG 15 - Life on Land.
In NextLand, a wide set of operational midstream agriculture and forestry EO-based services has been developed under a common service delivery platform, leveraging GEOSS and Copernicus data and products, which can be complemented by the assimilation of other very high resolution EO and in-situ data streams. The focus of this presentation is on the forestry products of the NextLand project, which include forest change detection (deforestation and single tree cut), forest fire burn scar, forest density and statistics, tree health indices, and forest classification. An overview of the products will be presented to give users a good overall idea of their robustness.
The Forest Change Detection product serves to calculate the area of forest loss, which is very useful for governmental, inter-governmental, private and non-governmental sectors for better decision making. Large-scale deforestation is an extremely harmful practice because of its direct impacts on local biodiversity and terrestrial climate. This activity is very common in underdeveloped countries with large forested areas, causing damaging consequences such as increasing atmospheric carbon, droughts, and the extinction of important native plant and animal species. Forest Burn Scar refers to areas destroyed by a forest fire, one of the most severe natural hazards in the forestry sector. Fire impacts ecological structure and atmospheric systems, and has detrimental effects on the living environment. Detecting and assessing the spatial extent and distribution of burn scars supports forest managers in efficient vegetation recovery and post-fire management. Forest Density and Statistics comprises several products on a monitoring platform for tree growth trends; they provide useful information for forest managers and the wider public about tree crown density and the extent or sparsity of trees. Tree Health Indices cover the Normalized Difference Vegetation Index (NDVI), the Fraction of Photosynthetically Active Radiation (fPAR), the fraction of green vegetation cover (fCOVER), the Leaf Area Index (LAI) and the Canopy Chlorophyll Content (CCC); they are mainly used to support decisions on tree and forest health. The Forest Classification product provides information about the location of selected tree species and is used in forest management to estimate growth and monitor forest health.
With the exception of the Forest Density and Statistics product, all of the products described above are generated from Sentinel-2 data. The use of data provided free of charge reduces the overall cost of the service, while reliance on data from the satellites of the European Copernicus programme ensures continuity and regularity in the provision of source data. Various satellite data processing methods were used in developing the products, from simple index calculations, e.g. in the case of Forest Fire Burn Scar and NDVI, to advanced machine learning models in the case of Forest Classification.
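As an example of the simple index calculations mentioned above, a burn-scar indicator of this kind might use the differenced Normalized Burn Ratio (dNBR) from the Sentinel-2 NIR (B8) and SWIR (B12) bands. The abstract does not name the exact index used, so this is a hedged sketch; the band arrays and the 0.27 severity threshold (a commonly cited USGS guideline) are assumptions.

import numpy as np

def nbr(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Normalized Burn Ratio; epsilon avoids division by zero."""
    return (nir - swir) / (nir + swir + 1e-9)

def burn_scar(nir_pre, swir_pre, nir_post, swir_post, threshold=0.27):
    """Pixels whose NBR dropped by more than the threshold are flagged burnt."""
    dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
    return dnbr > threshold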
In Germany and many other industrial nations, it is a political goal to reduce area consumption and land take. Decision-making and area statistics in this context are mainly based on official cadastral data. In Germany, this source of data has two main drawbacks: First, it is produced and updated at different temporal intervals in the federal states, such that a Germany-wide dataset never depicts one single reference year. Second, it mainly holds information on land use rather than land cover. This means that changes between years may occur in the data even if the physical properties of an area have not changed. Because of this, the “incora” project investigated the potential of Copernicus Sentinel-2 data to provide annual land cover and imperviousness maps from which spatial indicators of land take, urbanisation, and settlement and infrastructure can be derived. The overall project and the geospatial models for indicator calculation are presented in a dedicated companion contribution, while this poster presentation will serve as a complement to highlight in-depth the classification and imperviousness mapping approach.
The classification approach can be summarized as follows:
To minimise the need for preprocessing, we made use of Sentinel-2 Level-3A WASP data provided by DLR, which represent atmospherically corrected, monthly cloud-free temporal mosaics of standard Sentinel-2 tiles. As cloud coverage prevents truly cloud-free mosaics for every month (especially during winter), a preselection of suitable months and a further removal of remaining clouds were performed. Spectral indices were calculated from the time series, and temporal index statistics (minimum, maximum, median, range) were derived. Next, an automatic training data generation approach was implemented: a set of rules was applied for each of the six target classes (high vegetation, low vegetation, water, built-up, bare soil, and agriculture) based on auxiliary datasets (OpenStreetMap, Copernicus High Resolution Layers, the S2GLC Land Cover Map of Europe) as well as the spectral index statistics of the Sentinel-2 input data itself. From the resulting potential training areas, 50,000 pixels were sampled randomly to serve as training input for a Random Forest classifier, as sketched below.
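A minimal sketch of that final training step, under stated assumptions: X holds the temporal index statistics per sampled pixel and y the rule-based class labels (0..5 for the six target classes). The placeholder data, feature count and hyperparameters are illustrative, not the project's configuration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(50_000, 24))    # placeholder feature matrix (index statistics)
y = rng.integers(0, 6, size=50_000)  # placeholder labels for the six classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

rf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=42)
rf.fit(X_train, y_train)
print(f"held-out overall accuracy: {rf.score(X_test, y_test):.3f}")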
The final land cover classification maps were validated for the federal state of North Rhine-Westphalia, as its open data policy allowed direct access to official data to serve as reference. We found overall accuracies of 88.4%-92% across years, with high accuracies for the class "built-up" (89.8%-99.3%), which is the most relevant for the analysis of settlement and infrastructure.
Parallel to the land cover classification approach, we also carried out an imperviousness mapping based on a spectral unmixing algorithm. The imperviousness products estimate the soil sealing per pixel and are mapped as the degree of imperviousness in the range of 0-100%. As built-up areas feature semi- or fully sealed surfaces, we used the imperviousness layer to represent built-up land. Imperviousness change layers were then generated to detect built-up land change between years, which is represented as the degree of imperviousness change above an empirically derived threshold. One key advantage of this approach is that it is not prone to misclassifications that might be present in the annual classification products due to the discretization of spectral information. The main disadvantage is that this change product does not hold information on other land cover types.
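The abstract does not specify the unmixing algorithm, so the following is an illustrative linear-unmixing sketch only: per-pixel fractions of assumed endmembers are solved with non-negative least squares, and imperviousness is taken as the sealed-surface fraction. The placeholder endmember spectra are assumptions.

import numpy as np
from scipy.optimize import nnls

# Columns: sealed surface, vegetation, bare soil (placeholder 4-band spectra).
E = np.array([[0.30, 0.05, 0.25],
              [0.28, 0.08, 0.30],
              [0.27, 0.45, 0.35],
              [0.26, 0.30, 0.40]])

def imperviousness(pixel: np.ndarray) -> float:
    """Sealed-surface fraction of one pixel, expressed in percent (0-100)."""
    fractions, _ = nnls(E, pixel)
    s = fractions.sum()
    if s > 0:
        fractions = fractions / s        # sum-to-one normalisation
    return float(np.clip(fractions[0], 0, 1) * 100)

print(imperviousness(np.array([0.29, 0.27, 0.33, 0.30])))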
Both classification and imperviousness change products complement each other regarding information content and could be further used for the calculation of static and dynamic spatial indicators of area consumption and land take.
The 2030 Agenda for Sustainable Development, including the global Sustainable Development Goals (SDGs), was adopted by heads of state and government at the UN Sustainability Summit in New York in September 2015. The global SDGs set the course for the global community and will contribute to sustainable development by 2030. Quantifiable indicators are used to evaluate and measure progress towards the 169 SDG targets within and across countries. The SDG framework currently consists of 231 indicators, which are based on demographic and statistical data or on data from models or surveys. While some countries have the means to measure these indicators, others lack the data, methods or relevant actors/stakeholders responsible for specific indicators, which challenges the development of consistent and comparable information. The Inter-Agency and Expert Group on SDG Indicators (IAEG-SDGs) developed the SDG Global Indicator Framework, a tier-based classification system that categorizes indicators into three tier classes based on the level of data availability and methodological development. The UN has recently encouraged the use of Earth Observation (EO) data as an alternative data source for monitoring and supporting the implementation of the SDGs. The current fleet of EO satellites, particularly those of the EU's Copernicus programme, provides freely available data from which timely statistical results can be derived, offering a consistent means of reporting and measuring the SDGs.
The Cop4SDGs (Copernicus for SDGs) project was launched jointly by the Federal Agency for Cartography and Geodesy (BKG) and the German Environment Agency (UBA) and is funded by the German Federal Ministry for the Environment, Nature Conservation and Nuclear Safety. The aim of the project is to examine the extent to which the SDGs can be verified and reported using Copernicus data and products. In addition, data and indicator gaps in the national reporting process are to be closed.
In the first phase of the project, a systematic overview of the current state of the art and knowledge on satellite-based monitoring of sustainability indicators was developed. On the basis of global and national sustainability indicators, a comprehensive review was carried out for selected indicator areas. As a result, 14 indicators were identified that can be measured directly or indirectly using EO data. For two of those indicators (6.6.1, change in the extent of water-related ecosystems over time; 3.9.1, mortality rate attributed to household and ambient air pollution), initial feasibility analyses were carried out as a starting point for further discussions with those responsible for reporting. In addition, further indicators related to Goal 15 are being analyzed. Besides the Sentinel data, Copernicus Land Monitoring Service products such as CORINE Land Cover, the High Resolution Layer Water and Wetness, and the High Resolution Vegetation Phenology and Productivity have been explored for calculating the different indicators. Methods and storymaps will be developed to help with the future calculation of these indicators. Subsequently, the potential of transferring the results to other environmental policy measures will be examined and policy recommendations will be developed.
Solid waste management is an essential utility for sustainable urban living, yet it has long remained a governance challenge that requires holistic solutions, particularly in the Global South where population density is very high (Ferronato & Torretta, 2019). The objective of this study is to develop a new approach using Landsat-8 and Sentinel-2 time series to investigate the spatial distribution of open waste dumps in Vietnam. There is clear scientific evidence of water and soil contamination caused by open waste dumping (Eguchi et al., 2013; Sharma, Gupta & Ganguly, 2018), which is a dominant waste disposal method in many South Asian countries. Household waste in those places often contains a high proportion of organic waste (Pfaff-Simoneit et al., 2021) and is frequently disposed of together with waste types such as Waste Electrical and Electronic Equipment (WEEE) and Construction and Demolition (C&D) waste. Not only do open dumps pollute surface and groundwater sources via leachate migration carrying heavy metals, PBDEs, HBCDs, and other hazardous substances (Alam et al., 2017), they also release harmful gases into the atmosphere, even more so than controlled landfills, including methane, whose global warming potential is 25 times that of CO2 (Ferronato & Torretta, 2019). Moreover, open waste dumping is one of the major sources of marine plastic pollution: it is estimated that up to one million tons of plastic waste are released into the waters off Vietnam's coast via rivers every year (Meijer et al., 2021).
Despite a significant knowledge gap on the spatial distribution of open dumps owing to insufficient data, very limited research has exploited the potential of open remote sensing data to trace open dumping activities both spatially and temporally, and most of the relevant research focuses on risk assessment of landfills at known locations. The present study assesses the feasibility of open dump detection using Landsat-8 and Sentinel-2, with thermal anomaly and methane proxies used for thresholding, which reduces the dependency on labelled data for training machine learning models. It aims to detect dumping activities in a scalable manner using hierarchical classification, retrieving thermal radiation and methane columns from the time series, and explores the potential of cloud computing in Google Earth Engine for time-wise spatial analysis. First, potential open dumping sites, namely barren soil, are extracted from Sentinel-2 imagery with a random forest classifier trained with labelled land-use data over the region. Then, thermal radiation and methane columns are derived monthly from 2019 to 2020 using band 10 of Landsat-8 and bands 11 and 12 of Sentinel-2. The methane indicators are developed using the multi-band-multi-pass (MBMP) retrieval, which was proposed for monitoring methane point sources (Varon et al., 2021). The resulting time series, together with texture measures of the satellite imagery, is used for thresholding the potential sites to extract probable open dumping sites. The results indicate the superior performance of the present model in overcoming the hurdles of limited training data and the heterogeneity of open dumping sites, in comparison to conventional multi-class classification, which is strongly subject to training data sufficiency and the class balance of the dataset. The present study proposes an Earth-observation-based approach to investigating the development of open dumping activities in Vietnam, which can potentially bridge the gap between local activities and regulatory efforts and can be expanded to a larger scale, contributing to risk analysis, urban planning, and marine litter tracing with further interdisciplinary efforts.
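A hedged sketch of an MBMP-style methane proxy in the spirit of Varon et al. (2021): the fractional B12/B11 signal of a target pass minus that of a methane-free reference pass. The least-squares band scaling and all array inputs are assumptions of this illustration, not the study's implementation.

import numpy as np

def mbsp(b11: np.ndarray, b12: np.ndarray) -> np.ndarray:
    """Single-pass fractional signal (c*B12 - B11)/B11; c is a scene-wide
    least-squares scaling that makes the signal near zero without methane."""
    c = np.sum(b11 * b12) / np.sum(b12 ** 2)
    return (c * b12 - b11) / b11

def mbmp(b11_t, b12_t, b11_r, b12_r) -> np.ndarray:
    """Target-minus-reference signal; methane absorbs in B12, so strongly
    negative values suggest a CH4 enhancement on the target date."""
    return mbsp(b11_t, b12_t) - mbsp(b11_r, b12_r)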
Despite the critical importance of open dumping for sustainable solid waste management and policy formulation in Vietnam, systematic spatial data on dumping activities are lacking, and no scalable method has been established to detect anomalous and heterogeneous open waste dumps from remote sensing data. The application of a thermal anomaly and a methane indicator in a hierarchical classification outperforms the conventional machine learning approach for the detection of open dumping activities and can potentially be applied on a national scale.
References
Alam, A., Tabinda, A. B., Qadir, A., Butt, T. E., Siddique, S., & Mahmood, A. (2017). Ecological risk assessment of an open dumping site at Mehmood Booti Lahore, Pakistan. Environmental Science and Pollution Research, 24(21), 17889-17899.
Eguchi, A., Isobe, T., Ramu, K., Tue, N. M., Sudaryanto, A., Devanathan, G., ... & Tanabe, S. (2013). Soil contamination by brominated flame retardants in open waste dumping sites in Asian developing countries. Chemosphere, 90(9), 2365-2371.
Ferronato, N., & Torretta, V. (2019). Waste mismanagement in developing countries: A review of global issues. International journal of environmental research and public health, 16(6), 1060.
Kapinga, C. P., & Chung, S. H. (2020). Marine plastic pollution in South Asia. Development Papers, 20-02.
Meijer, L. J. J., van Emmerik, T., van der Ent, R., Schmidt, C., & Lebreton, L. (2021). More than 1000 rivers account for 80% of global riverine plastic emissions into the ocean. Science Advances, 7(18). DOI: 10.1126/sciadv.aaz5803
Pfaff-Simoneit, W., Ziegler, S., Long, T.T. 2021: Separate collection and recycling of waste as an approach to combat marine litter - WWF pilot project in the Mekong Delta, Vietnam, in: Kuehle-Weidemeier, Matthias (2021): Waste-to-Resources 2021, 9th International Symposium Circular Economy, MBT, MRF and Recycling, online conference, ICP Ingenieurgesellschaft mbH, Karlsruhe 2021.
Sharma, A., Gupta, A. K., & Ganguly, R. (2018). Impact of open dumping of municipal solid waste on soil properties in mountainous region. Journal of Rock Mechanics and Geotechnical Engineering, 10(4), 725-739.
Tun, T. Z., Kunisue, T., Tanabe, S., Prudente, M., Subramanian, A., Sudaryanto, A., ... & Nakata, H. (2021). Microplastics in dumping site soils from six Asian countries as a source of plastic additives. Science of The Total Environment, 150912.
Varon, D. J., Jervis, D., McKeever, J., Spence, I., Gains, D., & Jacob, D. J. (2021). High-frequency monitoring of anomalous methane point sources with multispectral Sentinel-2 satellite observations. Atmospheric Measurement Techniques, 14(4), 2771-2785.
Land degradation neutrality (LDN) under Agenda 2030 is the UNCCD's scientific, political, economic, and social conceptual framework for sustainable development in the epoch of world-economy decarbonization (net zero carbon by 2050). To monitor the LDN process for decision making, SDG indicators 2.4.1 and 15.3.1 were proposed at the international and national levels.
To calculate the NDVI for eight Ukrainian regions, Sentinel-1/2 and Landsat-8 mission images together with Ukrainian in-situ data were analysed on the CREODIAS platform.
Calculation of the NDVI, readily derived from Landsat-8 and Sentinel-2 optical data, comes first for the different regions of Ukraine: Chernihiv, Mykolaiv, Dnipropetrovsk, Kherson, Vinnytsia, Zhytomyr, Cherkasy and Sumy. In Ukraine, as around the world, NDVI is often used to monitor drought, forecast agricultural production, assist in forecasting fire zones, and map the advance of deserts. Farming apps, like Crop Monitoring, integrate NDVI to facilitate crop scouting and give precision to fertilizer application and irrigation, among other field treatment activities, at specific growth stages. NDVI is preferable for global vegetation monitoring since it helps to compensate for changes in lighting conditions, surface slope, exposure, and other external factors. The index is computed as sketched below.
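A minimal sketch of the calculation, assuming the red (Sentinel-2 B04) and near-infrared (B08) bands have been read into numpy arrays of reflectance:

import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    # NDVI = (NIR - Red) / (NIR + Red), values in [-1, 1];
    # the small epsilon guards against division by zero.
    return (nir - red) / (nir + red + 1e-9)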
The interpretation of the October 2021 NDVI values for these regions of Ukraine (Chernihiv, Mykolaiv, Dnipropetrovsk, Kherson, Vinnytsia, Zhytomyr, Cherkasy, Sumy) is underpinned by a conceptual model that perceives land as a socio-ecological system (a coupled human-natural system); hence, labelling a land unit in Ukraine as degraded requires a synergy of utilitarian (human-driven) and ecological (ecosystem function and structure) perspectives in the context of calculating SDG indicators 2.4.1 and 15.3.1. Land cover classification systems derived from EO data on the CREODIAS platform and from in-situ data are important tools to describe the natural and urban environment of Ukraine for different research demands and for organizing effective agricultural workflows [1, 2].
The authors acknowledge the funding received by Horizon 2020 e-shape project (Grant Agreement No 820852).
REFERENCES
1. Nataliia Kussul, Mykola Lavreniuk, Andrii Kolotii, Sergii Skakun, Olena Rakoid & Leonid Shumilo (2020) A workflow for Sustainable Development Goals indicators assessment based on high-resolution satellite data, International Journal of Digital Earth, 13:2, 309-321, DOI: 10.1080/17538947.2019.1610807.
2. N. Kussul, A. Shelestov, M. Lavreniuk, I. Butko and S. Skakun, "Deep learning approach for large scale land cover mapping based on remote sensing data fusion," 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2016, pp. 198-201, doi: 10.1109/IGARSS.2016.7729043.
Creating a sustainable development index combining satellite data with other data sources: application to the assessment of the sustainability of the global tourism industry
Abstract
Since its creation, the aim of the HDI (Human Development Index) was to rank countries based on human, economic, health and education data.
In 2010, the Human Development Report introduced an Inequality-adjusted Human Development Index (IHDI). While the simple HDI remains useful, it stated that "the IHDI is the actual level of human development (accounting for inequality)", and "the HDI can be viewed as an index of 'potential' human development (or the maximum IHDI that could be achieved if there were no inequality)".
These kinds of improvements are crucial, but before 2020 there was no place for the environment in the calculation of these indicators. Yet we can consider the following correlation: generally, the higher the HDI, the stronger the pressure on the environment.
Tourism is one of the pillars of the modern economy and a significant vector of human development, constituting more than 10% of global GDP. The number of international tourists was expected to hit the 1 billion mark in 2020 and is forecast to rise to 1.8 billion by 2030, making it crucial to find efficient ways to handle this growth and preserve fragile destinations. Additionally, more than 65% of European travellers have declared that they are striving to make their travels more sustainable but do not find the right information or the possibility to assess their environmental footprint.
The present COVID crisis is underlining the importance of steering the tourism activities into sustainable development, on a global scale. Either for environmental reasons or socio-economic motivations, sustainability is now at the core of all tourism organizations and roadmaps.
We present here the implementation of a unique sustainable development indicator for the tourism industry, the Tourism Sustainable Development Index (TSDI). It combines satellite earth observation data, in-situ measurements and statistical data. These socio-economic datasets combined with environmental data enables the computation of a single index assessing the sustainability of tourism in a given area.
This indicator is resilient regarding data gaps and at the same time flexible to accommodate heterogeneous and new data sources over time. Moreover, the TSDI is meaningful from a scientific perspective and reflects the correlation between the economic development of the tourism activity and its environmental impact.
The TSDI mathematical formulation combines human development and environmental impact factors. The environmental impact factor calculation includes Earth observation satellite data, especially air quality from Sentinel-5P atmospheric measurements, water quality from Sentinel-3 oceanography sensors and vegetation cover from Sentinel-2 optical images. The use of satellite remote-sensing data is key and presents many benefits: space data is a reliable and objective measurement of the state of our planet, and it can be applied systematically anywhere, independent of the ground situation and at no additional cost thanks to the free and open data policy of the Copernicus programme. Space data is combined with other sources evaluating important environmental factors, such as CO2 emissions and biodiversity, to provide a complete picture of the environmental state of an area.
The human development factor includes indicators on urbanization and tourism activity as well as classical human development indicators already included in the HDI such as education index and life expectancy.
The formulation of the TSDI includes a notion of "boundary" that enforces the idea that if a region, location or country is doing well along one dimension, it is not allowed to do worse on other parameters. This calculation method makes it possible to "deactivate" a parameter relative to its boundary, limiting the impact of individual factors on the final index and favouring areas where all sustainable development factors are under control. There is no free lunch with the TSDI!
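The abstract does not give the TSDI formula, so the following is a conceptual sketch of one possible reading of the boundary mechanism, with made-up factor names and thresholds: each factor's contribution is capped at its boundary, so excellence on one dimension cannot offset shortfalls on another.

def tsdi(factors: dict, boundaries: dict) -> float:
    """Geometric mean of [0, 1] factors, each capped at its boundary.
    Factor names, boundary values and the geometric mean are illustrative
    assumptions, not the published TSDI formulation."""
    capped = [min(v, boundaries.get(k, 1.0)) for k, v in factors.items()]
    prod = 1.0
    for v in capped:
        prod *= v
    return prod ** (1 / len(capped))

score = tsdi({"air_quality": 0.9, "water_quality": 0.4, "human_dev": 0.95},
             {"air_quality": 0.8, "water_quality": 0.8, "human_dev": 0.8})
print(f"TSDI = {score:.2f}")  # the weak water_quality factor drags the index down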
A large part of the urban population in Low- and Middle-Income Country (LMIC) cities lives in deprived urban areas (DUAs), i.e., areas that are deprived in terms of housing and environmental conditions, services, infrastructure etc. (UN-Habitat, 2016). For example, in African cities, the urban poor form the majority of the urban population. However, data on the location and characterisation of DUAs are commonly not available. The absolute and relative share of population living in DUAs calls for acknowledging their existence and understanding the local conditions to develop tailored improvements. Their monitoring is also a global challenge linked to the Sustainable Development Goals (SDGs).
We present the joint efforts of two initiatives: the Integrated Deprived Area Mapping System (IDEAMAPS) network (https://ideamapsnetwork.org/), which leverages the strengths of the four current approaches to DUA mapping, and the SLUMAP Earth Observation (EO) project (https://slumap.ulb.be/), which aims to overcome the limitations to DUA mapping posed by the high cost of imagery acquisition and processing.
To support routine and accurate mapping and characterisation of DUAs, the IDEAMAPS network developed the Domains of Deprivation Framework to identify relevant geospatial and EO data for urban deprivation mapping and analysis (Abascal et al., 2021). This framework builds on existing deprivation frameworks (e.g., the English Deprivation Index). The main rationale for modelling deprivation not as a binary phenomenon but as a continuous layer is the high level of uncertainty of slum versus non-slum maps, as even local experts have difficulty agreeing on boundaries. Existing deprivation mapping frameworks typically use census data, with availability issues and low temporal granularity, which quickly go out of date in fast-growing and transforming LMIC cities. The IDEAMAPS Domains of Deprivation Framework groups locally meaningful DUA indicators into 9 domains at 3 scales. Two domains reflect deprivation measured within households. Four domains reflect area-level deprivations (social hazards & assets, physical hazards & assets, unplanned urbanisation, and contamination). Three domains reflect aspects of deprivation that relate to connectivity to the city (i.e., infrastructure, facilities & services, and governance). A guide for authorities and users (https://ideamapsnetwork.org/toolkit-goverment) supports the operationalisation of all domains, building on openly available geospatial data (e.g., night-time lights, air pollution) and contextual image features (e.g., using Sentinel-2 imagery).
Therefore, IDEAMAPS and SLUMAP work on DUA models that utilise open geospatial and EO data. In particular, EO data allow for routine mapping of DUAs and characterising aspects related to the urban environment (e.g., waste accumulations, hazard), urban morphology (e.g., built-up densities, availability of open/green spaces) and infrastructure (e.g., availability of street-lights, road access). EO approaches are commonly top-down, with no or limited user interactions, whereas our framework combines EO data with user engagement and the inclusion of data from local communities, acknowledging the importance of citizen science. Thus, the information needs and requirements of different user groups are the guiding principles for the development of a flexible DUA mapping system.
Results of machine learning models, using classical algorithms such as Random Forest as well as popular deep learning models, show that with open and freely available EO data, DUAs can be mapped and characterised at city scale. We showcase results for several African cities (e.g., Nairobi, Kisumu, Lagos). The degree-of-deprivation mapping approach uses a gridded system that labels each grid cell with a continuous deprivation index value (between 0 and 1), from the least to the most deprived grid cells. Local data collected together with community groups in the respective cities are used to train and validate the models. Outputs show that patterns of deprivation match well with the locations of locally known "slum" areas and also highlight other DUAs (e.g., atypical slums, low-income housing areas). The continuous scale of least-to-most deprived obfuscates the boundaries of slums or informal settlements (reducing the likelihood of contributing to stigmatisation), while supporting multiple use cases for these maps. This flexible mapping system enables local users, e.g., for local SDG 11.1.1 monitoring, to apply locally meaningful thresholds to classify results into binary maps of slums versus non-slums, as sketched below. This can be done within a local engagement process and does not rest on the assumptions of EO experts with limited to no local contextual knowledge. Such a locally acceptable binary classification could be used for regular local SDG reporting.
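A minimal sketch of that local thresholding step, assuming a deprivation grid in [0, 1] as described; the 0.6 threshold is purely illustrative and would in practice be chosen through local engagement:

import numpy as np

def binary_deprivation_map(grid: np.ndarray, threshold: float = 0.6) -> np.ndarray:
    """1 = deprived cell, 0 = not deprived; NaN (no data) cells map to 0."""
    return (np.nan_to_num(grid, nan=0.0) >= threshold).astype(np.uint8)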
The proposed Integrated Deprived Area Mapping System (IDEAMAPS) framework (https://ideamapsnetwork.org/) provides a flexible gridded mapping system based on this concept. SLUMAP showcased the potential of free EO data for producing, on the one hand, city-scale maps that localise the diversity of deprivation and, on the other hand, maps of their characteristics with a high level of detail. The proposed approach has the advantage of being scalable and transferable, and it allows for local adaptations in the form of a user-centred mapping approach. Results support cross-disciplinary information needs on DUAs and show the potential of EO data to be combined with geospatial data for local SDG monitoring.
References:
Abascal, Á., Rothwell, N., Shonowo, A., Thomson, D. R., Elias, P., Elsey, H., . . . Kuffer, M. (2021). “Domains of Deprivation Framework” for Mapping Slums, Informal Settlements, and Other Deprived Areas in LMICs to Improve Urban Planning and Policy: A Scoping Review. Preprints 2021, 2021020242. doi:10.20944/preprints202102.0242.v1
UN-Habitat. (2016). Slums Almanac 2015-16. Tracking Improvement in the Lives of Slum Dwellers. Nairobi, Kenya.
In recent years there has been increased interest in the concept of “Smart Statistics”, which can be viewed as the future extended role of official statistics, whereby traditional data sources (survey and administrative data) are complemented by information from sensors (such as satellite imaging and a host of environmental sensors), smartphones (including GPS), behavioral data (e.g. data from online searches, websites’ visits and activity such as travel or accommodation payments) or even social applications data (comments on social media, etc.). These data sources can provide entirely new insights into social and economic trends to drive public policy making.
Earth Observation (EO) can constitute an important component of smart statistics, being a subset of the aforementioned Big Data. Organisations such as the United Nations Statistical Office (UNSTAT), the European Statistical Office (Eurostat, e.g. through the ESSNet Big Data II project with explicit EO activities), as well as many national statistical institutes/offices (NSIs/NSOs) and supporting organisations, are currently seeking to incorporate satellite imagery and other EO data sources (such as models and in-situ platforms) into their operational workflows. The SDG framework is of particular interest, providing common ground for exemplifying such interactions as the SDG goals and indicators are pursued by both the statistics and EO communities. ESA has supported several pilot studies in this intersectional area, such as EOStat-Poland, Sen4Stat, EO4Poverty and EcoServe, primarily focused on national agricultural statistics and the assessment of environmental services. Recognising the need to go beyond such pilots, in 2021 ESA released an ITT on "EO for Smart Statistics", which will be implemented through the GAUSS project.
GAUSS (Generating Advanced Usage of Earth Observation for Smart Statistics) is an 18-month project led by the National Observatory of Athens (NOA), working together with FMI (the Finnish Meteorological Institute), IGIK (the Institute of Geodesy and Cartography of Poland) and Evenflow (a Brussels-based SME). It aims to provide specific demonstrations of the use of EO to meet key reporting needs of the corresponding national NSOs in the areas of air quality (AQ) statistics, water statistics and green indicators for natural capital. It will also develop best practices for an interested user community to support further development of such workflows and solutions beyond the scope of the project.
At the core of the GAUSS project is a set of case studies which meet real identified needs of the national NSOs, with the underlying aim of showcasing the added value EO brings to current workflows in these fields and ensuring the robustness of the results, taking into account the requirements of official statistics. In Greece, the project will develop high-resolution AQ statistics for key atmospheric pollutants (relating to AQ Directive reporting and SDG 11.6.2), working at the Local Administrative Unit level rather than the currently available coarse regional statistics. To do this it will fuse EO data from Sentinel-5P, regional models of the Copernicus Atmosphere Monitoring Service (CAMS) and data from a national network of low-cost AQ sensors. In Finland (relating to SDG 6.3.1), the project will fuse Copernicus data with in-situ data from webcams and other sensors to create an improved set of products on snow cover throughout the year. It will also create a novel product for assessing hydrological drought, based on the fusion of satellite altimeter data (Jason and Sentinel-3) with in-situ measurements. In Poland, a set of statistics on the availability and quality of green areas at the commune level, a key parameter for assessing regional wellbeing (relating to Goal 3), will be created using a range of satellite sensors. In addition, the project will replicate the AQ and hydrological drought indicator workflows for Poland, confirming the transferability of the methods.
Based on these case studies, the project will elaborate a future roadmap with recommendations for further integration of EO into Smart Statistics. This will take into consideration not just the technical issues remaining to be addressed, but also the operational and regulatory barriers to increased adoption of such products in official statistics. To help identify these barriers, the project will be supported by a steering group on which key statistical agencies will be represented. This will also allow the project to exploit synergies with other initiatives in this area.
In conclusion, EO data has the potential to meet many key needs of the European NSOs. By using key case studies to explore the practical barriers to increased adoption, the GAUSS project aims to define a pathway towards real operational use of such data in official statistics.
Accurate urban population distribution maps are necessary prerequisites for a wide range of applications related to urban sustainability and planning, epidemiology and natural hazards (population at risk), and crucial elements for the monitoring of the Sustainable Development Goals (SDGs). However, the quality of population data in data-scarce environments such as the Global South (GS) is unreliable in terms of both temporal and spatial consistency. The damaging effects of these data gaps are most evident in Sub-Saharan Africa (SSA), where census data are not easily accessible, often outdated, or not available at spatial levels that allow for sophisticated analyses. International efforts such as WorldPop (Tatem, 2017), GHS-POP (Freire and Halkia, 2014) and LandScan (Dobson et al., 2000) have helped mitigate this gap by providing openly accessible, global population distribution products at relatively high spatial resolutions (100 m to 1 km). Nonetheless, their quality at the intra-urban level is limited, as they were mostly designed for large-scale analysis (i.e., the global or national level). At the same time, SSA is facing a rapid urbanization shift, with current estimates placing more than 60% of the African population in cities by 2050. This has led to the proliferation of deprived neighbourhoods that often lack basic services such as adequate open space and access to clean water. As recent research has shown, deprived urban communities are vastly underestimated in current global population products (Thomson et al., 2021), which severely hinders efforts to address the needs of urban residents and enhance evidence-based policy making. Thus, the need to better represent the urban population, both in terms of accuracy and spatial detail, is imperative.
In this research, we harness the power of Deep Learning (DL) methods and openly accessible EO data such as Sentinel-2 MSI imagery to model and map urban population patterns in a selection of SSA cities at a fine spatial scale. Building upon disciplinary knowledge, particularly regarding the required EO data combinations, predictive performance, transferability and parsimony, our goal is to create the building blocks for reliable urban population products, tailored to meet the needs of SDG indicators such as accurately measuring the population living in informal settlements.
As a proof of concept we apply our framework in Dakar (Senegal) and Ouagadougou (Burkina Faso), located in the Sahelian zone of Africa. Both cities have exhibited strong urban and population growth trends in recent decades and provide diversity with respect to building patterns and urban morphology.
To provide training and validation data for our DL models, we make use of the 2013 census in Dakar, which was available at a neighbourhood level (1250 administrative units), and a detailed population survey in Ouagadougou available at a coarser scale (55 administrative units). Based on these sources, existing high-quality gridded population datasets are available at a 100 m resolution that were derived from very-high-resolution satellite data and served as the building blocks to feed our DL models (Grippa, 2018).
We propose a DL approach that uses Sentinel-2 MultiSpectral Instrument (MSI) patches of size 100 x 100 pixels (i.e., 1 km2) as inputs to a residual neural network, commonly known as ResNet (He et al., 2016). Specifically, the first layer of the ResNet-18 architecture was modified to accommodate the four 10 m spectral bands of Sentinel-2 (blue, green, red and near-infrared). Furthermore, ReLU is used as the activation function for the output layer in order to prevent negative population predictions. The network was trained for 20 epochs with a batch size of 8 and a learning rate of 10^-4, using AdamW as the optimizer. Image augmentations in the form of flips and rotations were incorporated into training.
The preliminary results are promising (Figure 1). The DL models are able to predict population counts with high accuracy both at the grid and the administrative census level (coefficients of determination of 0.84 and 0.80, respectively). In the final version we will present a thorough error analysis and maps unravelling the potential of this product for near-real-time population mapping.
References
Dobson, J. E., Bright, E. A., Coleman, P. R., Durfee, R. C., Worley, B. A., 2000. LandScan: a global population database for estimating populations at risk. Photogrammetric Engineering and Remote Sensing, 66(7), 849-857.
Freire, S., Halkia, M., 2014. GHSL application in Europe: towards new population grids. European Forum for Geography and Statistics, Krakow, Poland.
Grippa, T., 2018. Dakar population estimates at 100x100m spatial resolution - grid layer - dasymetric mapping.
He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770-778.
Tatem, A. J., 2017. WorldPop, open data for spatial demography. Scientific Data, 4(1), 1-4.
Thomson, D. R., Gaughan, A. E., Stevens, F. R., Yetman, G., Elias, P., Chen, R., 2021. Evaluating the Accuracy of Gridded Population Estimates in Slums: A Case Study in Nigeria and Kenya. Urban Science, 5(2), 48.
As the climate crisis becomes impossible to ignore, it is imperative that new services are developed that address not only the needs of customers but also take into account their impact on the environment. Society is currently experiencing a green transition that is revolutionizing business models, technology innovation and use, the consumption and offering of applications, and the sharing of knowledge involving both human and machine spheres.
The Telecommunication and Integrated Application (TIA) Directorate of ESA wishes to support the green transition through the Green Value and Sustainable Mobility (GVSM) Initiative.
Green Value is defined as a low-carbon, resource-efficient and socially inclusive economy, pursuing knowledge and practices that can lead to more environmentally friendly and ecologically responsible decisions and lifestyles. This will help protect the environment and sustain its natural resources for current and future generations. Topics covered under Green Value include, but are not limited to, food production, energy transition and urban sustainability.
Sustainable Mobility, an unavoidable element of the green transition, refers to the broad subject of mobility that is sustainable in terms of social, environmental and climate impacts, as well as being economically viable. It includes the intelligent use of energy, digital technology and transport infrastructure for land, air, sea and rail transport. Sustainable mobility ensures that transport systems meet society's economic and social needs whilst minimising their impact on the environment.
Each of the topics covered in the GVSM initiative, and their diverse markets, entails both commercial opportunities and technical challenges. The integration of innovative space and non-space digital technologies and infrastructures is required to optimally develop and deploy commercially sustainable solutions addressing the diversity of use cases. Satellite connectivity, including future 5G networks, and digital technologies such as digital twins, AI, machine learning and cloud-based applications are key enablers of the green transition and contribute to the SDGs.
The main objectives of GVSM are to:
• Support the emergence of green services leading to the decarbonisation of the major greenhouse gas (GHG) generating sectors (e.g. transport, energy, industry), establishing Space as part of a green ecosystem of users.
• Coordinate new public-private partnership (PPP) projects, developments (technology, products, systems, and applications) and the deployment of solutions addressing EU Green Deal areas.
• Demonstrate the benefits of connectivity infrastructure (small-sat constellations, IoT, optical communication) as an enabler of sustainable green services.
• Prove, through demonstration and validation opportunities, that space-based solutions can deliver innovative space-powered business propositions addressing climate and environmental challenges, thus paving the way towards the deployment of operational systems.
• Assess the environmental "green" impact of the developed systems, technologies and applications according to different indicators, including CO2 reduction.
GVSM activities will contribute to the objectives of the ESA Agenda 2025 "Make Space for Europe", and especially to the "Space for a Green Future" accelerator, bringing forward the contribution that connectivity and integrated applications can make to support all sectors of the green economy, while also stimulating and accelerating the growth of a competitive European downstream and upstream industry.
The paper will describe how services which leverage connectivity, space and digital technologies, covered in the GVSM framework, are pivotal for decarbonisation and for delivering SDGs such as clean water, affordable and clean energy, and sustainable cities and communities. The ESA Business Applications and Space Solutions (BASS) programme has already supported sustainable development: 80 MEUR has been invested by industry and national delegations in green business applications. Additionally, other PPP initiatives have been pursued. The Iris Programme has set the initial steps towards reducing the environmental footprint of commercial aviation: thanks to the implementation of 4D trajectory systems leveraging satellite connectivity, a reduction in CO2 emissions of 10 tons/year is achievable (SDG 11, sustainable transport).
Land use reflects the needs and assets of societies, and land-use change (LUC) is the main manifestation of human-environment interactions. LUC is thus at the heart of many sustainable-development challenges globally, with either direct influence (zero hunger, climate action, life on land) or indirect influence (no poverty, good health and well-being, clean water, economic growth, sustainable habitation, and peace and justice). Detailed spatiotemporal information on the different socioeconomic and environmental facets of land use is needed to support the monitoring of land-system trajectories, for scenario modelling, and for various other applications in science, policy, and management. Given the inherent complexity of land systems, these applications place extensive requirements on LUC data in terms of consistency, spatiotemporal and thematic scope and detail, quality assurance, and fitness for purpose. These requirements are not met by existing LUC data products.
While more extensive and higher-quality LUC data are generally needed, distinct user-groups have specific data needs. For example, climate modelers may only require LUC data at moderate spatial resolutions but need the gridded numbers to add up to national FAO accounts to ensure interoperability with other global models, while on-the-ground interventions or (sub-)national-scale decision making depends more critically on spatially accurate LUC information at the finest-possible resolutions. Moreover, agro-ecological models need to rely on accurate crop suitability, while theory-building in land-system science needs LUC data with minimal built-in assumptions.
We will present a global land-use time series based on a modelling pipeline that addresses consistency issues at its core, includes extensive quality documentation and is built in a modular fashion to tailor output to different downstream requirements. The dataset we will present is based on state-of-the-art remotely sensed information (integrating ESA CCI land cover with many other datasets), a vast database of harmonised national and sub-national agricultural census statistics, and millions of in-situ records of land-use observations collected from hundreds of individual sources, used both to determine the suitability of the Earth's land surface for different land-use classes and commodities and to enable rigorous validation of the final spatial patterns. All of this information is used optimally according to its individual strengths to ensure a high degree of spatiotemporal and thematic consistency, and the quality documentation allows downstream applications to make informed decisions about adequate use. The resulting global data products use a hierarchical classification scheme of land-use concepts that considers the complete terrestrial surface for the allocation of all land-use classes, enabling full thematic completeness in downstream applications. The modelling pipeline is, moreover, built in a modular fashion that allows the specification of model runs adapted to specific downstream needs by simply changing input data and parameters, and it enables continuous updating as different or improved versions of input data become available.
We have developed these data products within LUCKINet, an international collaborative network with a shared vision of integrating LUC knowledge and providing fit-for-purpose data products for multiple applications related to the SDGs. We envision a ‘socio-technological infrastructure’ of open-source tools and a growing number of contributors that build on our initial contribution to collectively improve and apply this LUC information to help advance sustainable development.
One of the challenges in quantifying the pace of urban land use/land cover change in rapidly urbanizing cities of Sub-Saharan Africa is the demarcation of the real urban boundary. Furthermore, collected statistics are often outdated or aggregated to large heterogeneous administrative entities, which makes them of little use for assessing the pace and trajectory of urban development. To assess Sustainable Development Goal 11, there is a need for timely and reliable data and tools to accurately monitor spatio-temporal patterns of urbanization and analyze land consumption. Satellite-based monitoring is a vital tool for regularly observing the changing urban environment, and advanced machine learning and Earth observation big data analytics have the potential to accurately detect and extract urban areas. In this study, we developed a method for delineating built-up areas in Kigali, Rwanda, using U-Net-based impervious surface extraction from multi-temporal Sentinel-2 imagery. We further analyzed spatio-temporal land consumption over the five years since 2016 using population statistics and the newly delineated urban areas. The proposed methodology enhanced the extraction of the real urbanized extent, which was previously aggregated to the boundaries of large administrative entities. Since 2016, change in the landscape's spatial pattern has been characterized by a high land consumption rate, mainly in the southern and eastern parts of Kigali. Our results illustrate that urbanization was characterized by infill, extension and leapfrogging. The framework proposed in the present study can be easily transferred to other Sub-Saharan African cities.
Keywords: Sentinel-2 MSI, LULC classification, impervious surface, land consumption, Kigali, Rwanda
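As an illustration of the segmentation step in the Kigali study above, the sketch below shows a minimal U-Net-style binary classifier for impervious surfaces in PyTorch. The band selection, network depth and decision threshold are illustrative assumptions, not the configuration actually used in the study.

```python
# Minimal sketch of binary impervious-surface segmentation with a small
# U-Net-style encoder/decoder. Bands, depth and threshold are assumptions.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, in_bands=4):            # e.g. Sentinel-2 B02/B03/B04/B08
        super().__init__()
        self.enc1, self.enc2 = block(in_bands, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = block(64, 32)                # decoder sees skip + upsampled
        self.head = nn.Conv2d(32, 1, 1)         # one logit: impervious vs. not

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d)

model = TinyUNet()
logits = model(torch.randn(1, 4, 256, 256))     # one 256x256 image chip
mask = torch.sigmoid(logits) > 0.5              # boolean built-up mask
```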
The exponential growth of Earth Observation (EO) data provides an increasing number of opportunities to monitor climate-driven natural hazards, which have a disproportionate impact in low-resource settings. Though the creation of analysis-ready data (ARD) has also proliferated, its use and adoption by governments in low-income countries and humanitarian organizations remains low. A key driver of this is the lack of access to ARD in systems these users directly influence and that meet their needs. The World Food Programme’s (WFP) Climate and Earth Observation unit has developed a suite of tools to address this gap, including a forthcoming Open Data Cube deployment and an open-source data visualization and analysis platform called PRISM.
PRISM is designed to improve the utilization of the wealth of data that is available but not fully accessible to decision makers, particularly in low-resource environments. This is especially true of Earth Observation data, which typically requires specialized skills and technology infrastructure to make it useful for practitioners. PRISM is open-source software which has been developed by WFP since 2016, with a major technology overhaul in 2020. Though the project is led by WFP, as open-source software it is open for collaboration and use by anyone.
The objectives of PRISM are to provide greater access to data on hazards, particularly those generated from Earth observation data; to bring together various components of risk and impact analysis in a single system; to complement data from remote sensing with field data; and to provide tools to governments and local partners that foster local ownership and utilization of data for decision-making particularly related to disaster risk reduction and climate-resilience. PRISM simplifies the integration of geospatial data on hazards such as droughts, floods, tropical storms, and earthquakes, along with information on socioeconomic vulnerability. It is provided to governments and humanitarian agencies as a free solution which can be easily adapted to local needs. PRISM combines data from these various sources to rapidly present decision makers with actionable information on vulnerable populations exposed to hazards, allowing them to prioritize assistance to those most in need.
With these objectives, and as a form of technical assistance to governments in low and middle income countries, PRISM contributes to SDGs 1 – No poverty, 2 – Zero hunger, 11 – Sustainable cities and communities, 13 – Climate action, and 17 – Partnerships for the goals. The platform facilitates climate risk monitoring and helps to focus attention on the most vulnerable populations. This geographic targeting is used by governments and humanitarian agencies to protect people living in poverty, and to prevent those living just above the poverty line from falling into poverty due to a climate-driven disaster, contributing to target 1.5 (build the resilience of the poor and those in vulnerable situations and reduce their exposure and vulnerability to climate-related extreme events and other economic, social and environmental shocks and disasters). PRISM has broad relevance for SDG 2. Extreme weather and climate change increase the risk of food insecurity not only among affected farmers, but across the broader food system. Droughts in particular can severely impact the production of key staple commodities, driving up food prices and contributing to food insecurity. As a monitoring system, PRISM provides insights into the extent and severity of these hazards as a tool for decision makers to reduce food insecurity.
Within SDG 11, target 11.5 aims to reduce economic losses from disasters, with a focus on protecting the poor in vulnerable situations. As a tool for geographic targeting based on vulnerability and hazard exposure, PRISM is used for disaster risk reduction by governments and partners to assist those most in need through adaptive social protection programs and early actions. Within SDG 13, target 13.1 seeks to strengthen resilience and adaptive capacity to climate-related hazards. PRISM is deployed as a form of technical assistance and capacity development in countries highly exposed to climate hazards, offering a platform to monitor extreme weather and implement well-targeted disaster risk reduction activities. This focus on capacity-building is a key strategic element of PRISM deployments, where WFP’s country offices support national plans to achieve the SDGs through technical assistance and the facilitation of South-South cooperation, contributing to SDG 17.
Configuration of the PRISM dashboard requires no coding experience, minimizing the need for niche software development and ICT infrastructure skills to support the application. The dashboard is built on common modern frameworks for web software development. It uses geospatial standards set by the Open Geospatial Consortium (OGC) to maximize interoperability with other systems and to ensure its longevity.
As PRISM requires external data as part of the deployment process, it is closely related to the forthcoming deployment of WFP’s global instance of the Open Data Cube platform. WFP’s Open Data Cube deployment provides climate monitoring data across more than 80 countries globally and is easily integrated into PRISM deployments. PRISM has also been configured to integrate data from other Open Data Cube deployments – providing a quick tool to display time-series raster data in an interactive dashboard. PRISM also integrates data from WFP’s related system – ADAM (Automatic Disaster Analysis & Mapping) which provides near real-time data on earthquakes, tropical storms, and soon floods.
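For readers unfamiliar with the Open Data Cube, the snippet below sketches how a PRISM-style dashboard layer could be pulled from such a deployment with the open-source `datacube` API. The product name, measurement name and query extent are hypothetical; WFP's actual catalogue may differ.

```python
# Hedged sketch: loading a time series from an Open Data Cube deployment.
# "rainfall_chirps" and the variable name are placeholder assumptions.
import datacube

dc = datacube.Datacube(app="prism-example")
ds = dc.load(
    product="rainfall_chirps",              # hypothetical product name
    x=(43.0, 44.0), y=(11.0, 12.0),         # lon/lat bounding box
    time=("2021-01-01", "2021-12-31"),
    output_crs="EPSG:4326",
    resolution=(-0.05, 0.05),
)
# Aggregate to monthly totals for display as a dashboard layer
monthly = ds.rainfall.resample(time="1M").sum()
```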
PRISM follows the Intergovernmental Panel on Climate Change (IPCC) disaster framework, in which risk and impact are the intersection of hazard, exposure, and vulnerability. As such, PRISM can support decision-making at various stages of the disaster management cycle, notably preparedness and response at the national and sub-national levels.
As PRISM facilitates the use of geospatial data over time, it can highlight areas and populations repeatedly exposed to hazards. In addition, hazard frequency products generated through various processes can also be easily integrated into PRISM for additional analysis. These are used by WFP and partners to highlight areas with repeated exposure to hazards to concentrate preparedness activities.
Recently, the project has completed integration of data collected on mobile devices using KoBo Toolbox – a free and open-source field data collection tool developed by the Harvard Humanitarian Initiative with wide adoption across the humanitarian and development sectors. This integration allows data collected in the field to be visualized alongside PRISM’s other data sources in real-time.
When a disaster is unfolding, PRISM provides information on the geographic extent and severity of a hazard from satellite products. By combining the bird's-eye view provided by satellite imagery with data collected from the field, PRISM provides real-time information that can rapidly inform response activities.
While PRISM has thus far focused on disaster risk reduction, the platform can be applied to multiple use cases where analysis-ready EO data is available but not yet in the hands of national institutions. PRISM fills an important gap in achieving the SDGs by enhancing national systems with EO data and providing a clear path to local ownership, so that countries can leverage this data for more informed decision-making contributing to multiple SDGs.
The 2030 Agenda for Sustainable Development, adopted by all United Nations Member States in 2015, provides a shared blueprint for peace and prosperity for people and the planet. The agenda lists 17 goals, the Sustainable Development Goals (SDGs), which set out a path to be followed by all countries by 2030 for global development. Earth-orbiting satellites, and especially Low Earth Orbit (LEO) satellites, occupy a privileged vantage point for monitoring our planet. This allows Earth Observation (EO) missions to contribute to the achievement of the SDGs, as extensively recognised by both space agencies and the UN.
In this paper a new methodology is presented to provide agencies, governments, and stakeholders with a tool to assess the societal benefits of EO missions. The aim of the proposed approach is to quantify the social value rating of missions through their contribution to the achievement of the SDGs. For this purpose, nine Services provided to Earth by EO missions are identified: Built-up land (i.e. all kinds of man-made constructions), Agriculture, Wild nature, Geology, Limnology, Oceanography, Meteorology, Air Quality Monitoring and Hazards Monitoring. The evaluation of the social benefits is carried out by introducing four indices relating satellite payloads to these Services, which are in turn linked to the SDGs. The four indices focus on the payload's temporal resolution, spatial resolution, spectral efficiency and Earth coverage.
The proposed model is applied to the Copernicus programme in order to assess its contribution to the achievement of the 2030 Agenda SDGs.
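Since the abstract does not spell out how the four indices are aggregated, the following is a purely illustrative sketch of one plausible formulation, assuming normalised indices combined as a weighted sum per Service; the weights and the aggregation rule are assumptions, not the authors' model:

```latex
% Illustrative only: I_t, I_s, I_e, I_c denote the (normalised) temporal
% resolution, spatial resolution, spectral efficiency and Earth coverage
% indices of a payload; w_j are assumed weights.
R_{\mathrm{Service}} = \sum_{j \in \{t,\, s,\, e,\, c\}} w_j \, I_j ,
\qquad \sum_{j} w_j = 1, \quad 0 \le I_j \le 1 .
```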
The ARICA project, “a multi-directional Analysis of Refugee/IDP (Internally Displaced Persons) CAmp areas based on HR/VHR satellite data”, aims to better understand the mutual influence between the environment and refugee/IDP camp inhabitants. The overall goal is to investigate how satellite data could support the management of such camps during their whole life cycle, to improve and secure living conditions as well as reduce environmental impact. Four large camps in Africa, Asia and the Middle East have been observed with radar and/or optical satellite time series from Sentinel-1 (S1) and Sentinel-2 (S2): (1) the Mtendeli Refugee Camp in Tanzania (see Figure), which opened in 2016 and is planned to be closed in the near future, (2) the IFO-2 camp in Kenya, which was closed in May 2018, (3) the Khanke IDP camp in Iraq, which opened in 2014, and (4) the currently world's largest Kutupalong Refugee Camp in Bangladesh, hosting more than 600,000 Rohingya who fled Myanmar, especially since 2017. S1 and S2 time series have been used to map land cover and land cover change in the surroundings of the camps, indicating forest loss during and since the installation of the camps, changes in agricultural areas, as well as revegetation after closure. Such forest observations can be compared with available products from the Global Forest Change program and put into historic context. The area and evolution in size of each camp can be estimated and combined with single-dwelling observations from very-high-resolution satellite data. Natural hazards like floods, landslides and drought can potentially be observed and mapped for coordinating emergency measures. The satellite observations are combined and associated with information collected through interviews with camp residents and stakeholders such as NGOs, UNHCR, etc. The mutual relation of refugee/IDP settlements and the natural environment will be highlighted in a socio-geographical analysis within the project, resulting in the determination of the most important factors of the camp inhabitants' activities that drive the environmental changes observed by satellite. Results of the ARICA project will be made available through a dedicated open geo-platform. The presentation will give an overview of the current state of the ARICA project and present preliminary results.
The United Nations proclaimed 2015 the International Year of Soils, thus emphasizing the importance of soil protection and its sustainable management as the basis for food security, the safeguarding of ecosystem functions and sustainable climate protection worldwide. The Agenda 2030 Sustainable Development Goal (SDG) 15 underlines the urgent need to “protect, restore and promote sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss” (United Nations General Assembly, 2015). In the National Sustainability Development Strategy (Bundesregierung, 2016), the German government has explicitly listed the goal of “halting soil degradation" and has included the "protection and sustainable use of soil as a resource" in its catalogue of measures (SDG 15/I.3). However, despite the vital functions and central importance of soil, it is estimated that around 24 billion tons of fertile soil are lost every year due to improper use (Soil Atlas, 2015). In Germany, soil functions are completely or partially damaged on about 56 hectares of land every day (Statistisches Bundesamt (Destatis), 2019). According to the European Environment Agency (EEA) and the European Commission, soil erosion, land take for settlements and transport, and material pollution are the main drivers of soil loss and the impairment of soil functions (e.g., Panagos et al., 2015).
At present, data on land use and land development are provided on the basis of the Agricultural Statistics Act (AgrStatG). The smallest survey unit is the municipality, with a regular survey frequency of four years (excluding settlement and transport areas). A precise spatial localisation and assessment of losses of soil or soil functions with regard to soil's most important function today, food production, is currently not available; the loss of valuable soil in terms of fertility and yield capacity cannot yet be quantified and thus cannot be controlled.
Earth observation missions such as Landsat or the Copernicus Sentinels can provide information on the condition and properties of soils, and on the type, intensity and development of land use, nationwide and with high spatial resolution (Rogge et al., 2018; Preidl et al., 2020). Together with existing geodata (e.g., terrain models, soil maps, climate and weather data), this opens up new opportunities for a spatially explicit recording and evaluation of soil loss in support of sustainable development.
Against this background, the SOIL-DE project aims to provide improved nationwide indicators of the functionality, yield capacity, land-use intensity and vulnerability of agricultural soils. To accomplish this challenging task, historical satellite data from the Landsat archive (1984-2014), Sentinel-2 data from the European Copernicus programme, and the European LUCAS soil database were explored in order to derive information on soil parameters such as soil organic carbon (Zepp et al., 2021). In addition, a set of six functions and potentials of the landscape's ecosystem capacity were selected and derived according to Marks et al. (1992). These include the biotic yield potential, the erosion resistance function against water and wind, the flow regulation function, and the physical-chemical and mechanical filter functions. Further, the Muencheberg Soil Quality Rating (Mueller et al., 2007) was applied. Functions and potentials were parameterized using official soil data from the German Soil Survey 1:200,000 (BÜK200), remote sensing data products on land use and land-use intensity, a digital elevation model, and climatic data.
Currently, a framework is being set up to combine these indicators into a comprehensive high-resolution soil quality index of German soils under agriculture. On that basis, and for the first time, soil loss can be evaluated quantitatively and qualitatively. This will be achieved by using remote sensing-based information on land cover change (e.g., CORINE Land Cover (CLC) change).
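A minimal sketch of such an indicator combination is given below as a weighted raster overlay; the layer names and weights are hypothetical placeholders for the SOIL-DE indicator products, not the project's actual weighting scheme.

```python
# Hedged sketch: weighted overlay of indicator rasters into one index.
# File names and weights are illustrative assumptions.
import rasterio

layers = {"yield_potential.tif": 0.4,
          "erosion_resistance.tif": 0.3,
          "filter_function.tif": 0.3}

index, profile = None, None
for path, weight in layers.items():
    with rasterio.open(path) as src:
        band = src.read(1).astype("float32")
        if profile is None:
            profile = src.profile          # reuse grid/CRS of first layer
        index = weight * band if index is None else index + weight * band

profile.update(dtype="float32", count=1)
with rasterio.open("soil_quality_index.tif", "w", **profile) as dst:
    dst.write(index, 1)
```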
All data layers and products are made freely available to authorities, planners, and the public via Webservices in the SOIL-DE Viewer. Its flexible layout automatically adapts to different devices including personal computer, tablets or smartphones.
References:
Bundesregierung (2016) Deutsche Nachhaltigkeitsstrategie, 256 pp. [online] https://www.bundesregierung.de/Webs/Breg/DE/Themen/Nachhaltigkeitsstrategie/1-die-deutsche-nachhaltigkeitsstrategie/nachhaltigkeitsstrategie/node.html, accessed 21 March 2017.
Marks, R., Müller, M., Leser, H., Klink H.-J., 1992. Anleitung zur Bewertung des Leistungsvermögens des Landschaftshaushaltes (BA LVL). Zentralauschuss für deutsche Landeskunde, Selbstverlag, Trier.
Panagos, P., Borrelli, P., Poesen, J., Ballabio, C., Lugato, E., Meusburger, K., Montanarella, L., Alewell, C. (2015) The new assessment of soil loss by water erosion in Europe, Environmental Science & Policy, 54, 438-447, ISSN 1462-9011, https://doi.org/10.1016/j.envsci.2015.08.012.
Preidl, S., Lange, M., Doktor, D. (2020) Introducing APiC for regionalised land cover mapping on the national scale using Sentinel-2A imagery, Remote Sensing of Environment, Volume 240, Article 111673, DOI: 10.1016/j.rse.2020.111673
Rogge, D. Bauer, A., Zeidler, J., Mueller, A., Esch, T., Heiden, U. (2018) Building an exposed soil composite processor (SCMaP) for mapping spatial and temporal characteristics of soils with Landsat imagery (1984–2014), Remote Sensing of Environment, 205, 1-17, ISSN 0034-4257, https://doi.org/10.1016/j.rse.2017.11.004.
Statistisches Bundesamt (Destatis), 2019. Bodenfläche nach Art der tatsächlichen Nutzung. Fachserie 3 Reihe 5.1
United Nations General Assembly, 2015. Resolution adopted by the General Assembly on 25 September 2015: 70/1. Transforming our world: the 2030 Agenda for Sustainable Development. Seventieth session, Agenda items 15 and 116, A/RES/70/1.
Zepp, S., Heiden, U., Bachmann, M., Wiesmeier, M., Steininger, M., van Wesemael, B. (2021) Estimation of soil organic carbon contents in croplands of Bavaria from SCMaP soil reflectance composites. Remote Sensing 13 (16), 3141, 1–25 (ISSN 2072-4292)
COVID-19 disrupted supply chains throughout the agricultural industry, impacting production and harvest, policy, markets and trade, shipping infrastructure, researchers’ ability to conduct fieldwork and meet with farmers, and more. These supply chain disruptions directly affect the world’s ability to address Sustainable Development Goal 2: End Hunger, as food availability can be viewed in part as a global logistics and policy issue rather than purely a food production issue. Recognizing the unique challenges brought on by the pandemic and the capacity of Earth observation data to fill unanticipated knowledge gaps – particularly those related to food supply and field-specific observations – NASA identified several projects to leverage existing resources to improve the availability of relevant data.
This presentation introduces two such initiatives taken under the NASA Harvest Program for food security and agriculture. First, the NASA Harvest COVID-19 Dashboard for Agriculture was developed under NASA’s Rapid Response element to provide access to and visualization capabilities for data related to the spread of COVID-19, crop conditions and production, food security, macroeconomic variables, and markets and trade. The novel collection of these datasets includes COVID-19 case counts and vaccination rates, inter-country travel restrictions, GEOGLAM Crop Monitor crop condition reports, vegetation indices, food security indices, historical trade indices, and remotely sensed weather variables.
Efforts taken under the NASA Harvest COVID-19 Dashboard contributed in part to a new interdisciplinary research initiative, “Agricultural Supply Chains and Food Security in the COVID-19 World,” focused on providing timely, operational insight into food supply chains and the food system as a whole. Especially relevant is the integration of multiple sources of agricultural supply chains data from a geospatial perspective that includes geographic information systems (GIS), remote sensing, and economic modeling. Under this program, novel economic indicators based on new commercial datasets have been designed and combined with satellite data to inform global economic and food crisis models that highlight possible agricultural production and consumption trajectories. The context for these datasets is being defined by developing tools that enable the visualization of semantic webs and knowledge graphs, which are augmented via natural language processing workflows.
These programs aim to increase the operational preparedness of the agricultural monitoring community by aggregating all of the available relevant data in one location. Data for both systems are made available through the NASA Harvest Portal for data discovery and download, as well as programmatically via RESTful APIs that provide access to the data as geospatial image services and in other common data formats.
Inland excess water (IEW) is a type of flood in which large flat areas are covered with water for a period of several weeks to months. In areas with limited runoff, infiltration and evaporation, the superfluous water remains on the surface. Local environmental factors, like agricultural practices, relative relief differences and soil characteristics, play an important role in the development of IEW, which can cause severe water management problems but also provide opportunities to reduce water scarcity. In Hungary, on average, 110 000 ha of land is covered with IEW every year, but much larger inundations have also been reported (e.g., 445 000 ha in 1999 and 355 000 ha in 2011), resulting in serious financial, environmental and social problems and costs. One of the potential integrated and sustainable solutions to the inland excess water problem is to store the surplus water in agricultural areas for later periods of drought, or to allow the water to remain on areas designated as (temporary) wetlands, supporting ecosystem restoration. For such complex water management, it is important to understand where IEW develops. Before it is possible to take action, it is necessary to understand the phenomenon and identify the factors and processes that cause the formation of inland excess water. It is also necessary to determine the location and size of the inundations, to be able to plan storage possibilities or take operative measures to mitigate and prevent damage. When the locations and duration are monitored continuously, it may become possible to forecast the locations, size and duration of IEW in the future, and to develop preventive policies or determine how the surplus water can be used sustainably.
Four major approaches to map and monitor IEW can be identified. (1) The oldest approach is visual observation of inland excess water patches. This is labour intensive and can easily lead to errors due to misinterpretation and differences in observation methodology. (2) Aggregating the field observation maps over time can be useful for creating maps of the vulnerability to inland excess water floods. This approach is useful for identifying hazardous areas but cannot be used for operational intervention. (3) Modelling of inland excess water has also been performed using hydrological modelling packages, but this requires large amounts of accurate input data and extensive computational power, and usually cannot be applied over large areas. (4) Mapping and monitoring IEW based on remote sensing data and algorithms. Satellite-based monitoring provides the opportunity to detect IEW over large areas with high temporal and spatial resolution, and to standardize and automate the analysis.
This research presents and validates a new methodology to determine the extent of these floods using a combination of passive and active remote sensing data. The method can be used to monitor IEW over large areas in a fully automated way based on freely available Sentinel-1 and Sentinel-2 imagery. Currently, we determine the extent of IEW for 12 adjacent Sentinel-2 tiles in Hungary on a weekly basis. The large number of Sentinel-1 and Sentinel-2 satellite images and the IEW maps derived from them require a large amount of disk space. The processing of the images to IEW maps has been automated and is performed on a daily basis to reduce the demand on resources. To further reduce unnecessary storage and computation, we only calculate the maps during the IEW period, which usually lasts from February to April, although due to increased rainfall in 2021, maps were also calculated for May and June.
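As a rough illustration of the kind of optical/SAR combination described above, the sketch below fuses an NDWI water test on Sentinel-2 with a low-backscatter test on Sentinel-1 VV. The thresholds and the fusion rule are illustrative assumptions, not the calibrated values of the operational processor.

```python
# Hedged sketch of optical/SAR water mapping for IEW monitoring.
# Thresholds are illustrative, not the operational values.
import numpy as np

def iew_mask(green, nir, vv_db, ndwi_thr=0.2, vv_thr=-18.0):
    """Boolean inundation mask from co-registered 10 m arrays."""
    ndwi = (green - nir) / np.clip(green + nir, 1e-6, None)
    optical_water = ndwi > ndwi_thr        # open water is NDWI-positive
    sar_water = vv_db < vv_thr             # smooth water backscatters little
    return optical_water | sar_water       # union: either sensor suffices

# Example call with resampled Sentinel-2 bands and Sentinel-1 VV in dB:
# mask = iew_mask(s2_b03, s2_b08, s1_vv_db)
```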
Our method was validated during different IEW periods using very high-resolution optical satellite data and aerial photographs. Compared to earlier remote sensing-based methods, our method can be applied under unfavourable weather conditions, needs no human interaction, and gives accurate results for inundations larger than 1000 m². The overall accuracy of the classification exceeds 90%; however, smaller IEW patches are underestimated due to the spatial resolution of the input data.
The continuous monitoring of the inundations results in a large number of maps showing where and how IEW develops. The individual maps can be combined into frequency maps to support sustainable water management by revealing drier and wetter regions in the area. This can help to plan storage areas for surplus water and reduce water scarcity.
It is estimated that over 445 thermal power stations worldwide use sea water for cooling, and over 1447 use freshwater from rivers or lakes (van Vliet et al., 2016). Water is abstracted at one location and, after being heated by the condensers, is returned at a different location to avoid recirculation. For a large nuclear or fossil fuel power station this can correspond to over 100 m³ s⁻¹ of water at a temperature 11°C above the intake temperature. The cooling water plume impacts the aquatic environment through its heat content, raising metabolism and biological stress, and through the by-products of chlorination and other chemicals used to control biofouling. As sea temperatures warm due to climate change, the discharge temperature rises with them, which increases the environmental impact and can breach national regulations. However, monitoring the environmental impact is costly due to the temporal and spatial variation of the plume, in particular in tidal estuaries and coastal waters.
In this study we demonstrate and validate the use of Landsat 8 TIRS to observe and monitor power station discharges around the UK for two nuclear power stations with gigawatt capacity. By building a time series of plume location and intensity since 2013, it is possible to characterise and monitor changes to the impacted areas, in particular for intertidal mudflats, which can have high environmental value. Two atmospheric correction methods were tested and validated against in situ observations from the UK WaveNet network of surface buoys: a split-window method and a single-band radiative transfer correction. The split-window method (Du et al., 2015) does not require knowledge of the atmospheric water vapour content from a nearby weather station or from an atmospheric model (e.g. Rozenstein et al., 2014) and relies solely on the image data. The single-band method uses Band 10 to avoid the partially corrected stray-light problems that affect Band 11 more acutely (Gerace and Montanaro, 2017). It relies on an optical description of the atmosphere obtained by combining the NCEP atmospheric model and the MODTRAN radiative transfer model (Barsi et al., 2014). Validation at 5 locations along the Bristol Channel yielded RMSEs of 0.55°C for the split-window method and 0.61°C for the radiative transfer method, making both suitable for environmental monitoring, as well as for providing information to power stations on the occurrence of recirculation, which has important economic and operational impacts for energy producers. The major limitation of this method is the 16-day revisit time of the platform, but in light of both commercial and national agencies' plans to launch high-resolution thermal sensors, this is a cost-effective solution for environmental monitoring at a time when marine discharges will have increased impacts.
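For reference, the split-window approach follows the generic form of Du et al. (2015), in which surface temperature is estimated from the two TIRS brightness temperatures and the surface emissivity; as commonly written:

```latex
% T_i, T_j: Band 10/11 brightness temperatures; \varepsilon: mean band
% emissivity; \Delta\varepsilon: band emissivity difference; b_0..b_7:
% regression coefficients stratified by column water vapour.
T_s = b_0
    + \Big(b_1 + b_2\,\tfrac{1-\varepsilon}{\varepsilon}
           + b_3\,\tfrac{\Delta\varepsilon}{\varepsilon^{2}}\Big)\,
      \frac{T_i + T_j}{2}
    + \Big(b_4 + b_5\,\tfrac{1-\varepsilon}{\varepsilon}
           + b_6\,\tfrac{\Delta\varepsilon}{\varepsilon^{2}}\Big)\,
      \frac{T_i - T_j}{2}
    + b_7\,(T_i - T_j)^{2}
```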
References:
Barsi, J., Schott, J., Hook, S., Raqueno, N., Markham, B., Radocinski, R., 2014. Landsat-8 Thermal Infrared Sensor (TIRS) Vicarious Radiometric Calibration. Remote Sens. 6, 11607–11626. https://doi.org/10.3390/rs61111607
Du, C., Ren, H., Qin, Q., Meng, J., Zhao, S. (2015). A practical split-window algorithm for estimating land surface temperature from Landsat 8 data. Remote Sensing, 7(1), 647–665. https://doi.org/10.3390/rs70100647
Gerace, A., Montanaro, M., 2017. Derivation and validation of the stray light correction algorithm for the thermal infrared sensor onboard Landsat 8. Remote Sens. Environ. 191, 246–257. https://doi.org/10.1016/j.rse.2017.01.029
Ren, H., Du, C., Liu, R., Qin, Q., Yan, G., Li, Z., Meng, J. (2015). Atmospheric water vapor retrieval from Landsat 8 thermal infrared images. Journal of Geophysical Research: Atmospheres, 120, 1723–1738. https://doi.org/10.1002/2014JD022619
Rozenstein, O., Qin, Z., Derimian, Y., Karnieli, A. (2014). Derivation of land surface temperature for Landsat-8 TIRS using a split window algorithm. Sensors, 14(4), 5768–5780. https://doi.org/10.3390/s140405768
van Vliet, M., Wiberg, D., Leduc, S. et al. Power-generation system vulnerability and adaptation to changes in climate and water resources. Nature Clim Change 6, 375–380 (2016). https://doi.org/10.1038/nclimate2903
Water management associations, suppliers and municipalities face new challenges due to the impacts of climate change and the ongoing intensification of agriculture, which result in increased material inputs into watercourses and dams. Another important task is the prediction of changes in water quality and other hydrological aspects. At the same time, with the evolution of the Copernicus satellite platforms, the broader availability of satellite data offers great potential for deriving valuable, complementary information from Earth observation data that contributes to a detailed understanding of hydrological processes. Although the number of satellite data platforms that provide online processing environments is growing, it is still a big challenge to integrate those platforms into the traditional workflows of users from environmental domains such as hydrology. EFTAS had the opportunity to participate in two R&D projects in this field, both finished within the past 12 months: WaCoDiS and MuDak-WRM. Although both projects worked in the field of water management and on the question of how remote sensing can facilitate its different tasks, they focused on different aspects. WaCoDiS focused on the continuous tasks of a particular water management association (Wupperverband) and on how remote sensing methods can facilitate these tasks, with the aim of reducing costs or improving the quality of the results. MuDak-WRM, in contrast, aimed to develop a model as simple as possible for predicting mid- to long-term changes in the water quality of reservoirs and to connect it with remote sensing data. Whereas WaCoDiS concentrated on the specific region of the water management association, MuDak-WRM aimed to develop methods and models that are transferable to all regions worldwide. During the projects, different remote sensing datasets and a processing platform were created, mostly based on Sentinel-1 and Sentinel-2 imagery, tailored to the hydrological needs. We found that it was possible to facilitate and improve the different water management tasks. In the presentation we will show the particular tasks and our solutions to them, together with an overview of their advantages and disadvantages.
Across the globe, as one of the repercussions of climate change and global warming, many new glacial lakes have formed in previously glaciated areas. In addition, the area of many existing glacial lakes is growing. Prior research showed that rapid deglaciation and lake formation have dramatic effects on downstream ecosystem services, hydropower production and high-alpine hazard assessments. However, this extraordinary environmental change is currently only a side note in the perception of climate change impacts, second, for example, to the widely discussed loss of glaciers and permafrost. Glacial lake inventories are increasingly becoming available for high-alpine areas and Greenland, but it is essential to map and monitor the changes in the water extent of these lakes at a higher frequency for hazard assessment and Glacial Lake Outburst Flood (GLOF) risk estimation.
There are several underlying challenges in mapping and monitoring glacial lakes from space using optical and Synthetic Aperture Radar (SAR) satellite sensors. Most of these lakes are very small in area and frozen for a large part of the year, which makes mapping with satellite sensors challenging. Additionally, observing such lakes with optical satellite imagery such as Sentinel-2 is hampered by the sensor's inability to penetrate clouds. Moreover, cast shadows and cloud shadows, as well as increasing lake and atmospheric turbidity, pose further hurdles that need to be tackled. For monitoring with SAR satellite sensors (e.g. Sentinel-1), on the other hand, handling natural variations in backscattering from water surfaces and cast shadows are the main difficulties. To overcome these challenges, we propose to fuse the complementary information from optical and SAR imagery using a deep learning approach (with a Convolutional Neural Network backbone). The input data include Sentinel-2 L2A and Sentinel-1 SAR satellite imagery. The aim is to perform a decision-level fusion of information from the two heterogeneous satellite inputs, leveraging the advantages of both sensors through a data-driven, bottom-up methodology. Our target is to produce geolocated maps of the target regions in which the proposed deep learning methodology classifies each pixel as either lake or background.
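A minimal sketch of what such a decision-level fusion could look like is shown below: two small CNN branches produce per-pixel lake probabilities from Sentinel-2 and Sentinel-1 independently, and the two decisions are merged by averaging. The branch sizes, band counts and fusion rule are illustrative assumptions, not the project's final architecture.

```python
# Hedged sketch of decision-level optical/SAR fusion for lake mapping.
import torch
import torch.nn as nn

def branch(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(16, 1, 1))                     # per-pixel lake logit

class DecisionFusion(nn.Module):
    def __init__(self, s2_bands=4, s1_bands=2):  # e.g. RGB+NIR and VV+VH
        super().__init__()
        self.opt, self.sar = branch(s2_bands), branch(s1_bands)

    def forward(self, s2, s1):
        p_opt = torch.sigmoid(self.opt(s2))      # optical decision
        p_sar = torch.sigmoid(self.sar(s1))      # SAR decision
        return 0.5 * (p_opt + p_sar)             # fuse at decision level

model = DecisionFusion()
prob = model(torch.randn(1, 4, 128, 128), torch.randn(1, 2, 128, 128))
lake = prob > 0.5                                # lake vs. background map
```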
This work is part of two major projects: ESA AlpGlacier project that targets mapping and monitoring of the glacial lakes in the Swiss (and European) Alps, and the UNESCO (Adaptation Fund) GLOFCA project aiming to reduce the vulnerabilities of populations in the Central Asia (Kazakhstan, Tajikistan, Uzbekistan, Kyrgyzstan) region from GLOFs in a changing climate. Various regions in Central Asia find it challenging to cope with the drastic effects of climate change, especially the impacts on water-related disasters. Prior research (2009) by the World Bank concluded that Tajikistan and Uzbekistan are highly sensitive to climate change in the entire Central Asian region. Socially and economically underprivileged, indigenous populations, ethnic groups, women, children and elderly are especially vulnerable to the impacts of global warming, as adaptation and disaster risk management capacities are typically low in these regions. One of the major outcomes of climate change in Central Asia is melting of the glaciers which trigger the formation of new glacial lakes. As part of the GLOFCA project, we aim to develop a toolbox for mapping and monitoring the glacial lakes in the target regions in Central Asia.
The present study focuses on improving crisis preparedness and community resilience in fragile states through the provision of an early warning decision support system (DSS), at pre-operational level, able to provide, in a timely manner, critical geointelligence information enhanced through the combination of complementary sources of information.
The new capabilities recently introduced in Earth Observation concerning spatial and, above all, temporal resolution have dramatically enhanced the range of useful applications addressable with spaceborne sensors, opening the stage to new solutions and opportunities unthinkable just a few years ago. In addition, the availability of new technologies improving the management and exploitation of large volumes of data has allowed the design and development of automatic information extraction pipelines, enabling the definition of new geospatial indicators and “signals” that improve situational awareness, the monitoring of an area's evolution, and the monitoring of human ground activities.
Finally, the combination with non-EO data plays an essential role in enriching the final products (e.g. integration of context-related details) and, above all, in their advanced exploitation (e.g. triggering satellite data collection or properly focusing the analysis of remotely sensed data): social networks, news and media feeds, and Country Profiles providing political, social and economic context.
The proposed study concerns the monitoring of the filling of the Grand Ethiopian Renaissance Dam (GERD) basin, and it aims to show how EO-based products combined with non-EO data can provide key indicators and early warning concerning on-going activities, forecasts of future evolution, and the potential impact on the socio-economic stability of the countries involved.
The dam is located in Ethiopia’s Benishangul-Gumuz region, 45 km east of the border with Sudan, and sits on the Blue Nile, the main supplier of the Nile River (up to 86%). The potential consequences for regional stability of this controversial construction project, started in 2011, are quite straightforward:
- Sudan and Egypt fear that the GERD will reduce the amount of water available to them from the Nile, with potential effects on the countries' agricultural production (due both to the reduced water volume and, in the Egyptian delta, to the consequent relative rise in sea level, which will increase salinity; preliminary studies estimate a loss of up to 15% of arable land in the delta area).
- The dam is designed to generate about 6,000 MW, of which up to 5,000 MW are planned to be exported to other African states within 10 years, introducing a noticeable change in Africa's energy export context, and hence its economy.
- The area is already in a critical condition: several conflicts have been reported in the border area between Ethiopia and Sudan, as well as the presence of insurgent groups in Ethiopia suspected of receiving covert support from neighbouring countries.
The GERD construction started in 2011 and the operational plan is aggressive: fill the basin in 5-6 years (while Egypt demanded that this take no less than 12 years), start power production soon (initial reports indicated around August 2021) and, through challenging connectivity targets, provide energy to the vast number of rural homes currently without supply.
The main concern is therefore the dam's status and evolution (e.g. filling rate and the associated operational start of power production), as well as the estimation of potential future impacts on agricultural activities and, above all, on the stability of the countries involved.
In the present study the EO products focus on the quantitative assessment of the GERD basin's evolution. As emerged from the analysis of open-source information, the key parameters are the GERD filling rate and current level. To estimate these fundamental parameters, a multi-temporal assessment of the GERD filling activities was performed leveraging the Copernicus Sentinel-1 constellation and the USGS SRTM 30 m digital elevation model.
As a result, since the 30 m SRTM DEM was obtained from data collected before impounding, the refined water extent extracted from Copernicus Sentinel-1 amplitude imagery made it possible to automatically estimate the basin water height, both absolute and relative to terrain level, and the water volume. By applying this methodology to an extended subset of the full Copernicus Sentinel-1 time series, it was then possible to estimate the basin filling trend.
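The sketch below illustrates the kind of computation involved, assuming a co-registered pre-impoundment DEM and a Sentinel-1 water mask; the shoreline heuristic and variable names are illustrative, not the study's exact processing chain.

```python
# Hedged sketch: water surface height and volume from a pre-impoundment
# DEM plus a SAR-derived water mask. The 95th-percentile shoreline rule
# is a simple illustrative heuristic.
import numpy as np

def basin_height_volume(dem, water_mask, pixel_area_m2=100.0):
    """dem: pre-impoundment heights [m]; water_mask: bool array from S1."""
    surface = np.percentile(dem[water_mask], 95)    # ~shoreline elevation
    depth = np.clip(surface - dem, 0.0, None) * water_mask
    volume_m3 = float(depth.sum() * pixel_area_m2)  # integrate over wet pixels
    return surface, volume_m3

# Example call for 10 m pixels (100 m2 each):
# surface_m, volume_m3 = basin_height_volume(srtm, s1_water_mask)
```

Repeating this per acquisition over the Sentinel-1 time series would yield the filling trend described above.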
Taking into consideration the accuracy of the source data (Copernicus Sentinel-1 10 m spatial resolution and SRTM 30 m accuracy), and coupling such measurements with the dam operational parameters above, it is possible to provide relevant insights to increase situational awareness.
Soil moisture is one of the most important parameters in research on the condition of agricultural land, including meadows. Soil moisture affects almost all of a soil's physical and biochemical properties, as well as its microbiological activity.
The aim of this study was to provide information in the form of maps, charts, and a description of changes in the soil moisture of agricultural areas, with particular emphasis on meadow areas, for the Masovian Voivodeship (NUTS2). For this purpose, data from the Copernicus programme were used. Data were collected and compiled for the entire voivodeship and its counties. Built-up, forest and water areas were excluded from the analysis. The Surface Soil Moisture (SSM) product, provided by the Copernicus Land Monitoring Service, was used; it is derived from Sentinel-1 VV backscatter data and has a spatial resolution of 1 km x 1 km. Moreover, precipitation data derived from the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5 reanalysis were used.
The analysis of the spatial distribution of moisture showed that the Masovian Voivodeship is diverse in terms of soil moisture (differences of c.a. 20% between counties). The counties most exposed to low soil moisture are in the western and eastern parts of the voivodeship. The highest soil moisture was observed over urban areas; it is supposed that agricultural areas within urban areas are often meadows, which, as a rule, are distinguished by high soil moisture. The analyses for each month and week showed that 2021 was the year with the highest soil moisture, both in agricultural areas as a whole and in meadow areas. In each county, the average soil moisture was much higher than in the previous years. The year 2019, in turn, was characterized by extremely low soil moisture values. On the other hand, in meadow areas soil moisture in some counties was lower in 2020 than in 2019. This was not the case for the total agricultural area, which suggests that water in meadow areas behaves and is stored differently.
Rainfall had a significant impact on soil moisture in specific years. In 2021 precipitation was the highest and in 2019 the lowest, which corresponds to the soil moisture. This impact was particularly evident in April, when accumulated precipitation in 2019 and 2020 was very low (below 5 mm), while in 2021 it was high (about 50 mm). April was therefore the month with the greatest disproportion in average soil moisture over the Masovian Voivodeship between 2019-2020 and 2021 (c.a. 25% in 2019, c.a. 20% in 2020 and almost 60% in 2021). In May and June, when accumulated precipitation was similar across the years, soil moisture was also similar in each year (May: c.a. 40% in 2019, c.a. 35% in 2020 and c.a. 50% in 2021; June: c.a. 50% in each year). In August, when the disproportion in precipitation was very high again (c.a. 60 mm in 2019 and 2020, c.a. 160 mm in 2021), soil moisture in 2021 (70%) was much higher than in 2019 (50%) and 2020 (50%). Furthermore, it was noticed that on rainy days and the day after rain, soil moisture was unnaturally high (close to 90% over arable land). It is supposed that the model may overestimate soil moisture. However, the relative differences between observations seem consistent with actual soil moisture: the model responds well to changes in soil moisture and detects both increases and decreases of this parameter.
The results of the research on soil moisture show that the model could be applied to restore the natural features of meadow environments. This is important because grassland ecosystems play a very important role in the natural environment due to their influence on the formation of the micro- and macroclimate; they regulate the water balance in catchments and protect the soil against water and wind erosion.
To sum up, significant differences in the spatio-temporal distribution of soil moisture over the Masovian Voivodeship were observed. 2021 was the year with the highest soil moisture, while 2019 had the lowest; this corresponds closely with accumulated precipitation. Moreover, the research revealed that average weekly data per county, obtained from Copernicus services, seem the most useful from a monitoring perspective. With this method it is possible to detect breakthroughs in the seasonal variability of soil moisture, as well as spatial differences.
The research work was conducted within the project financed by the European Space Agency, titled “Development of Standardized Practices for Monitoring Landscape Values and Natural Capital Assessment (MONiCA)”. The end user of the project is the Mazovian Voivodship Office.
Climate change has greatly altered the occurrence of extreme events such as droughts, floods and wildfires in the past years. The dire consequences of intense drought have been affecting dryland crop production. Some areas (like the Mediterranean and the Sahel) have been shown to be more prone to climate change, and thus droughts and their consequences are expected to exacerbate there in the future. Soil moisture (SM) data have been shown to be key in the detection of early-onset drought. Current drought observation and warning systems, such as the European Drought Observatory, the Global Drought Observatory, or the U.S. Drought Monitor, offer maps of a combined drought index derived from different data sources (meteorological and satellite measurements and models). The SM anomaly is acknowledged to be a good metric for drought, and consequently all the global drought observatories include remote sensing (RS) SM, but at a low spatial resolution. Consequently, regional drought events are frequently not captured, or their intensity is not fully depicted.
In order to detect the onset of crop water stress and to trigger irrigation to mitigate the effects of potential droughts, modern irrigators use in situ SM measurements. Unfortunately, these are costly; combined with the fact that they are available only over small areas and might not be representative at the field scale, remote sensing is a cost-effective approach for mapping and monitoring extended areas.
This study focuses on a new pilot project implemented over two areas located in the Tarragona province of Catalonia, Spain, whose main aim is to support resilient irrigation practices by offering advice based on drought indices. For this purpose, spatialized drought indices at high (1 km) resolution are derived from remotely sensed SM on a weekly basis. These indices are then used to provide irrigation recommendations to farmers, who have recently switched from dryland crops to vineyards.
Different indices, such as the Palmer Drought Severity Index, the Crop Moisture Index, the Standardized Precipitation Index or the Soil Moisture Deficit Index (SMDI), have been developed in the literature to provide insight into agricultural drought monitoring and forecasting. Most of the existing well-known drought indices have been developed in conjunction with hydrological and meteorological models, i.e., they use parameters such as rainfall, evapotranspiration, run-off and other model-derived indicators to give a comprehensive picture for decision-making. When used in conjunction with remote sensing-derived parameters, certain artefacts can appear in the drought indices, brought about by the high variability of remotely sensed data in comparison with model data. More specifically, the presence of outliers can have a high impact on remote sensing-derived drought indices. This study has focused on analysing the presence and the impact of such outliers in the computation of the SMDI. High-resolution (1 km) SMOS (Soil Moisture and Ocean Salinity) and SMAP (Soil Moisture Active Passive) SM were first derived using the DISPATCH (DISaggregation based on a Physical and Theoretical scale CHange) methodology. High-resolution root-zone soil moisture (RZSM) products were then derived from the 1 km surface SM (SSM) by applying a recursive formulation of an exponential filter. Both SSM and RZSM were subsequently used to derive SMDI representative of the surface and root-zone layers, on a weekly basis, for the period 2010-2021, over the two areas of the above-mentioned pilot project. In the computation of the SMDI for a given week, the historical maximum, minimum and median of the corresponding month are used. The presence of outliers in the historical maximum and minimum was identified after a close inspection of the SMDI estimated using the original definition. The outliers are in line with the nature of the sensors used to measure SM remotely, which are naturally noisier than in situ sensors. Therefore, a new strategy has been developed which uses percentiles to compute values corresponding to a “maximum” and a “minimum” that are not affected by the outliers. Results have shown that by using percentiles instead of the maximum and minimum values directly, the artefacts present in the SMDI are mitigated. Moreover, when comparing the “corrected” SMDI derived from SSM with the “corrected” SMDI derived from RZSM, the results show that the SMDI based on RZSM is more representative of the hydric stress level of the plants, given that RZSM is better suited than SSM to describe the moisture conditions in the deeper layers, which are the ones plants use during growth and development.
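Two of the building blocks mentioned above are standard enough to sketch. The snippet below shows a recursive exponential filter of the kind used to propagate SSM to a root-zone estimate, and a percentile-based soil moisture deficit in the spirit of the SMDI; the characteristic time T, the percentile levels and the 50-point scaling follow common formulations and are assumptions here, not the project's exact settings.

```python
# Hedged sketch of (1) the recursive exponential filter (SWI-style) and
# (2) a percentile-based soil moisture deficit robust to outliers.
import numpy as np

def exp_filter(ssm, t_days, T=20.0):
    """Recursive exponential filter; ssm, t_days are 1-D float arrays."""
    swi = np.empty_like(ssm, dtype=float)
    swi[0], K = ssm[0], 1.0
    for n in range(1, len(ssm)):
        K = K / (K + np.exp(-(t_days[n] - t_days[n - 1]) / T))
        swi[n] = swi[n - 1] + K * (ssm[n] - swi[n - 1])
    return swi

def soil_deficit(sm_week, history):
    """Percent deficit vs. robust climatology of the same calendar month."""
    med = np.median(history)
    lo, hi = np.percentile(history, [5, 95])    # robust "min"/"max"
    if sm_week <= med:
        return 100.0 * (sm_week - med) / (med - lo)
    return 100.0 * (sm_week - med) / (hi - med)

def smdi(deficits):
    """SMDI_j = 0.5 * SMDI_{j-1} + SD_j / 50 over weekly deficits."""
    out, prev = [], 0.0
    for sd in deficits:
        prev = 0.5 * prev + sd / 50.0
        out.append(prev)
    return np.array(out)
```

Replacing the historical extremes with the 5th/95th percentiles is what keeps a single noisy acquisition from stretching the normalisation range, which is the mitigation strategy the abstract describes.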
The study provides an insight into obtaining robust, high-resolution derived drought indices based on remote-sensing derived SSM and RZSM estimates, for the improvement of resilient irrigation techniques. With the SSM-derived SMDI being currently used operationally and the RZSM-derived SMDI planned to be available soon, any improvement in the SMDI estimates will further improve irrigation advice.
ABSTRACT
Spaceborne radars for oceanography and hydrology use near-nadir ranges because the backscattered signal is higher at these incidence angles. The SWALIS (Still Water Low Incidence Scattering) and KaRADOC (Ka RADar for Ocean measurements) sensors were developed for airborne radar measurements in Ka band [1]. These sensors are dedicated to oceanography and hydrology applications for climate purposes. Nevertheless, there are some slight differences between SWALIS and KaRADOC.
On the one hand, the objective of the SWALIS airborne radar system is to perform backscatter measurements (the amplitude of the reflection coefficient of a rough surface, σ0) at low incidence angles to characterize hydrological areas of interest in low wind conditions [2]. These measurements are used to study the following points:
• inhomogeneities of roughness of hydrological surfaces and edge effect,
• conditions for obtaining cases of “dark water”,
• water/bank contrast,
• partially covered areas (crops, flooded areas).
Thus, the SWALIS sensor is intended to support calibration and validation operations for the future SWOT mission.
On the other hand, the KaRADOC radar system is based on the SWALIS architecture and is dedicated more to measuring ocean surface current velocity. We operated the KaRADOC sensor at a 12° incidence angle for the DRIFT4SKIM campaign, which was organized in November 2018 [3]. In addition, as part of the SUMOS campaign (Surface Measurements for Oceanographic Satellites) during this past February and March, we used KaRADOC to measure radar echoes at multiple incidence angles and also to validate the SKIM concept.
To make the measurements physically interpretable (and more specifically so for the SWALIS sensor), sensor calibration operations are necessary. This communication describes the procedures we developed to obtain calibration coefficients applicable to either the SWALIS or the KaRADOC sensor. The calibration campaign was performed at the MERISE (MontErfil station for RadIo and remote SEnsing) station located near Monterfil (Ille-et-Vilaine, France). The calibration bench consists of a 15-metre-high tower on which the radar system is installed (see Fig. 1) and a mast supporting the calibration targets, i.e. trihedral corner reflectors (see Fig. 2). In order to comply with free-space conditions, the calibration targets are placed 351 metres from the radar system (see Fig. 3).
The first step is to define the radar parameters applied during the airborne measurements. We recall that we use a leaky-wave antenna: the tilt angle therefore depends on the frequency. Thus, for this calibration campaign, we defined 7 frequencies. Next, we defined a set of 13 trihedral corners, which were measured at each of the 7 chosen frequencies. For each frequency, the antenna is pointed precisely at the trihedral corner in order to obtain the maximum response. As the maximum RCS is well known, we can relate the power reflected by the trihedral to its RCS. We finally apply a linear regression to the measurements performed, thereby obtaining a linear relationship between the received power and the RCS of the trihedral, as described by the radar equation (see Fig. 4). The calibration coefficients obtained are valid for a given distance; it is therefore necessary to rescale these coefficients to the measurement distances of the airborne campaigns.
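The sketch below illustrates this kind of calibration fit, assuming the well-known peak RCS of a triangular trihedral reflector, sigma = 4*pi*a^4 / (3*lambda^2), and mock received powers; the edge lengths, wavelength and noise level are illustrative, not the campaign's actual target set or measurements.

```python
# Hedged sketch: linear calibration fit between received power [dB] and
# known trihedral RCS [dBsm]. All numeric values are illustrative.
import numpy as np

lam = 3e8 / 35.75e9                        # ~8.4 mm, an assumed Ka-band wavelength
edges = np.linspace(0.05, 0.30, 13)        # 13 hypothetical trihedral edge sizes [m]
rcs_dbsm = 10 * np.log10(4 * np.pi * edges**4 / (3 * lam**2))  # peak trihedral RCS

# Mock received powers: a linear law plus measurement noise
p_rx_db = 0.98 * rcs_dbsm - 87.0 + np.random.normal(0, 0.3, edges.size)

slope, intercept = np.polyfit(rcs_dbsm, p_rx_db, 1)   # calibration line
rcs_est = (p_rx_db - intercept) / slope               # invert P_rx -> RCS
print(f"P_rx[dB] = {slope:.3f} * RCS[dBsm] + {intercept:.2f}")
```

As noted above, the fitted coefficients hold for the 351 m bench range and would be rescaled to the slant ranges of the airborne campaigns.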
In the developed version of our communication proposal, we will return in more detail to the calibration procedure: processing of the recorded radar data, modelling of the relationship between the reflected power and the calibration target RCS, and a description of the calibration coefficients obtained.
REFERENCES
[1] MÉRIC S., LALAURIE J.-C., LO M.-D., GRUNFELDER G., LECONTE C., LEROY P., POTTIER É., SWALIS/KaRADOC: an airplane experiment platform developed for physics measurement in Ka band. Application to SWOT and SKIM mission preparations, In proceedings of 6th Workshop on Advanced RF Sensors and Remote Sensing Instruments & 4th Ka-band Earth Observation Radar Missions (ARSI’19 & KEO’19), ESA/ESTEC, 11-13 November 2019, Noordwijk, The Netherlands.
[2] KOUMI, J.-C., MÉRIC S., POTTIER É., GRUNFELDER G., The SWALIS project: First results for airborne radar measurements in Ka band, In proceedings of European Radar Conference (EuRAD 2020), Jan 2021, Utrecht, Netherlands.
[3] MARIÉ L., F. COLLARD, F. NOUGUIER, L. PINEAU-GUILLOU, D. HAUSER, F. BOY, S. MÉRIC, C. PEUREUX, G. MONNIER, B. CHAPRON, A. MARTIN, P. DUBOIS, C. DONLON, T. CASAL, AND F. ARDHUIN; Measuring ocean surface velocities with the KuROS and KaRADOC airborne near-nadir Doppler radars: a multi-scale analysis in preparation of the SKIM mission, Ocean Sci., 16, 1399–1429, 2020, https://doi.org/10.5194/os-16-1399-2020
In this paper, we focus on understanding the changes in the river environment of two physically and geomorphologically comparable rivers: the river Mura and its course through north-eastern Slovenia, and the river Vjosa in southern Albania. Both rivers share a common historical, geomorphological, and economic background. The difference between the two rivers is that the Mura is heavily dammed in its upper part in Austria and regulated in some sections throughout its course, while the Vjosa has remained almost natural. Based on our interdisciplinary approach combining remote sensing, anthropological, and geographical research within the EOcontext project*, we try to understand how human interactions have modified the river environment and how the river environment has affected people’s lives.
In order to answer this, we used multilevel change detection and time-series approaches to observe and predict how human-induced influences (especially hydropower plants) can affect river environments. Heterogeneous river patterns in different geographic and topographic contexts were mapped automatically. At the same time, we conducted fieldwork and collected a wealth of in situ data, as we are also interested in whether and how people living in these two riverine landscapes perceive, experience, live with, and make sense of these changes.
Our workflow consists of three stages. First, we performed a land use/land cover time-series analysis to detect intra-annual changes in the surface water extent of the two rivers. To do this, we used Landsat data (up to 2015) and Sentinel-2 imagery (from 2015 until the present) of the rivers and their wider riparian areas, gathering a comprehensive overview of changes over the last four decades. We used relatively simple change detection algorithms (based on classifications using SVM and RF approaches, as sketched below) to identify the areas of the most extensive change along the course of both rivers. For the Mura we considered four land cover classes (river, agriculture, mixed forest, and urban) and for the Vjosa we used five (the same as for the Mura, with the addition of gravel). The gravel bars on the Mura are not visible at the 30 m Landsat resolution and were therefore not included in the classification. Even in the case of the Vjosa, gravel bar classification is problematic, as gravel represented a very small part of the training data.
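A minimal sketch of such post-classification change detection, assuming per-pixel spectral features and training labels have already been extracted; all arrays here are synthetic placeholders, and scikit-learn's random forest stands in for whichever RF/SVM implementation was actually used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical inputs: training samples with land cover labels
# (0=river, 1=agriculture, 2=mixed forest, 3=urban, 4=gravel for the Vjosa)
# and per-pixel feature vectors for two acquisition dates.
rng = np.random.default_rng(0)
X_train = rng.random((500, 6))
y_train = rng.integers(0, 5, 500)
X_t1 = rng.random((10000, 6))   # pixels at date 1
X_t2 = rng.random((10000, 6))   # pixels at date 2

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
map_t1 = rf.predict(X_t1)
map_t2 = rf.predict(X_t2)

# Post-classification change detection: pixels whose class label differs
# between the two dates are flagged as changed.
changed = map_t1 != map_t2
print(f"changed pixels: {changed.sum()} of {changed.size}")
```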
Second, we applied spectral signal mixture analysis (SSMA) to achieve more precise, subpixel mapping, considering only three main land cover classes (gravel, vegetation, and surface water). Each land cover class of interest was represented by an endmember, i.e. the spectral signature of a pure pixel containing only the selected land cover class. To increase the separability of the land cover classes, we calculated several spectral indices (MSAVI2, NDVI, NDWI, and MNDWI) and used them along with the reflectances of the spectral bands as input to the SSMA. The subpixel approach enabled more accurate mapping of riverine landscapes and proved especially valuable for gravel bar mapping. The extent of gravel bars was monitored as an indicator of the natural dynamics of river processes.
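A minimal sketch of the unmixing step under these definitions, using non-negative least squares as one standard way to estimate fractional abundances; the endmember values below are illustrative, not the study's measured signatures.

```python
import numpy as np
from scipy.optimize import nnls

# Endmember matrix E: one column per class (gravel, vegetation, water); rows are
# the stacked features (band reflectances plus spectral indices). Illustrative values.
E = np.array([[0.30, 0.05, 0.02],
              [0.28, 0.08, 0.03],
              [0.25, 0.40, 0.01],
              [0.22, 0.35, 0.00]])

pixel = np.array([0.18, 0.17, 0.22, 0.19])   # a mixed pixel's feature vector

# Non-negative least squares gives the fractional abundances; normalizing by the
# sum approximates the sum-to-one constraint of fully constrained unmixing.
frac, _ = nnls(E, pixel)
frac /= frac.sum()
print(dict(zip(["gravel", "vegetation", "water"], frac.round(3))))
```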
Third, the results of the remote sensing analysis were correlated and compared with the results of the field research. Based on several field visits to the two study areas, we examined whether and how the inhabitants of these two areas perceive changes in their geophysical environment. Drawing on many years of research experience and on semi-structured interviews with 40 interviewees, we identified changes in the riverine landscape in which they live. The data from the field research were analysed in their specific social, cultural, historical and political context.
The results of the remote sensing analyses are presented in the form of land use change maps, showing the extent of land use change and the extent of gravel bars. On the Mura, the presence of the different land cover classes is very uniform and stable. This is to be expected, as the Mura is regulated in this area. However, hydrological data show that the Mura has lost the stability of its water flow on its way through Slovenia, that the water level and the groundwater level decreased due to the heavy damming of the upper and middle course, and, what is particularly problematic, that a deepening of the river bottom can be observed. These ongoing changes cannot be adequately detected with remote sensing analyses. In the case of the Vjosa, there is much greater variability in the presence of the different land cover classes over the years, as the river is much more dynamic and almost intact. The results also show a high correlation between the water surface area identified in the EO data and the water level measured in situ at the gauging station. Results from the field show that people observe most of the changes in their environment that we detected using the EO data, though they understand and explain them in the language of their respective socio-cultural environments. The proposed methodology can be used to increase quantitative knowledge of river forms and processes over time. We also believe that combining different social, historical, geographical, hydrological, and ecological aspects adds value to the understanding of the remote sensing results. We therefore stress the importance of contextualising the obtained spatial results.
*EOcontext (Contextualization of EO data for a deeper understanding of river environment changes in Southeast Europe) project is funded by the Government of Slovenia through an ESA contract under the EO science for society permanently open call.
The Aculeo lagoon, located in central-southern Chile, once represented, together with the Maipo River, one of the two main water sources of the commune of Paine, Chile. Over the last decade, however, the lagoon experienced a severe decrease in its water level, culminating in a complete dry-up in May 2018. The decline started in 2009: the lagoon lost 50% of its water surface within four years (2010-2014), remained stable for the following two years, shrank by another 50% in 2017, and disappeared completely in 2018. To explain this phenomenon, the aim of the present study was to investigate parameters that might have forced the disappearance and that can be observed from space and analyzed by remote sensing.
In a first step, we calculated, visualized and analyzed the surface variations of the Aculeo lagoon by applying automated pixel differentiation (Normalized Difference Water Index, NDWI) to satellite images acquired by Landsat 7 and Landsat 8 between 2006 and 2019. Additionally, in order to analyze the impact of rainfall and temperature variations on water level changes, the Pearson correlation coefficient was calculated. In a second step, we added agriculture-related parameters, such as evapotranspiration and irrigation, because the Aculeo sector is characterized mainly by agricultural activities. For this purpose, we applied the SEBAL (Surface Energy Balance Algorithm for Land) algorithm, which allows modelling of evapotranspiration, biomass growth, and water deficit considering soil moisture. In our case, the model was initiated using Landsat 7 satellite images, a digital elevation model, and climatic data collected from meteorological stations near the study area, as already required in the first step.
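A minimal sketch of these two operations, NDWI computation and the rainfall correlation, with synthetic stand-ins for the annual water-surface and rainfall series (all numbers are illustrative only):

```python
import numpy as np
from scipy.stats import pearsonr

def ndwi(green, nir):
    """Normalized Difference Water Index: (G - NIR) / (G + NIR);
    pixels with NDWI > 0 are typically treated as water."""
    return (green - nir) / (green + nir + 1e-9)

# Hypothetical per-year series: lagoon surface area (km^2) from thresholded
# NDWI masks, and annual rainfall (mm) from nearby stations.
area_km2 = np.array([12.1, 11.0, 8.3, 6.9, 6.1, 6.0, 3.1, 0.4, 0.0])
rain_mm  = np.array([480., 410., 330., 300., 350., 340., 250., 190., 170.])

r, p = pearsonr(rain_mm, area_km2)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```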
As a first result, a direct correlation between the water surface variations of the Aculeo lagoon and rainfall was detected. Precipitation has shown a continuous deficit since 2009, which coincides with the so-called mega drought affecting large parts of central-southern Chile. The Pearson correlation coefficient shows a positive correlation between the decrease in rainfall and the disappearance of the Aculeo lagoon. The highest positive correlation coefficients occur in 2010, 2015 and 2018, which coincides with significant water surface reductions. Temperature variations do not have a significant impact on the disappearance of the lagoon, although increasing contamination due to eutrophication was detected, which can be correlated with the higher average temperatures during the study period. Nevertheless, the Pearson correlation coefficient between temperature and water surface reduction is negative for all years, which means that temperature variations did not play a significant role in the disappearance of the Aculeo lagoon.
With respect to land use in the vicinity of the Aculeo lagoon, the SEBAL results indicate that it remained quite similar over the years, as agriculture is a very important source of income for the population. Nevertheless, there is evidence that people decided to plant crops that require less water to grow and be harvested, such as wheat, oats, grapes, and some citrus. This is particularly notable as crops with the highest evapotranspiration decreased, while crops with lower evapotranspiration became more common. Furthermore, there is land where agriculture disappeared completely as it became less profitable.
Overall, it can be concluded that the Aculeo lagoon dried out due to a significant precipitation deficit lasting almost 10 years, and that the overexploitation of land by agricultural activities made an important contribution as well.
Renewable green energy will be the most important part of energy development in the twenty-first century, with photovoltaics (PV) considered a key technology for this kind of energy supply. Monitoring and evaluating the PV modules of power plants is of great importance to maintain and optimize the efficiency of solar energy systems and to reduce production costs for PV power plant operators. Earth Observation (EO) sensors can acquire multitemporal information on these targets of interest. Previous studies have shown the ability to detect PV areas from multispectral data (Malof et al., 2016, Yu et al., 2018) or hyperspectral data (Ji et al., 2021a). Since the reflectance of a PV panel is closely related to the solar energy the panel absorbs, hyperspectral data also have great potential for monitoring the soiling status and its progression.
Airborne HySpex data were collected over Oldenburg, Germany, with two cameras covering the visible near-infrared (VNIR) and short-wave infrared (SWIR) spectral ranges. The VNIR sensor acquires 160 bands at a spatial resolution of 0.6 m; the SWIR sensor covers its spectral range in 256 channels at a spatial resolution of 1.2 m. Previous studies characterized the spectral variation of PV modules for different detection angles with goniometer measurements, specifically 61 measurements covering zenith and azimuth angles of 0° to 75° and 0° to 330°, respectively. These data show that the BRDF effect on PV panel reflectance even affects the value of the Hydrocarbon Index (HI), an important spectral feature of PV modules, and could thus influence detection accuracy (Ji et al., 2021). For the PV power plant at Oldenburg, Germany, a Digital Elevation Model (DEM) derived from the 3K camera is available, showing the panel elevation changes. From the elevation difference across each row of the PV system, we can derive the orientation angle of the PV modules, which can in turn be used to analyse the relationship between orientation angle and module spectra. In this study, we use airborne hyperspectral data in conjunction with a DEM and available ground truth on PV panel locations and areas as a reference, in order to investigate the potential spectral variation of PV modules with different orientation angles.
First, we applied the PV coverage vectors previously derived by Ji et al. (2021) to the HySpex data and collected all spectra. Subsequently, we applied these vectors to the DEM data, calculated the difference between the two sides of each panel, and obtained the average elevation difference. The elevation difference of the PV panels was then used, together with their width, to calculate their orientation angles. Finally, a regression was performed between the spectra and the respective PV orientation angles and the result was analysed. In order to better study the spectral variability of PV modules, many factors need to be considered; one of them is the different installation angles of PV modules on roofs or in PV systems. The study aims at gaining a better understanding of the spectral variation of PV modules and, more generally, at evaluating the potential of hyperspectral data for PV module monitoring. Further research can be conducted with a broader and deeper knowledge of PV spectral variability.
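A minimal sketch of the geometric step, assuming the panel width refers to the slant width of a module row (if it is the horizontal footprint, arctan should replace arcsin); all numbers are illustrative.

```python
import numpy as np

def panel_tilt_deg(dh, panel_width):
    """Tilt angle from the DEM elevation difference between the upper and
    lower edge of a module row. panel_width is assumed to be the slant
    width of the row, hence arcsin; clipping guards against noisy dh."""
    return np.degrees(np.arcsin(np.clip(dh / panel_width, -1.0, 1.0)))

dh = np.array([0.45, 0.52, 0.50])      # elevation differences (m), illustrative
width = 1.65                           # assumed module row slant width (m)
print(panel_tilt_deg(dh, width).round(1))
```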
References
• Malof, J. M., Bradbury, K., Collins, L. M., & Newell, R. G. (2016). Automatic detection of solar photovoltaic arrays in high resolution aerial imagery. Applied energy, 183, 229-240.
• Yu, J., Wang, Z., Majumdar, A., & Rajagopal, R. (2018). DeepSolar: A machine learning framework to efficiently construct a solar deployment database in the United States. Joule, 2(12), 2605-2617.
• Ji, C., Bachmann, M., Esch, T., Feilhauer, H., Heiden, U., Heldens, W., Hueni, A., Lakes, T., Metz-Marconcini, A., Schroedter-Homscheidt, M. and Weyand, S., 2021. Solar photovoltaic module detection using laboratory and airborne imaging spectroscopy data. Remote Sensing of Environment, 266, p.112692.
The world’s first large-scale offshore wind farm was installed at Horns Rev in the North Sea in 2002, twenty years ago. Since then, the offshore wind energy sector has grown immensely to become a global business, which plays a major role in the green energy transition. The global offshore wind energy capacity was 35 GW in 2020, and offshore wind is considered to have the biggest growth potential of any renewable energy technology (Global Wind Energy Council, 2021). Wind turbines are growing in capacity, height, and blade size, and new technologies such as floating offshore wind turbines are currently emerging.
Since observations of met-ocean parameters offshore are sparse, the wind energy industry relies largely on atmospheric modeling and short measurement campaigns for the planning of future wind energy projects. The use of EO data sets and derived variables is not yet widespread in this community, and the learning curve for exploiting such data sets remains steep. As part of the H2020 project e-shape (https://e-shape.eu/), researchers from the Technical University of Denmark, Dept. of Wind Energy (DTU Wind Energy) have established co-design cycles with users from the wind energy industry. The objective is to better understand the industry's views on the usefulness and usability of EO-based data sets, primarily wind maps retrieved from SAR and scatterometers and combinations of the two. We will present the main insights gathered from this work along with our most recent research and development of satellite-based products tailored to wind energy applications.
Thanks to an almost uninterrupted supply of satellite SAR scenes from the European Space Agency, from the ERS-1/2, Envisat, and Sentinel-1 A/B missions, we have explored the potential of EO-based information for wind energy applications for two decades. At the earliest stages, our research was case oriented, as only a few wind farms existed and the number of available SAR images was small. Nevertheless, it became evident that the impact of large offshore wind farms on local wind conditions can be observed and quantified from SAR imagery (Christiansen & Hasager, 2005). The spatial extent of wind farm wakes, i.e. regions with reduced wind speed and increased turbulence, can be up to 100 km under ideal atmospheric conditions. Today, we have hundreds of thousands of SAR scenes at our disposal, and wind farm wake analyses are performed in a systematic fashion for many wind farms in sequence. This has led to new insights about the impact of e.g. coastal wind speed gradients, wind farm layouts, and turbine densities within the farms on the wind climate in the vicinity of large wind farms.
The fast-growing archives of satellite SAR scenes also offer an opportunity to perform statistical analyses in order to map the available wind energy potential, or resource, over large offshore areas. Wind resources over the European seas have been mapped in connection with the New European Wind Atlas (Hasager et al. 2020) and annual updates of these maps are foreseen. The wind resource maps represent the outcome of several processing steps, which are performed in an automated fashion: 1) download of SAR scenes and ancillary data sets, 2) inter-calibration of the Normalized Radar Cross Section originating from different SAR sensors, 3) wind speed retrieval using a Geophysical Model Function (Hersbach, 2010), 4) re-projection to a uniform lat/lon grid, and 5) wind resource estimation. The procedure can be applied to any location in the world, including the emerging offshore wind energy markets e.g. in Asia and the Americas.
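As an illustration of the final step, the following sketch fits a Weibull distribution to a sample of retrieved wind speeds and computes the mean wind power density from its moments. The sample here is synthetic, and a Weibull fit is one common way to summarize a wind resource, not necessarily the exact estimator used for the atlas.

```python
import numpy as np
from scipy.stats import weibull_min
from scipy.special import gamma

# Hypothetical sample of SAR-retrieved wind speeds (m/s) at one grid cell.
rng = np.random.default_rng(1)
u = weibull_min.rvs(2.2, scale=9.5, size=800, random_state=rng)

# Fit a Weibull distribution (location fixed at 0, as usual for wind speed).
k, _, A = weibull_min.fit(u, floc=0)

# Mean wind power density from the Weibull moments:
#   E = 0.5 * rho * <u^3> = 0.5 * rho * A^3 * Gamma(1 + 3/k)
rho = 1.225                                  # air density (kg/m^3)
power_density = 0.5 * rho * A**3 * gamma(1 + 3 / k)
print(f"k={k:.2f}, A={A:.2f} m/s, power density={power_density:.0f} W/m^2")
```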
Maps showing instantaneous wind fields retrieved from SAR imagery, as well as the wind resource maps calculated over Europe, are available through the Global Wind Atlas Science Portal at https://science.globalwindatlas.info/ (see also Figure 1). So far, the EO-based data sets are mostly browsed and downloaded by users from academia, including DTU's own students and staff. In order to make the service more attractive for users in the wind energy industry, we have established co-design cycles with three types of industry users: wind farm developers, offshore wind consultancies, and providers of wind data to the industry. Representatives of these user categories have been interviewed and confronted with a prototype of our EO-based service. The feedback gained from the user interviews has been structured, and the following cross-cutting requirements have been identified:
• Easy-to-read documentation of the EO-based data sets is needed e.g. a blog, explainers, and illustrative examples.
• EO-based parameters should come with quality flags e.g. indicators of bright targets, bathymetry effects, and atmospheric stability conditions.
• User-defined time series for specific points should be easy to extract in standardized formats - to be used in combination with other wind data sets.
• Co-located wind and wave height information is desired; especially for floating offshore wind energy.
Work is ongoing to improve the EO-based service according to the industry users' inputs (Karagali et al. 2021). In the longer term, it will also be necessary to establish a sustainable business model where the costs associated with the service delivery are covered by the end users. Handling the terabytes of data associated with the processing of SAR data and scatterometer wind products drives an ever-increasing need for high-performance computing and storage capacity. This represents another focus point of the e-shape project as well as of the EuroGEO and Global Earth Observation System of Systems (GEOSS) communities.
Acknowledgements
The project ‘EuroGEO Showcases: Applications Powered by Europe’ (e-shape) has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement 820852. The European Space Agency (ESA) is acknowledged for satellite SAR scenes and ASCAT wind data are obtained from Copernicus Marine Service (CMEMS).
References
Christiansen, M. B., & Hasager, C. B. (2005). Wake effects of large offshore wind farms identified from satellite SAR. Remote Sensing of Environment, 98, 251-268. https://doi.org/10.1016/j.rse.2005.07.009.
Global Wind Energy Council (2021). Global Offshore Wind Energy Report 2021. 136 pp. Available online at https://gwec.net/global-offshore-wind-report-2021/.
Hasager, C. B., Hahmann, A. N., Ahsbahs, T. T., Karagali, I., Sile, T., Badger, M., & Mann, J. (2020). Europe's offshore winds assessed with synthetic aperture radar, ASCAT and WRF. Wind Energy Science, 5(1), 375–390. https://doi.org/10.5194/wes-5-375-2020.
Hersbach, H. (2010). Comparison of C-Band scatterometer CMOD5.N equivalent neutral winds with ECMWF. Journal of Atmospheric and Oceanic Technology, 27(4), 721-736. https://doi.org/10.1175/2009JTECHO698.1.
Karagali, I., Badger, M., & Hasager, C. (2021). Spaceborne Earth Observation for Offshore Wind Energy Applications. Geoscience and Remote Sensing (Igarss), Ieee International Symposium, 172–175. https://doi.org/10.1109/IGARSS47720.2021.9553100.
Data from the Global Precipitation Measurement (GPM) mission and the state-of-the-art combined Integrated Multi-satellitE Retrievals for GPM (IMERG) product are used to estimate the risk of rain erosion at wind turbines. Rain erosion at wind turbines causes a loss in profit at several wind farms, as the power production is reduced for turbines operating with eroded blades (Bak et al. 2020). Repair takes place on average every 8 years, and the cost of repair is high, in particular at offshore sites (Mishnaevsky and Thomsen, 2020).
The rain impinging on the leading edge of the blades causes damage. It typically starts near the tip of the blades, where the blade speed is highest and thus the closing velocity, i.e. the impact speed between the drops and the blade, is highest. Over time, erosion will also occur further inboard on the blades; roughly, the outer fifth of the blade may be affected by erosion. The eroded blades cause a loss in power production due to poorer aerodynamic performance. Blade repair can be done to limit the aerodynamic loss.
The current study focuses on a method to predict the risk of rain erosion at wind turbines using GPM and IMERG data as input to the blade damage model. The blade damage model was established through analysis of laboratory experiments with controlled rain fields and blade speeds (Bech et al. 2018). The rain erosion testing of blade damage is an accelerated method: for a specific blade speed and rain rate (or drop size), the damage is observed by inspection of the blade. To calculate the rain erosion risk, the damage increment model therefore sums up the many events over time. The model is representative of a typical leading edge protection coating.
Assuming the same type of leading edge protection coating on the actual wind turbines, the expected damage is estimated using the wind speed, a wind turbine power curve (translating wind speed into rotor rotations per minute), the size of the rotor (or length of the blades), and the rain events as input.
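A minimal sketch of such a damage summation under the inputs just listed. The exponent, scale factor, and toy power curve below are placeholders, not the coefficients of the Bech et al. (2018) model, and all input series are synthetic.

```python
import numpy as np

def lifetime_years(rain_mm, u_wind, rotor_radius, rpm_curve,
                   dt_hours=0.5, m=8.0, c=1e-19):
    """Miner's-rule-style damage summation over 30-minute records: each
    record contributes damage proportional to the rain amount times the
    closing (tip) speed raised to a material exponent m. The exponent m
    and scale c are placeholder coating parameters."""
    tip_speed = 2.0 * np.pi * rotor_radius * rpm_curve(u_wind) / 60.0  # m/s
    damage = c * rain_mm * tip_speed**m
    record_years = rain_mm.size * dt_hours / (24 * 365)
    return record_years / damage.sum()   # years until cumulative damage = 1

# Illustrative inputs: one year of synthetic 30-minute rain and wind records.
rng = np.random.default_rng(2)
rain = rng.exponential(0.1, 17520)            # rain amount per record (mm)
wind = rng.weibull(2.0, 17520) * 10.0         # wind speed (m/s)
rpm_curve = lambda u: np.clip(u, 3.0, 12.0)   # toy power curve: rpm vs wind speed
print(f"estimated lifetime: {lifetime_years(rain, wind, 80.0, rpm_curve):.1f} years")
```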
Due to the lack of local rain observations at most wind farms, alternative rain data are used for the mapping of the rain erosion risk (Hasager et al. 2021). The advantage of GPM satellite data is their global coverage. Also, the data are homogeneous, standardized products for several years available both on land and offshore. See https://gpm.nasa.gov/data/directory (level 3 data, final run, 30-minutes).
The rain erosion risk analysis is done for a floating offshore wind farm in the Atlantic Ocean off Portugal, the WindFloat Atlantic wind farm. In addition, an analysis is done for a nearby land site in Portugal with available meteorological data on wind speed and rain intensity at 10-minute temporal resolution. Lifetime estimates are also produced using ERA5 (hourly data) as input for both areas.
In summary, the results compare well, with estimated lifetimes of around 3 to 6 years at the land site and shorter lifetimes at the offshore site.
We acknowledge funding support from the ESA project ARIA2 and the Innovation Fund Denmark project EROSION (grant 6154-00018B). GPM and IMERG data are from NASA. Meteorological data are from IPMA, the national meteorological, seismic, sea and atmospheric organization of Portugal. ERA5 data are from ECMWF, the European Centre for Medium-Range Weather Forecasts.
References:
Bak C, Forsting AM, Sørensen NN. (2020). The influence of leading edge roughness, rotor control and wind climate on the loss in energy production. Journal of Physics: Conference Series. 1618(5). 052050. https://doi.org/10.1088/1742-6596/1618/5/052050
Bech JI, Hasager CB, Bak C. (2018). Extending the life of wind turbine blade leading edges by reducing the tip speed during extreme precipitation events, Wind Energy Science, 3/2, pp. 729-748
Hasager CB, Vejen F, Skrzypinski WR, Tilg A-M. (2021). Rain Erosion Load and Its Effect on Leading-Edge Lifetime and Potential of Erosion-Safe Mode at Wind Turbines in the North Sea and Baltic Sea. Energies. 14(7). 1959. https://doi.org/10.3390/en14071959
Mishnaevsky L, Thomsen K. (2020). Costs of repair of wind turbine blades: Influence of technology aspects. Wind Energy. 23(12):2247-2255. https://doi.org/10.1002/we.2552
Monitoring of mining impact has become increasingly important as awareness of safety and environmental protection rises. For example, two catastrophic tailings dam collapses occurred in Brazil in 2015 and 2019; the tailings outflow caused a tragic loss of human life (205 dead and 122 missing combined) and destroyed countless properties. An appropriate monitoring scheme is required to legally activate, reactivate, and terminate mining operations.
Our project Integrated Mining Impact Monitoring (i2Mon), funded by the European Commission's Research Fund for Coal and Steel, aims to monitor mining-induced impact, in particular ground movement. The monitoring system combines terrestrial measurement and remote sensing: levelling, GPS, LiDAR scanning, UAV survey, and SAR interferometry. The aim is to launch an interactive GIS-based platform as an early warning and decision-making service for the mining industry.
Our presentation focuses on Work Package 2 – Space and Airborne Remote Monitoring. This package develops a SAR-based approach to monitor mining-induced ground movement over an extensive area at the millimetre level. We will first illustrate the monitoring scheme and the approaches for estimating ground movement by advanced SAR interferometry. The first test site is a deactivated open-pit mine in Cottbus, Germany, owned by Lausitz Energie Bergbau AG (LEAG). The whole area is being reconstructed into a post-mining lake, so monitoring the mining impact is particularly crucial for safety. The second test site is located in Poland, where underground mining operated by POLSKA GRUPA GÓRNICZA (PGG) began in June 2021. The in-situ ground movement must be monitored carefully, as part of the influenced area covers settlements. We have analysed the ground movement across the open-pit and underground mines by implementing advanced SAR interferometry. The crucial parameters include stepwise movement series, instantaneous velocities and accelerations, and a significance index. The results will be compared with local measurements such as GPS recordings collected at corner reflectors. All the data will finally be integrated into DMT's platform – SAFEGUARD.
Earth Observation based energy infrastructures to support GIS-like energy system models
S. Weyand 1, M. Schroedter-Homscheidt 1, Th. Krauß 2
1 DLR, Institute of Networked Energy Systems, 26122 Oldenburg, Germany – (Susanne.Weyand@dlr.de, marion.schroedter-homscheidt@dlr.de)
2 DLR, Remote Sensing Technology Institute, 82234 Wessling, Germany – Thomas.krauss@dlr.de
Keywords: photovoltaic (PV) detection, Earth observation (EO), energy system analysis, airborne, satellite, energy infrastructure
Due to increasing urbanization worldwide, the increasing energy demand of urban residents, and the lower prices for photovoltaic (PV) and solar thermal modules, the number of plants in operation has increased significantly in recent years. Authorities and electricity grid operators are supporting the installation of solar power plants in order to meet the Federal Government's targets of reducing CO2 emissions and primary energy consumption by 80% by 2050. For load modelling and the generation of demand and production statistics and planning, they need up-to-date roof usage and coverage information, as well as location data of the plants. There is also an increase in PV on the roofs of residential and commercial buildings. However, many of these systems are not precisely registered, and publicly available databases of solar modules are not up to date.
Monitoring strategies for solar plants are of interest for energy forecasting models in research, urban planning and industry. Currently, energy forecasting models are often based on community-generated OpenStreetMap data (e.g. Alhamwi et al. 2018). However, these data are partly erroneous, insufficiently detailed, or of very uneven regional accuracy. Therefore, we have started to collect energy-specific data with Earth observation techniques. Questions of energy system analysis include, for example, the modelling of load profiles in the electricity system.
Our focus is on energy load quantification in urban areas such as buildings and renewable energy sources detection, such as photovoltaics and solar thermal energy devices from flight and satellite data. We look into characteristic detection features of PV and solar thermal systems from airborne remote sensing data. At the institute, we have detailed knowledge of PV module construction from PV module research and contribute solar radiation data from the Copernicus Atmosphere Monitoring Service (CAMS) (Schroedter-Homscheidt et al., 2021). The extracted, characterized and geocoded PV and solar thermal systems are then used, for example, in self-developed energy modeling software.
Solar modules are built from a combination of different materials and minerals. Therefore, ultra-high-resolution airborne optical (Kurz, 2009) and hyperspectral (DLR, 2016) data were collected in 2018 and 2019 over the study regions of Oldenburg and Ulm. Both data sets were collected with the DLR OpAiRS system, mounted on a Dornier aircraft, and post-processed by colleagues at the DLR Remote Sensing Technology Institute. Atmospheric correction and georeferencing are done with the ATCOR-4 processor (Richter et al., 2012).
Deep learning methods, so-called convolutional neural networks (CNNs), are used for optical data analysis to identify energy infrastructure, such as photovoltaic modules, and to separate them from solar thermal and thin-film modules.
Available laboratory spectra from goniometer measurements of mono-, polycrystalline and thin-film photovoltaic modules (Gutwinski et al., 2018), as well as characteristic peak investigations, such as the normalized hydrocarbon index (nHI) (Clark et al., 2003 and 2009) of the ethylene vinyl acetate (EVA) layer of solar modules (Czirjak, 2017), were used to train a spectral-index algorithm for photovoltaic (PV) module detection. PV modules extracted by the trained index analysis show validation accuracies of up to 90.6%, but the approach is restricted to mono- and polycrystalline photovoltaic module detection (Ji et al., 2021).
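For illustration, a minimal sketch of a three-band hydrocarbon index as it is commonly formulated: the depth of the ~1730 nm C-H absorption below a linear continuum between neighbouring bands. The exact band positions and the normalization used for the nHI in the cited works may differ, and the reflectances below are invented.

```python
import numpy as np

# Approximate band positions around the C-H absorption feature (nm).
LAM_A, LAM_B, LAM_C = 1705.0, 1729.0, 1741.0

def hydrocarbon_index(r_a, r_b, r_c):
    """HI = (lB - lA) * (rC - rA) / (lC - lA) + rA - rB:
    the continuum value interpolated at lB minus the measured reflectance
    there; positive values indicate an absorption feature."""
    return (LAM_B - LAM_A) * (r_c - r_a) / (LAM_C - LAM_A) + r_a - r_b

# Illustrative reflectances for an EVA-encapsulated module pixel.
print(hydrocarbon_index(0.28, 0.22, 0.29))   # > 0 -> absorption present
```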
The definition of characteristic peaks for thin-film module detection is ongoing. Additionally, based on the optical flight data, building heights, the angle and orientation of roof surfaces, as well as an ultra-high-resolution digital surface model for the region of Oldenburg were generated. The impact on modelling results, in comparison with OpenStreetMap input data, is being investigated.
Results based on high-resolution flight data can be further applied to commercial and free satellite data sets such as WorldView, Sentinel-2 and EnMAP to enable large-scale, national or even European use. The balance between the loss of information due to the coarser spatial resolution of the satellite data and the simultaneous gain of information is quantified and evaluated with regard to its relevance for energy system models.
References:
1. Alaa Alhamwi and W. Medjroubi and T. Vogt and C. Agert (2018); Modelling urban energy requirements using open source data and models, Applied Energy, Vo. 231, p. 1100-1108, DOI: 10.1016/j.apenergy.2018.09.164
2. Clark, R. N., G. A. Swayze, K. E. Livo, R. F. Kokaly, S. J. Sutley, J. B.Dalton, R. R. McDougal, and C. A. Gent (2003b), Imaging spectroscopy: Earth and planetary remote sensing with the USGS Tetracorder and expert systems, J. Geophys. Res., 108(E12), 5131, doi:10.1029/2002JE001847
3. Clark R., Curchin J. M., Hoefen T. M., Swayze G. A., 2009: Reflectance spectroscopy of organic compounds: 1. Alkanes, Journal of Geophysical Research E: Planets, Volume 114 (3); doi:10.1029/2008JE003150, http://pubs.er.usgs.gov/publication/70034984
4. D Czirjak, “Detecting photovoltaic solar panels using hyperspectral imagery and estimating solar power production,” J. Appl. Remote Sens. 11(2), 026007 (2017), doi: 10.1117/1.JRS.11.026007.
5. DLR Remote Sensing Technology Institute (IMF). (2016). Airborne Imaging Spectrometer HySpex. Journal of large-scale research facilities, 2, A93. http://dx.doi.org/10.17815/jlsrf-2-151
6. Martin Gutwinski, Prof. Dr. Carsten Jürgens, Dr. Andreas Rienow (2018); Analysis of the spectral variability of urban surface materials based on a comparison of laboratory and hyperspectral image spectra; unpublished Master Thesis at Ruhr-University Bochum, Geography Department, Geomatics/Remote Sensing Group
7. Ji, C., Bachmann, M., Esch, T., Feilhauer, H., Heiden, U., Heldens, W., Hueni, A., Lakes, T., Metz-Marconcini, A., Schroedter-Homscheidt, M. and Weyand, S., 2021. Solar photovoltaic module detection using laboratory and airborne imaging spectroscopy data. Remote Sensing of Environment, 266, p.112692.
8. Kurz, Franz (2009) Accuracy assessment of the DLR 3K camera system. In: DGPF Tagungsband, 18, Seiten 1-7. Deutsche Gesellschaft für Photogrammetrie, Fernerkundung und Geoinformation. DGPF Jahrestagung 2009, 2009-03-24 2009-03-36, Jena. ISSN 0942-2870.
9. R. Richter and D. Schläpfer, “Atmospheric / Topographic Correction for Airborne Imagery”, (ATCOR-4 User Guide, Version 6.2 BETA, February 2012)
10. Schroedter-Homscheidt, M., Azam, F., Betcke, J., Hoyer-Klick, C., Lefèvre, M., Wald, L., Wey, L., Saboret, L., (2021): CAMS solar radiation service user guide, technical report, DLR-VE, CAMS72_2018SC2_D72.4.3.1_2021_UserGuide_v1.
Tailings dams are generally large-scale geotechnical structures, and ensuring their stability is of critical importance for safe and sustainable mine waste management. However, assessing dam stability remains a great challenge, and failures of significant scale keep occurring worldwide.
The following characteristics make tailings dams particularly vulnerable to failure: (a) embankments constructed of locally sourced fills (soils, coarse waste, overburden from mining operations and tailings); (b) multi-stage raising of the dam; (c) the lack of standardized regulations governing design criteria; and (d) high maintenance costs after mine closure. Upstream dams, where dam extensions are supported by the tailings themselves, are especially vulnerable to displacements which can trigger failure. The consequences of a dam failure can be severe, not only in the direct vicinity of the dams themselves, but also far downstream. Therefore, dam stability requires continuous monitoring and control during emplacement, construction, operation and after decommissioning.
Interferometric synthetic aperture radar (InSAR) has been applied to the study of many natural and anthropogenic phenomena. The near-global coverage of SAR data collected by the current generation of satellite constellations has made available an unprecedented amount of data over mining sites, tailings storage facilities, and downstream waterways. Specifically, the European Union's Copernicus Programme maintains a network of satellites, including the Sentinel-1 constellation, which has provided open-access radar data with medium spatial resolution and short repeat-pass intervals since 2014.
We present the applicability of InSAR analyses for monitoring displacements on and around tailings dams for several selected case studies, covering both intact dams with only expected displacements and recently collapsed dams. For the latter, we further investigate the potential existence of precursors and the applicability of the inverse velocity approach to predicting the date of failure. For the Brumadinho dam failure in 2019, for example, time series reaching back to 2015 were analyzed, comparing a number of acceleration periods with the one preceding the failure.
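A minimal sketch of the inverse velocity idea: as failure approaches, the displacement rate v accelerates such that 1/v tends roughly linearly towards zero, and extrapolating a linear fit of 1/v against time to its zero crossing gives a predicted failure date. All numbers below are synthetic.

```python
import numpy as np

# Synthetic accelerating displacement rates sampled every 6 days
# (roughly a Sentinel-1 repeat interval); true failure at t = 65 days.
t = np.arange(0, 60, 6.0)            # days
v = 2.0 / (65.0 - t)                 # displacement rate (mm/day)

# Linear fit of 1/v against time; the zero crossing predicts the failure date.
inv_v = 1.0 / v
a, b = np.polyfit(t, inv_v, 1)       # inv_v ~ a*t + b
t_fail = -b / a
print(f"predicted failure at t = {t_fail:.1f} days")
```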
Urbanization and climate change are major challenges for cities today and will be even greater ones in the future, considering ongoing urbanization rates and temperature rise. In order to prepare cities for future realities, good practice in urban planning demands scenario development based on an appropriate database. With this work, we present a methodology to generate and provide planners with data on residential electricity consumption at the single-building level, based on building types, together with photovoltaic (PV) energy balances, in order to develop strategies to decarbonize the energy mix. Belmopan, the city studied here, and Belize as a whole need to import a large share of their energy demand from neighboring countries. In this context, decentralized PV solutions can contribute to reducing energy dependency on other countries.
Using information from unmanned aerial vehicle (UAV) orthomosaics, eight residential building types, comprising four single-family and four multifamily types, are classified with a random forest classifier and building-specific parameters such as building footprint area, building height, roof complexity, and building footprint shape indices. Through a household survey, statistics on residential electricity consumption in relation to building type were identified. Based on DSM information from UAV imagery processed with a structure-from-motion approach and on solar radiation data from the National Solar Radiation Database (NSRDB), the PV energy potential was determined for each building. By differencing the PV energy potential and the building-type-related energy consumption, PV energy balances are calculated at the single-building level for the study areas.
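A minimal sketch of this balancing step on a hypothetical per-building table; field names and numbers are invented for illustration.

```python
import pandas as pd

# Hypothetical per-building table: annual PV potential of the best-suited
# field of roof (kWh/a) and survey-based consumption of the building type (kWh/a).
buildings = pd.DataFrame({
    "building_id":  [101, 102, 103],
    "type":         ["single_family_1", "multi_family_2", "single_family_3"],
    "pv_potential": [5200.0, 10400.0, 3100.0],
    "consumption":  [3800.0, 14200.0, 2900.0],
})

# PV energy balance and coverage rate per building.
buildings["balance_kwh"] = buildings["pv_potential"] - buildings["consumption"]
buildings["coverage_pct"] = 100 * buildings["pv_potential"] / buildings["consumption"]
print(buildings[["building_id", "balance_kwh", "coverage_pct"]])
```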
To demonstrate that different framework scenarios can be applied within this methodology, we compared the effects of installing two PV panels on the best-suited field of roof (FOR), as a realistic scenario, with fully equipping the best-suited FOR with PV panels, as an ideal scenario. In the realistic scenario, an average of 29.5% of the energy demand in residential buildings can be covered by PV; the ideal scenario resulted in an average electricity coverage rate of 148%. In the ideal scenario, building types with large and unfragmented fields of roof naturally generate the highest PV energy surpluses, whereas in the realistic scenario, energy consumption determines the PV energy coverage rate. Therefore, socioeconomically weak groups can profit most from this scenario.
The presented methodology demonstrates the ability to test different scenarios and to provide planning-ready data for urban infrastructure planning, and therefore contributes to closing the gap between the data demands of urban planning and the data provided by remote sensing approaches. Furthermore, the results underline the potential of PV power in Belmopan to significantly decarbonize the energy mix.
To achieve by 2050 the decarbonization goals set by the European Climate Foundation, it is crucial to meet the demand for Critical Raw Materials (CRM) and other commodities necessary for the production and storage of “green” energy (Blengini et al., 2020). Materials such as high purity quartz, rare earth elements (REE), lithium (Li), beryllium (Be), cesium (Cs), niobium (Nb) and tantalum (Ta) can be commonly found in pegmatite rocks. Therefore, the aim of the GREENPEG project is to develop and test an innovative multi-method exploration toolset to apply to both niobium-yttrium-fluorine (NYF) and lithium-cesium-tantalum (LCT) chemical pegmatite types. The final goal is to find outcropping and buried pegmatite deposits within Europe. The exploration toolset is being developed in three European demonstration sites: (i) Tysfjord (Norway); (ii) Leinster (Ireland); and (iii) Wolfsberg (Austria). Distinct exploration methods are being developed at different scales, namely province-, district- and prospect-scales.
This work focuses on the province-scale methodology through the exploitation of available Sentinel-1 and Sentinel-2 data from the Copernicus programme. The objectives of this work were to: (i) use Sentinel-1 synthetic aperture radar (SAR) images to identify province-scale tectonic structures, such as faults, that may have controlled pegmatite melt emplacement; and (ii) use Sentinel-2 images to directly identify the spectral features of the pegmatite bodies.
First, a satellite image database was built with all the image tiles necessary to cover all demonstration sites at province scale. Several criteria were defined for choosing the images, namely: (i) the cloud cover (less than 10%); (ii) the vegetation coverage (defined through Normalized Difference Vegetation Index - NDVI - computation); (iii) the snow coverage (defined through Normalized Difference Snow Index - NDSI - computation); and (iv) the season of the year at the time of image acquisition. These criteria were employed to ensure that all acquired images present the lowest possible cloud, vegetation and snow coverage. Sentinel-1 images were selected due to: (i) acquisition in the C-band (3.75–7.5 cm); and (ii) easy integration with Sentinel-2 products. To choose the best Sentinel-1 images, several criteria were taken into account: (i) the spatial coverage of the study areas; (ii) the adequacy of the product specifications considering the study objectives; and (iii) acquisition dates close to those of the corresponding, already pre-processed Sentinel-2 images. Among the several acquisition modes, the Interferometric Wide (IW) swath with dual polarization (HH+HV or VV+VH, where H: Horizontal and V: Vertical) was selected due to its adequacy for land applications.
Pre-processing of all Sentinel-1 SAR images was performed in the Sentinel-1 Toolbox (S1TBX), while Sentinel-2 pre-processing was done using the Semi-Automatic Classification Plugin (SCP) in the QGIS software (Congedo, 2016). After geographic clipping of the Sentinel-1 images to province scale, several pre-processing steps were applied, namely: (i) orbit correction; (ii) thermal noise removal; (iii) radiometric calibration; (iv) speckle filtering; (v) terrain correction; and (vi) final geographic trimming. In the case of the optical Sentinel-2 images, depending on the size of the study area, it may be necessary to produce mosaic images to cover the entire study area. Image pre-processing included masking and mosaic creation, as mentioned before, and the atmospheric correction of the images (using the Dark Object Subtraction (DOS) technique) to obtain surface reflectance values.
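As a simplified illustration of the DOS idea (SCP's DOS1 implementation works on the full radiance-to-reflectance conversion, so this is a conceptual reduction with a synthetic band): the darkest pixels in a band are assumed to owe their signal to atmospheric path radiance, and that value is subtracted from the whole band.

```python
import numpy as np

def dos_correction(band, q=0.01):
    """Simplified Dark Object Subtraction: assume the darkest pixels should
    have near-zero surface reflectance, attribute their signal to atmospheric
    path radiance, and subtract that value from the whole band."""
    dark = np.quantile(band, q)
    return np.clip(band - dark, 0.0, None)

# Synthetic TOA-reflectance band standing in for a Sentinel-2 band.
band = np.random.default_rng(3).uniform(0.02, 0.4, (512, 512))
surface = dos_correction(band)
print(surface.min(), surface.max())
```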
Next, the lineaments were automatically extracted from both the VV- and VH-polarised Sentinel-1 images using the LINE algorithm of PCI Geomatica 2018 in a three-stage process: (i) edge detection; (ii) thresholding; and (iii) curve extraction. In each step, several parameters were optimized through a trial-and-error method. After the automatic extraction of the lineaments, a visual inspection was conducted in the QGIS software to manually remove all lineaments related to the coastline and human infrastructure. For the Sentinel-2 data, several traditional image processing techniques were employed, taking into account the algorithms proposed by Cardoso-Fernandes et al. (2019b): (i) RGB combinations; (ii) band ratios; and (iii) Principal Component Analysis (PCA).
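The same three-stage logic can be sketched with open-source tools (an analogue of, not a substitute for, PCI Geomatica's LINE algorithm): Canny edge detection covers stages (i) and (ii) through its hysteresis thresholds, and a probabilistic Hough transform performs the curve extraction of stage (iii). The image below is synthetic.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import probabilistic_hough_line

# Synthetic stand-in for a despeckled, terrain-corrected Sentinel-1 backscatter
# image; in practice this would be the VV- or VH-polarised band.
rng = np.random.default_rng(4)
img = rng.normal(0.0, 1.0, (256, 256))
img[100:105, :] += 4.0            # embed one linear feature

# Stage (i)/(ii): edge detection with hysteresis thresholding;
# stage (iii): curve extraction via a probabilistic Hough transform.
edges = canny(img, sigma=2.0)
lines = probabilistic_hough_line(edges, threshold=10, line_length=50, line_gap=5)
print(f"{len(lines)} candidate lineaments")
```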
Once all unwanted lineaments had been removed in the visual inspection step, rose diagrams were built from the mean directions of the extracted lineaments. In Tysfjord, the VV polarization image allowed the identification of more lineaments with a NE-SW trend, while the VH polarization enhanced structures along the ENE-WSW direction. However, most of the extracted lineaments are related to mountain ridges or regional structures (especially where the Caledonian nappes outcrop). This, together with the lineaments along the land-water transition previously removed from the original dataset, indicates that topography had a large effect on lineament extraction. Several difficulties in the successful application of the traditional techniques to the Sentinel-2 images were identified, such as: (i) the snow/vegetation coverage; (ii) the outcrop size versus the spatial resolution of the images; and (iii) the spectral confusion with other within-scene elements (e.g., roads). These are in line with the constraints identified in similar applications in the Iberian Peninsula (Cardoso-Fernandes et al., 2019a). Nevertheless, these methods also allowed possible areas of interest for pegmatite exploration to be identified. Moreover, the methods previously employed to detect LCT pegmatites also allowed the detection of NYF pegmatites in Tysfjord, although in some cases (RGB combinations) the traditional methods presented only slight color differences.
The results obtained corroborate the potential of Sentinel-1 and Sentinel-2 data for pegmatite exploration at province scale. Nonetheless, the Copernicus data need to be exploited further in the future. For example, additional pre-processing of the Sentinel-1 data will be performed to decrease the topographic effect. Also, a spectral library of pegmatite samples from the demonstration cases will be constructed, and the reference spectra will be used to further refine the employed image processing methods. In the end, the results will be integrated with supervised classification approaches using machine learning algorithms (Teodoro et al., 2021) and with existing province-scale radiometry, magnetometry, and electromagnetic data to produce exploration target maps for the three demonstration sites.
References
Blengini, G. A., Latunussa, C. E. L., Eynard, U., Torres de Matos C., Wittmer, D., Georgitzikis, K., Pavel, C., Carrara, S., Mancini, L., Unguru, M., Blagoeva, D., Mathieux, F. & Pennington D. (2020). Study on the EU's list of Critical Raw Materials Final Report. European Commission. https://doi.org/10.2873/11619.
Cardoso-Fernandes, J., Lima, A., Roda-Robles, E., & Teodoro, A. C. (2019a). Constraints and potentials of remote sensing data/techniques applied to lithium (Li)-pegmatites. The Canadian Mineralogist, 57(5), 723-725. doi: 10.3749/canmin.AB00004.
Cardoso-Fernandes, J., Teodoro, A. C., & Lima, A. (2019b). Remote sensing data in lithium (Li) exploration: A new approach for the detection of Li-bearing pegmatites. International Journal of Applied Earth Observation and Geoinformation, 76, 10-25. doi: https://doi.org/10.1016/j.jag.2018.11.001.
Congedo, L. (2016). Semi-Automatic Classification Plugin Documentation. DOI: http://dx.doi.org/10.13140/RG.2.2.29474.02242/1.
Teodoro, A. C., Santos, D., Cardoso-Fernandes, J., Lima, A., & Brönner, M. (2021, 12 September 2021). Identification of pegmatite bodies, at a province scale, using machine learning algorithms: preliminary results. Paper presented at the Proc. SPIE 11863, Earth Resources and Environmental Remote Sensing/GIS Applications XII, SPIE Remote Sensing, doi: https://doi.org/10.1117/12.2599600.
Acknowledgements
This study is funded by European Commission’s Horizon 2020 innovation programme under grant agreement No 869274, project GREENPEG New Exploration Tools for European Pegmatite Green-Tech Resources. The Portuguese partners also acknowledge the support provided by Portuguese National Funds through the FCT – Fundação para a Ciência e a Tecnologia, I.P. (Portugal) projects UIDB/04683/2020 and UIDP/04683/2020 —ICT (Institute of Earth Sciences).
The Copernicus Atmosphere Monitoring Service (CAMS) offers the CAMS Radiation Service (CRS), providing information on surface solar irradiance (SSI). The service meets the needs of European and national policy development and the requirements of (partly commercial) downstream services in the solar energy sector, e.g. for planning, monitoring, efficiency improvements, and the integration of renewable energies into the energy supply grids.
At present, the service is derived from Meteosat Second Generation (MSG) data. CRS provides clear-sky and all-sky time series combining satellite data products with numerical model output from CAMS on the optical state of the atmosphere. The clear-sky and all-sky products are available from 2004 up to the previous day through the CAMS Radiation Service portal and the Atmospheric Data Store (ADS) in the Copernicus portal, making use of the SoDa portal capabilities.
The service quality is ensured through regular monitoring and evaluation of input parameters, quarterly benchmarking against ground measurements and automatic consistency checks.
Variability of surface solar irradiance on the 1-minute scale is of particular interest for solar energy applications, and such a variability-based analysis can help assess the impact of recent improvements in the derivation of all-sky irradiance under different cloud conditions. The variability classes can be defined based on ground-based as well as satellite-based measurements. This study will show an evaluation of the CAMS CRS based on the eight variability classes derived from ground observations of direct normal irradiance (DNI) (Schroedter-Homscheidt et al., 2018).
The CRS service evolution includes its extension to other parts of the globe. Highlights of the framework development towards operational implementation, with a focus on HIMAWARI-8 operated by the Japan Meteorological Agency (JMA), will be shown.
References:
CAMS Radiation Service (clear sky): http://solar.atmosphere.copernicus.eu/cams-mcclear
CAMS Radiation Service (all-sky): http://solar.atmosphere.copernicus.eu/cams-radiation-service
Copernicus portal: http://atmosphere.copernicus.eu/
Schroedter-Homscheidt, M., S. Jung, M. Kosmale, 2018: Classifying ground-measured 1 minute temporal variability within hourly intervals for direct normal irradiances. – Meteorol. Z. 27, 2, 160–179. DOI:10.1127/metz/2018/0875
The global distribution of cropping intensity (CI) is critical to our understanding of the intensity of arable land use and management practices on the planet. The widespread availability and open sharing of satellite remote sensing data have revolutionized our ability to monitor large-area cropping intensity in an efficient and rapid manner. High-accuracy global cropping intensity extraction remains a huge challenge, however, due to significant differences in the fragmentation of cropland between regions, diverse utilization patterns, and the influence of clouds and rain. Existing cropping intensity products have low resolution and high uncertainty, which makes it difficult to accurately depict the real situation in highly heterogeneous and fragmented areas.
This study uses massive multi-source remote sensing data for global cropping intensity mapping. All available top-of-atmosphere (TOA) reflectance images from Landsat-7 ETM+, Landsat-8 OLI, Sentinel-2 MSI and MODIS during 2016–2018 were used for cropping intensity mapping via the Google Earth Engine (GEE) platform. To overcome the multi-sensor mismatch issue, an inter-calibration approach was adopted, which converted Sentinel-2 MSI and Landsat-8 OLI TOA reflectance data to the Landsat-7 ETM+ standard. The calibrated images were then used to composite 16-day TOA reflectance time series based on a maximum-value composition method. To ensure data continuity, this study used the MODIS NDVI product to fill temporal gaps in the following steps. First, the 250-m MODIS NDVI product was resized to 30 m using the bicubic algorithm. Then, the Whittaker algorithm was applied to the gap-filled NDVI time series for smoothing. We included two phenology metrics, mid-greenup and mid-greendown, derived as the day of year (DOY) at the transition points where the smoothed NDVI time series passes 50% of the NDVI amplitude in the greenup and greendown periods. An interval starting at mid-greenup and ending at mid-greendown is defined as a growing phenophase, and an interval from mid-greendown to mid-greenup as a non-growing phenophase (Liu et al., 2020; Zhang et al., 2021). Using this algorithm, with Google Earth Engine as the data processing platform and a 5° grid as the data processing unit, cropping intensity was extracted grid by grid, producing the world's first 30-m resolution cropping intensity data product for cultivated land (GCI30).
The validation results show that the overall accuracy of this data product is 92.9%, which is not only better than existing cropping intensity data products but also significantly improves the characterization of the spatial details of cropping intensity. GCI30 indicates that single cropping is the primary agricultural system on Earth, accounting for 81.57% of the world's cropland extent. Multiple-cropping systems, on the other hand, are commonly observed in South America and Asia. We found large variations across countries and agroecological zones, reflecting the joint control of natural and anthropogenic drivers on cropping practices.
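A minimal sketch of the phenology-metric step as described, locating mid-greenup and mid-greendown as the 50%-amplitude crossings of a smoothed NDVI trajectory; the seasonal curve below is synthetic and stands in for a Whittaker-smoothed, gap-filled series.

```python
import numpy as np

def phenophase_transitions(ndvi, doy):
    """Mid-greenup / mid-greendown as the days of year at which a smoothed
    NDVI trajectory crosses 50% of its amplitude on the rising and falling
    limbs; the interval between them is the growing phenophase."""
    half = ndvi.min() + 0.5 * (ndvi.max() - ndvi.min())
    peak = np.argmax(ndvi)
    # Interpolate the crossing DOY on each limb (reversed so NDVI increases).
    up = np.interp(half, ndvi[:peak + 1], doy[:peak + 1])
    down = np.interp(half, ndvi[peak:][::-1], doy[peak:][::-1])
    return up, down

doy = np.arange(1, 366, 16.0)                        # 16-day composites
ndvi = 0.2 + 0.6 * np.exp(-((doy - 200) / 60.0)**2)  # synthetic seasonal curve
print(phenophase_transitions(ndvi, doy))
```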
The GCI30 dataset is freely available on the Harvard Data Commons (https://doi.org/10.7910/DVN/86M4PO), and the data product will provide scientific data to support the assessment of the global potential of cropland replanting, food yield increase, food security prediction and early warning, and the achievement of UN sustainable development goals such as zero hunger.
In the last decade, much attention has been devoted to crop mapping [1] because of the need to better monitor and manage food production [2]. This is particularly true for developing countries, where proper knowledge of the status of agricultural areas is needed to ensure that the agricultural infrastructure develops in accordance with population and economic growth. In this context, this paper presents the activities planned in the framework of the project “Developing a spatially-explicit agricultural database in support of agricultural planning”. The project, supported by the Ministry of Agriculture, Forestry and Fisheries (MAFF) of Japan, will be implemented by the Statistics Division of the Food and Agriculture Organization (FAO) of the United Nations in close collaboration with the Ministries of Agriculture, National Statistical Offices (NSOs), academia and national/regional geoscience institutions in selected countries of the Asian region. The project aims to increase the availability and quality of farmland information to support the definition of effective farming incentive schemes and the formulation of smart agriculture/micro-finance programs, as well as to improve reporting on Sustainable Development Goal (SDG) indicator 2.4.1 for sustainable agriculture. In this framework, the Faculty of Geo-Information Science and Earth Observation (ITC) of the University of Twente will be the main implementing partner of the FAO Statistics Division for the development of a geospatial database of rice farms.
In greater detail, ITC will develop a workflow for mapping rice field boundaries in Cambodia and Viet Nam, where rice paddies occupy a large portion of the agricultural area (e.g., almost 80 percent of the harvested area in Cambodia). Although much effort has been devoted in the literature to developing crop delineation methods [3], [4], these particular study areas require the definition of a tailored workflow to face two main challenges: (1) the cloud coverage, which heavily affects most of the optical satellite data acquired over the year, and (2) the fragmented agricultural areas characterized by very small fields (i.e., less than 1 ha). In these conditions, boundary delineation methods defined for High Resolution (HR) data such as Sentinel-2 might prove more difficult to apply than in areas with large agricultural fields (see Figure 1) [5]. Within the project, we will investigate the use of Very High Resolution (VHR) multispectral imagery such as Planet and WorldView-3 data, which guarantee very high geometrical detail (i.e., from 3 m to 30 cm spatial resolution). However, the main drawback of these data is their cost, which hampers their operational use. To provide a workflow which can be used to constantly update the crop boundary database, one of the goals of the project is to exploit the completely full, open and free Sentinel-2 satellite data. While the VHR optical images will be used to obtain a clear picture of the rice paddy boundaries within the agricultural year [6], the Sentinel-2 sensor guarantees a frequent coverage free of charge that can be employed to constantly update the rice paddy map. Both multitemporal and single-image approaches, based on the integration of Sentinel-2 and the VHR optical images [7], will be explored. Finally, we plan to investigate the possibility of leveraging VHR and HR Synthetic Aperture Radar (SAR) images to mitigate the severe cloud coverage problem which may hamper the use of optical images in some seasons.
The expected outputs of the project consist of: (1) the development of a spatial layer of rice field boundaries, in the form of geospatial polygons, and (2) the assessment of the suitability of these layers to support farm-level data collection in the form of spatial, qualitative and quantitative attribute information for each farmland parcel/polygon, including the farm-level data required to report on SDG indicator 2.4.1. Field campaign activities will be planned to properly validate and refine the crop delineation results. Technical guidelines will also be developed as part of the project activities to address issues of scalability, maintenance and updating of the spatial farm layers.
[1] G. Weikmann, C. Paris and L. Bruzzone, "TimeSen2Crop: A Million Labeled Samples Dataset of Sentinel 2 Image Time Series for Crop-Type Classification," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 4699-4708, 2021.
[2] B. Mishra, L. Busetto, M. Boschetti, A. Laborte, and A. Nelson, "RICA: A rice crop calendar for Asia based on MODIS multi year data," International Journal of Applied Earth Observation and Geoinformation, vol. 103, 102471, 2021.
[3] Y. T. Solano-Correa, F. Bovolo, L. Bruzzone and D. Fernández-Prieto, "A Method for the Analysis of Small Crop Fields in Sentinel-2 Dense Time Series," in IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 3, pp. 2150-2164, March 2020, doi: 10.1109/TGRS.2019.2953652.
[4] K.M. Masoud, C. Persello, and V.A. Tolpekin, "Delineation of Agricultural Field Boundaries from Sentinel-2 Images Using a Novel Super-Resolution Contour Detector Based on Fully Convolutional Networks," Remote Sensing, vol. 12, 59, 2020. https://doi.org/10.3390/rs12010059
[5] M. Wu, W. Huang, Z. Niu, Y. Wang, C. Wang, W. Li, P. Hao, and B. Yu, "Fine crop mapping by combining high spectral and high spatial resolution remote sensing data in complex heterogeneous areas," Computers and Electronics in Agriculture, vol. 139, pp. 1-9, 2017.
[6] C. Persello, V.A. Tolpekin, J.R. Bergado, and R.A. de By, "Delineation of agricultural fields in smallholder farms from satellite images using fully convolutional networks and combinatorial grouping," Remote Sensing of Environment, vol. 231, 111253, 2019.
[7] P. Rao, W. Zhou, N. Bhattarai, A.K. Srivastava, B. Singh, S. Poonia, D.B. Lobell, and M. Jain, "Using Sentinel-1, Sentinel-2, and Planet Imagery to Map Crop Type of Smallholder Farms," Remote Sensing, vol. 13, 1870, 2021. https://doi.org/10.3390/rs13101870
This contribution addresses the identification of phenological phases of wheat, sugar beet, and canola by a complementary set of interferometric (InSAR) and polarimetric (PolSAR) time series derived from Sentinel-1 (S-1) Synthetic Aperture Radar (SAR). Breakpoints and extreme values are calculated during the growing season at DEMMIN (Germany), one of the test sites in the international Joint Experiment on Crop Assessment and Monitoring (JECAM). The in situ data used to validate such frameworks are gathered during various campaigns at the DEMMIN site. In the first results, for the year 2017, a distinction of vegetative and reproductive stages for wheat and canola could be achieved by combining breakpoints and extrema of S-1 features. Certain phenological stages, measured in situ using the BBCH scale, such as leaf development and rosette growth of sugar beet or stem elongation and ripening of wheat, were detectable by a combination of InSAR coherence, polarimetric Alpha and Entropy, and backscatter (VV/VH). The general tracking accuracy, calculated as the temporal difference between in situ observations and breakpoints or extrema, varied from zero to five days. The largest number of breakpoints and extrema was produced by backscatter time series. Nevertheless, certain micro-stages, such as leaf development at BBCH 10 of sugar beet or flowering at BBCH 69 of wheat, could only be tracked by InSAR coherence and Alpha. In addition, the transition from early to late leaf development as well as from early to late rosette development of sugar beet was successfully identified by a combination of InSAR coherence and Kennaugh matrix elements. Therefore, it is assumed that a complementary database of PolSAR and InSAR features increases the number of detectable phenological stages of wheat, canola and sugar beet. With regard to ongoing and future research, the challenges of integrating such a tracking framework into an Open Data Cube (ODC) environment to improve its scalability and transferability are discussed as well. Next steps will address the transferability and generalization of the observations made during this study, i.e. in the context of a common JECAM experiment that includes crop phenology.
Keywords: PolSAR; InSAR; Kennaugh matrix; time series; Sentinel-1; crop phenology; DEMMIN; ODC
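As a minimal illustration of the breakpoint-and-extrema tracking described above, the sketch below processes a single S-1 feature time series. It assumes the Python packages ruptures and SciPy; the study's actual change-point method and parameters are not specified in the abstract, so the PELT/RBF configuration and the penalty value are illustrative only.

    import numpy as np
    import ruptures as rpt
    from scipy.signal import find_peaks

    def breakpoints_and_extrema(series, dates, penalty=3.0):
        # series: 1-D array of one S-1 feature (e.g. VH backscatter or coherence)
        # sampled at the acquisition dates; dates: matching list of datetimes.
        algo = rpt.Pelt(model="rbf").fit(series.reshape(-1, 1))
        bkps = algo.predict(pen=penalty)[:-1]   # last index is the series end
        maxima, _ = find_peaks(series)          # candidate phenological markers
        minima, _ = find_peaks(-series)
        return ([dates[i] for i in bkps],
                [dates[i] for i in maxima],
                [dates[i] for i in minima])

Detected dates from several features (backscatter, coherence, Alpha, Entropy) would then be compared against the in situ BBCH observations to compute the tracking accuracy in days.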
Biophysical and biochemical traits of plants are linked to the photosynthesis and nutrition processes during the growth cycle. There has been significant exploration and improved understanding of such traits in recent years, both from physical measurement and remotely-sensed estimates. Whilst these analyses have explored both magnitudes and correlations, they have not directly explored the detailed temporal dynamics and co-dynamics of traits. The era of big data in Earth Observation (EO), and especially the Copernicus Sentinel 2 (S2) mission, changes that.
In this study, we estimate and characterise the co-dynamics of the full set of crop canopy traits using the PROSAIL model over multiple years of S2 reflectance data over the US, UK and China. 100,000 random samples are taken from each S2 tile over those regions for different crop types, taken from publicly available crop classification maps. Each sample S2 spectrum is mapped to each of the PROSAIL canopy parameters using a machine learning approach. As expected, many of the parameters estimated in this way are very noisy, as they are only weakly constrained by the information in a single-date S2 observation. We therefore seek an empirical model of the dynamics of the suite of PROSAIL crop parameters to better estimate these parameters. From this large number of samples of each biophysical parameter, we normalise for phenology variations, then calculate and characterise what we propose as ‘archetype’ temporal patterns of crop traits for a set of crops. We validate these ‘archetypes’ with publicly available datasets. From the archetypes and phenology model, we develop an empirical model for the dynamics of crop biophysical parameters. Such a model allows us to simulate a full time series of hyperspectral reflectance for a given crop, using the PROSAIL model and a localised soil constraint. This is the first time that we can simulate crop hyperspectral reflectance with temporal prior information on the biophysical parameters and phenology. Such a model expresses the full information content of the optical EO signal as interpreted by the PROSAIL model. It should allow us to distinguish crop types based on biophysical parameter dynamics and has many other potential uses. It should also allow us to assimilate reflectance measurements from any optical sensor.
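The per-date inversion step can be sketched as follows: PROSAIL parameters are drawn from broad priors, a forward model simulates the corresponding S2 reflectance, and a regressor learns the inverse mapping. The abstract does not name the machine learning model, so the random forest, the parameter subset (LAI, Cab, Cw) and the toy forward model below are illustrative placeholders only.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def simulate_s2_reflectance(p):
        # Toy placeholder: a real implementation would run PROSAIL and convolve
        # the output spectrum with the S2 band response functions.
        lai, cab, cw = p
        bands = np.linspace(0.44, 2.2, 10)          # 10 nominal band centres (um)
        return (1 - np.exp(-0.5 * lai)) * np.exp(-cw * bands) * (cab / 80.0)

    rng = np.random.default_rng(0)
    params = rng.uniform(low=[0.0, 10.0, 0.002],    # [LAI, Cab, Cw] broad priors
                         high=[8.0, 80.0, 0.05], size=(50_000, 3))
    X = np.array([simulate_s2_reflectance(p) for p in params])

    model = RandomForestRegressor(n_estimators=200, n_jobs=-1)
    model.fit(X, params)                 # learn spectra -> parameters
    # traits = model.predict(s2_pixels)  # s2_pixels: observed (n_pixels, n_bands)

The noisy per-date estimates produced this way are what the archetype dynamics model subsequently regularises.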
We demonstrate this empirical model within an ensemble-based variational data assimilation system to provide per-pixel robust estimation of the full suite of PROSAIL plant traits from Earth Observation. Using broad prior information on the phenology and magnitudes of biophysical parameters, we simulate ensembles of reflectance time series with our proposed model. Time series of satellite observations (S2) are then matched to these to provide a posterior estimation of the biophysical parameters, with uncertainty. We test the retrievals over different crops and validate against ground measurements in the US, Germany and China. The validation shows close agreement between the retrievals and the independent ground measurements. We also examine issues of temporal and spectral sampling to show how such factors impact the uncertainty in derived biophysical parameters.
Wheat and maize make up two thirds of the world’s food energy uptake. In China’s North China Plain (NCP), the dominant winter wheat-summer maize double-cropping system produces half of the country’s wheat and about a third of its maize. Monitoring this agricultural system, made up of many small farms over a large area, is now possible at field scales by making use of the frequent acquisitions from sensors with adequate spatial characteristics such as Sentinel-2 (S2) or Landsat. Mechanistic crop growth models such as WOFOST provide predictions of crop development, functioning and yield as a response to meteorological forcing for a given set of model parameters. Here, we develop a variational data assimilation system that constrains that parameter set at the pixel level with S2-derived biophysical parameters, given a wide a priori parameter distribution, and apply it to yield prediction.
The a priori constraints are developed for the NCP using initial parameter distributions from the global literature, calibrated to local conditions by pairing official county-level yield statistics with final storage-organ mass estimates obtained from Monte Carlo parameter samples matched to sample S2-derived LAI trajectories for each county of interest. In this same process, we estimate a “harvest index” that maps from model-simulated storage-organ mass to grain yield (as reported in official statistics), and a clumping index.
The pixel-level variational data assimilation then proceeds to localise the parameter distribution with 20 m resolution S2-derived LAI trajectories, which also produces localised yield estimates. Given the relatively slow variation of meteorological constraints, an ensemble distribution is efficiently calculated over a coarse-resolution grid (tens of km) to define plausible sample trajectories over the a priori model parameter distribution that pertain to many Sentinel-2 pixels.
For each pixel, the LAI ensemble is compared with S2-derived LAI observations, and a bi-squared metric is used to develop the localised weighting on the parameter set and thence the associated ensemble grain yields (and their uncertainty). We show the application of this approach to wheat and maize monitoring at multiple scales across the NCP.
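A minimal sketch of this weighting step is given below, assuming a Tukey bisquare kernel on the LAI misfit as one plausible reading of the abstract's "bi-squared metric"; the scaling constant and normalisation are illustrative, not the authors' implementation.

    import numpy as np

    def bisquare_weights(lai_ensemble, lai_obs, c=4.685):
        # lai_ensemble: (n_members, n_dates) simulated LAI trajectories
        # lai_obs: (n_dates,) S2-derived LAI for one pixel
        rmse = np.sqrt(np.nanmean((lai_ensemble - lai_obs) ** 2, axis=1))
        u = rmse / (c * np.nanmedian(rmse))          # scaled residuals
        w = np.where(u < 1.0, (1.0 - u ** 2) ** 2, 0.0)
        return w / w.sum()                           # normalised member weights

    # Posterior yield and its spread then follow from the weighted ensemble, e.g.
    # yield_hat = np.sum(weights * member_yields)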
The remote sensing community in general, and the land use/land cover classification community in particular, is currently at a stage where potential applications seem endless, high-resolution satellite imagery at national levels is available for free, and sophisticated classification algorithms perform better with each new generation. This is especially true for the field of agricultural crop classification, where many studies published in the past five years have implemented one of many different iterations of deep-learning algorithms. Compared to the rapid development of new algorithms, the model input has remained rather stable, mostly consisting of global multispectral sensors like Landsat, Synthetic Aperture Radar (SAR) satellites like Sentinel-1 and very high-resolution optical sensors like Gaofen.
Our study contributes to this trend by providing an extensible framework which trains classification models on small spatial and temporal subsets and validates these models on the same subsets, as well as on subsets that are spatially independent, temporally independent, or both. Given these four validation scopes, a very robust and differentiated perspective on the generalization of a given model is possible.
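The four scopes can be sketched as simple filters over labelled samples, assuming each sample carries year and region attributes; the names and structure below are illustrative, not the framework's actual API.

    def validation_scopes(samples, train_year, train_region):
        # Partition samples into the four validation scopes of the framework.
        same = [s for s in samples
                if s.year == train_year and s.region == train_region]
        temporal = [s for s in samples
                    if s.year != train_year and s.region == train_region]
        spatial = [s for s in samples
                   if s.year == train_year and s.region != train_region]
        both = [s for s in samples
                if s.year != train_year and s.region != train_region]
        return {"same": same, "temporally independent": temporal,
                "spatially independent": spatial, "both": both}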
We apply our framework to a lower mountainous region in western Germany, where the landscape is mostly defined by forests, pastures and maize fields. We classified maize in a binary classification for four different years, from 2016 to 2019. As input data we used monthly averages of Sentinel-1 backscatter coefficients. In our study, we showed that classical pixel-based machine learning classifiers such as random forests performed best within the training scope. Modern deep-learning algorithms such as UNet, however, performed significantly better on datasets from a different year or a different region. We conclude that convolution-based algorithms generalize much more consistently and show no signs of overfitting on existing geometric field patterns. As such, they show great promise for the development of fully operational crop classification models.
Climate change impacts have accounted for a decline of 5.5% in wheat yield globally. The decline is expected to continue by another 1.6% due to trends in temperature, precipitation, and carbon dioxide. This study investigates satellite-based approaches for crop growth monitoring and yield forecasting in two geographically distinct countries, Poland and South Africa. Cereal production in Africa is very low, and wheat production accounts for less than 2% of all the wheat grown in developing countries. South Africa and Ethiopia produce about 80% of the wheat on the continent; however, South Africa remains a net importer of wheat. Drought is one of the major natural disasters affecting agricultural production in both countries. Droughts occur almost every year, usually at different times of the growing season. The yield reduction depends on the crop's phenological stage when drought occurs.
The joint project between the two countries investigated satellite-based crop growth monitoring approaches using Terra MODIS and Sentinel-2 in conjunction with ground-based meteorological data to determine crop water requirements, irrigation timing, and crop yield predictions for winter wheat in both countries. Ground data acquired over the same period were used to develop the model for crop yield estimates and irrigation timing requirements. The MODIS data consisted of eleven years of observation (between 2003 and 2021) and covered over 100 wheat fields.
Field measurements were conducted at Joint Experiment for Crop Assessment and Monitoring (JECAM) sites. The study areas are JECAM Poland, the Wielkopolska cropland region in western Poland consisting of patched fields of mainly wheat, rape, sugar beet and maize, and JECAM South Africa, the Eastern and Western Free State for winter (wheat) and summer (maize) crops. The Elementary Sampling Unit (ESU) for all measurements was a 30 x 30 m square, for the correct characterization of a 10 m Sentinel-2 pixel. The measurements were taken from the north, south, east, and west corners to capture all variation present within sites. Vegetation sampling was designed in a square, with samples taken at different locations under representative conditions. The in-situ measurements include, in particular: high-resolution spectral measurements covering the VIS-SWIR spectral range (350 nm – 2500 nm), leaf area index, soil moisture, wet and dry biomass, type of vegetation cover and its phenological stage. The measurements were recorded every 3 weeks during the growing season. Meteorological conditions were continuously measured (i.e. air temperature and humidity, wind speed and direction, precipitation and net radiation). The ground observations of the winter wheat fields consisted of crop phenology and crop yield data. The air temperature data were incorporated into the crop yield model, the rainfall was used for validation, and the model was afterwards adapted for Sentinel-2 data.
The data were analyzed using the accumulated 8-day NDVI (MOD09Q1) and the accumulated 8-day differences between LST (MOD11A2) and air temperature (TA) from meteorological data. The rapid increase in the accumulated NDVI curve occurs at a lower accumulated difference between LST and TA (∑(LST-TA)), and this results in a high yield at the end of the season. During a dry season, however, the accumulated difference between LST and TA increases strongly, resulting in a lower rate of accumulated NDVI. In a good growing season, crop heading occurs earlier, at a lower accumulated temperature difference (∑(LST-TA)), than in a dry season, and this has a direct response in crop yield. Crop water demand at development stages has been extracted from the analysis of crop growth conditions. The FPAR was used to determine the different crop phenological stages. The results have been verified using meteorological data such as rainfall between the different crop phenological stages, measured crop yield and ground-truth data.
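The two accumulated indicators reduce to running sums over the 8-day composites, as in the minimal sketch below; the array names are illustrative and assume NDVI, LST and TA have already been aligned to the same 8-day time steps for one field and season.

    import numpy as np

    def accumulated_indicators(ndvi, lst, ta):
        # ndvi, lst, ta: 1-D arrays over the season's 8-day composites
        acc_ndvi = np.cumsum(ndvi)        # ∑NDVI: crop growth accumulation
        acc_dlt = np.cumsum(lst - ta)     # ∑(LST-TA): thermal/water-stress proxy
        return acc_ndvi, acc_dlt

A steep ∑NDVI curve at low ∑(LST-TA) then indicates a favourable season, while a rapid rise of ∑(LST-TA) flags drought stress.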
The results obtained varied depending on the prevailing meteorological conditions in a given year, the fertilization, and the irrigation methods used. In Poland, during the surveyed period, the highest yield was obtained for 2004 (100.3 dt ha-1). Winter wheat had already entered the heading phenological phase at just 145 DOY; the values of ∑NDVI and ∑(LST-TA) were 6 and 45 °C, respectively. For the other years, in which low yields were recorded, the averages of the same parameters at the heading stage were 5 and 80 °C. In the study area in South Africa, it was noted that 2019 brought worse conditions for wheat crop development and a later increase in NDVI, while 2020 brought better conditions and NDVI increased a month earlier. The highest yields (> 100 dt ha-1) were observed for fields which were tilled and irrigated, with cultivar Pan 3471, while the lowest yields (< 50 dt ha-1) were observed for rainfed fields with fertilizer and herbicide applied once, with cultivar SST 347.
The research work was conducted within the project financed by the National Centre for Research and Development under Contract No. PL-RPA/02/SAPOL4Crop/43/2018, titled "SA Polish collaborative crop growth monitoring and yield assessment system for early warning utilizing new satellite Earth Observations data from Copernicus Programme".
The Common Agricultural Policy (CAP) of the European Union has been continuously implemented for many years with different solutions and approaches, always maintaining full compliance with sustainable development premises. Earth Observation techniques, such as remote detection of crop types, are necessary for the correct implementation of the assumptions of the EU CAP by the competent paying agencies in individual EU Member States. For these purposes the most desirable solutions are tailor-made ones, which translate innovative image processing methods into operational agricultural monitoring. In many cases cloud computing platforms have become suitable places for developing such dedicated applications. Thanks to immediate and direct access to the Copernicus data repository with efficient and dynamically scalable computing power, one of the DIAS platforms, CREODIAS, supports the conduct of CAP projects and the implementation of tools such as Sen4CAP and Agrotech. The Sen4CAP project was funded by ESA and performed by a consortium led by the Université Catholique de Louvain; its outcome, ready-to-use software as a service, is available on the CREODIAS cloud. The solution applies machine learning algorithms to Sentinel-1, Sentinel-2 and Landsat 8 data combined with in-situ information from the Land Parcel Identification System (LPIS), in order to generate the following products: a cultivated crop type map, a grassland mowing product, a vegetation status indicator and an agricultural practices monitoring product. While Sen4CAP is already being successfully used operationally by many European countries, the Agrotech project is in the evolution phase. The aim of the Agrotech project is to develop algorithms to perform crop type classification, detection of anomalies in crops for early detection of diseases and pests, biomass increase detection and physical damage detection. The technology will be based on the automatic analysis of combinations of different Very High Resolution satellite images of the Earth, using segmentation methods and machine learning algorithms, in particular deep neural networks, created in the project.
Early warning systems (EWS) play a fundamental role in food security at the global, regional and national scales. Yet, after more than 45 years of Earth Observation, the use of these data by agencies in charge of global food security remains uneven in its results, and discrepancies in crop condition classification regularly occur (Becker-Reshef et al., 2020). It therefore seems more than necessary to strengthen the confidence of decision makers and politicians in these systems. Fritz et al. (2019) identified, through a survey, different gaps in methods. They highlighted the need to better understand where the input data sets (precipitation and vegetation indices) have discrepancies, and the need to develop tools for automated comparison.
This study aims to respond partially to this need by conducting a comparative experiment of a set of vegetation growth anomalies produced by four Early Warning Systems in West Africa for the 2010-2020 period.
We first reviewed the crop monitoring systems of the Early Warning Systems in West Africa (Nakalembe et al., 2021), with a focus on the vegetation anomalies indices. Four systems were studied: FEWS-NET (Famine Early Warning Systems Network) developed by USAID (US Agency for International Development), the VAM (Vulnerability Analysis and Monitoring) seasonal explorer of the WFP (World Food Program), ASAP (Anomaly hot Spots of Agricultural Production) developed by the JRC (Joint Research Center) and GIEWS (Global Information and Early Warning System on Food and Agriculture) developed by FAO (Food and Agriculture Organization of the United Nations). These four systems contribute to the international CM4EW (Crop Monitoring for Early Warning) which is the GEOGLAM component devoted to countries-at-risk (Becker-Reshef et al., 2020).
Then, a set of NDVI-based vegetation growth anomaly indicators (one per EWS) was selected, harmonized (standardized), classified (9 anomaly classes) and compared in time and space. The extreme classes, corresponding to less than 15% and more than 85% of the rank-percentile values over the 2010-2020 period, were labelled “negative alarm” and “positive alarm” respectively (the other classes were grouped under the label “absence of alarm”).
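The alarm labelling reduces to rank-percentile thresholds, as in the sketch below; the 9-class scheme is collapsed here to the three alarm labels, and the names are illustrative.

    import numpy as np
    from scipy.stats import rankdata

    def alarm_classes(anomaly, low=15, high=85):
        # anomaly: 1-D standardized NDVI anomaly series for one pixel, 2010-2020
        pct = 100.0 * (rankdata(anomaly) - 1) / (len(anomaly) - 1)
        labels = np.full(anomaly.shape, "absence of alarm", dtype=object)
        labels[pct < low] = "negative alarm"
        labels[pct > high] = "positive alarm"
        return labels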
This exploratory work revealed that, despite a common satellite image data set (mainly MODIS NDVI), there are spatio-temporal divergences between the anomaly classes, especially when seasonal variations are considered. Considering the alarm classes (positive, negative, absence), the use of a cropland mask slightly strengthens the annual similarities between the four EWSs, and it was thus used in the subsequent comparisons. The pairwise ("two by two") analyses displayed similarities ranging from 52% (FEWS-NET and GIEWS) to 70% (VAM and ASAP). The four systems together displayed similarities between 24.5% and 33.7%. In terms of trend over the 2010-2020 period, the systems show no significant trends in the percentage of the negative alarm class, except FEWS-NET (p-value < 0.05).
The spatio-temporal divergences could be explained by the diversity of methods used by the different EWSs for NDVI anomaly calculations (products, smoothing, spatial and temporal resolution). In order to go further in the interpretation of these divergences, the next step will be to compare these anomalies with other spatial data sources, such as anomalies of vegetative biomass currently simulated by the AGRHYMET-CIRAD agrometeorological model SARRA-O, or, in the longer term, with textual information extracted from local newspaper articles using automatic language processing tools.
To conclude, this exploratory study provides new perspectives on the comparison of EWS anomaly products in West Africa, which remains a challenge in the current environment where more and more products are emerging.
References:
Becker-Reshef, I. et al., "Strengthening agricultural decisions in countries at risk of food insecurity: The GEOGLAM Crop Monitor for Early Warning," Remote Sensing of Environment, vol. 237, 111553, Feb. 2020, doi: 10.1016/j.rse.2019.111553.
Fritz, S. et al., "A comparison of global agricultural monitoring systems and current gaps," Agricultural Systems, vol. 168, pp. 258-272, Jan. 2019, doi: 10.1016/j.agsy.2018.05.010.
Nakalembe, C. et al., "A review of satellite-based global agricultural monitoring systems available for Africa," Global Food Security, vol. 29, 100543, Jun. 2021, doi: 10.1016/j.gfs.2021.100543.
It is well established that, due to a changing climate, global sea level is increasing and that large-scale weather patterns are changing. However, these changes are not geographically uniform and are not steady in time, with short-term variability on a range of time scales (seasonal and inter-annual). It has been shown that, taking into account socio-economic factors, several regions are particularly vulnerable to changes in sea level. At highest risk are coastal zones with dense populations, low elevations, appreciable rates of subsidence and inadequate adaptive capability.
There is a strong imperative to improve awareness of coastal hazards and promote sustainable economic development in marine areas. A key challenge in the implementation of coastal management is the lack of baseline information and the subsequent inability to effectively assess current and future risk.
Access to enhanced regional information on coastal risk factors (sea level, wave and wind extremes) improves planning to protect coastal communities and safeguard economic activity. This information can contribute to increased industrial and commercial competitiveness in the maritime sector, which is heavily dependent on access to accurate, relevant oceanographic information. For port operations, sea level heights and tidal currents are vital for operational efficiency. Wind and wave climatologies are fundamental to infrastructure design and the operational planning of offshore activities. Coastal tourism and human settlement are equally affected by these parameters, and therefore sharing skills and enabling access to currently difficult-to-obtain satellite data are significant development steps.
The challenge is to provide access to data on sea level, wind and waves and to support understanding of variations in these key ocean features as they change seasonally, inter-annually and due to climate change. It is important to measure and understand these regional and short-term variations so that appropriate planning and adaptation measures can be implemented. This will enable organisations to better plan operational activities, infrastructure development and the protection of communities, ecosystems and livelihoods.
The Coastal Risk Information Service (C-RISe) project has provided satellite-derived data on sea level, winds, waves and currents to support vulnerable coastal populations in adapting to the consequences of climate variability and change.
The project has enabled institutions in the partner countries of Madagascar, Mozambique and Mauritius to work with the C-RISe products to inform decision-making. It has enabled effective uptake of C-RISe data by commercial and operational sectors in the region and contributes to the improved management of coastal regions, enabling these countries to build increased coastal resilience to natural hazards.
• C-RISe has provided data essential for understanding coastal vulnerability to physical oceanographic hazards, which were not otherwise available to partners due to the lack of tide gauges in the region and the expertise required to process the satellite data.
• Software and training materials enable partners to validate and analyse these data in ways that are relevant to their specific needs and activities.
• Capacity building increases the understanding of the value of these data in addressing coastal risk. It also increases the number of organisations and individuals capable of working with satellite data, and facilitates work towards the application of data within Use Cases.
• The Use Cases have facilitated operational uptake by the partners, integrating C-RISe data into their work streams and providing examples for dissemination and training.
The project offers several opportunities to expand, including an increased range of data and information provided, wider geographical coverage, and a broader capacity building remit. In building local capacity and focusing on the development of Use Cases in line with our partners’ needs, C-RISe has demonstrated the vast range of issues that these data can be used to understand and address.
This presentation will introduce the project, summarise key findings, and present results from the Use Cases. It will also present recommendations for further capacity building in regions with similar challenges and levels of resource.
C-RISe was funded by the UK Space Agency under the International Partnership Programme, which was designed to partner UK space expertise with overseas governments and organisations. It was funded from the Department for Business, Energy and Industrial Strategy’s Global Challenges Research Fund (GCRF).
SOLSTICE (Sustainable Oceans, Livelihoods and Food Security Through Increased Capacity in Ecosystem research in the Western Indian Ocean) is a four-year international collaborative project that aims to strengthen capacity in the Western Indian Ocean (WIO) region to address challenges of food security and sustainable livelihoods for coastal communities, where millions of people depend on small-scale (subsistence and artisanal) fisheries. This presentation will introduce two related SOLSTICE studies, based on satellite observations, concerned with identifying upwelling and ocean fronts in the WIO, with the eventual aim of producing potential value-added products.
The study region in the WIO is heavily influenced by the monsoon seasons, with distinct phases from December to February (Northeast Monsoon) and from May through September (Southeast Monsoon). These monsoon seasons drive changes in ocean physics, and hence biogeochemistry, through current- and wind-driven mechanisms, resulting in changes in current direction as well as seasonal upwelling.
The first study is concerned with developing a data-driven algorithm for the identification and classification of the seasonal Somali upwelling. The methodology uses remotely sensed daily chlorophyll-a and sea surface temperature (SST) data sourced from GlobColour (https://hermes.acri.fr/) and OSTIA (https://ghrsst-pp.metoffice.gov.uk/ostia-website/index.html), respectively. To detect upwelling areas, an unsupervised machine learning (K-means clustering) approach is used, which successfully delineates the upwelling core, the upwelling surrounds, and non-upwelling ocean regions. The technique is shown to be robust, with accurate classification of unseen data. Once upwelling regions have been identified, classification of extreme upwelling events is performed using confidence intervals derived from historical data. The combination of these two approaches provides the foundation for a near-real-time upwelling information system.
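A minimal sketch of the clustering step is given below, assuming co-located daily SST and chlorophyll-a grids as the only inputs; the choice of three clusters follows the core/surrounds/non-upwelling classes in the abstract, while the feature scaling and log-transform are illustrative.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # sst, chl: assumed 2-D arrays (lat, lon) for one day; chlorophyll is
    # log-transformed as it is approximately log-normally distributed.
    feats = np.column_stack([sst.ravel(), np.log10(chl.ravel())])
    valid = np.isfinite(feats).all(axis=1)

    X = StandardScaler().fit_transform(feats[valid])
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

    labels = np.full(feats.shape[0], -1)      # -1 marks land/cloud gaps
    labels[valid] = km.labels_                # coldest cluster ~ upwelling core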
There is a wide variety of algorithms for ocean front detection based on different ocean variables. Frontal zones are important for many fisheries and can be used to target locations to maximise catch. As part of the SOLSTICE programme, we describe a simple algorithm for detecting regions associated with large ocean fronts from satellite SST (OSTIA) and apply the same approach to outputs from a numerical ocean model (NEMO). This approach has proved capable of readily identifying only the main oceanic frontal zones and their variability and location throughout the year.
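One simple detector of this kind thresholds the SST gradient magnitude, as sketched below; the threshold value and grid spacing are illustrative, not the project's tuned settings.

    import numpy as np

    def front_mask(sst, dx_km, threshold=0.05):
        # sst: 2-D gridded field (°C); dx_km: grid spacing in km.
        gy, gx = np.gradient(sst, dx_km)      # spatial derivatives (°C/km)
        grad_mag = np.hypot(gx, gy)
        return grad_mag > threshold           # True where a large front is present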
Both strands of work have shown promise within their respective regions with the possibility of further application within and beyond the WIO. These identification methods have the potential for aiding fisheries management as well as providing broader scientific insights into WIO physical and biological processes.
The Special Priority Program (SPP-1889) ‘Regional Sea Level Change and Society - SeaLevel’ (2016-2022), funded by the German Research Foundation (DFG), performs a comprehensive, interdisciplinary analysis to advance our knowledge of regional sea level change (SLC), while accounting for human-environment interactions and socio-economic developments in the coastal zone. During its second funding phase (2019-2022), SeaLevel consists of 15 projects, bringing together over 65 natural and social scientists from 23 German research institutions and a wide range of disciplines, such as physical oceanography, geophysics, geodesy, hydrology, marine geology, coastal engineering, geography, sociology, economics and environmental management. By combining diverse modern methodologies, observations and models, natural and social scientists jointly aim to create a scientific knowledge base for quantitative, integrated coastal zone management, applicable to many endangered places globally and essential for safety, coastal/land use planning, and economic development.
The SeaLevel program focuses on the North and Baltic Seas with potential impacts on Germany, and the South-East Asia/Indonesia region, encompassing coastal megacities, low-lying islands and delta regions, in order to understand how coastal vulnerability, adaptation and response strategies towards SLC vary in distinctly different socio-politico-economic and cultural contexts.
The main research activities of SeaLevel are: a) contributions to global and regional sea level changes, b) regional biophysical and social impacts in Northern Europe and S.E. Asia/Indonesia, and c) adaptation, decision analysis and governance. These research objectives include improving the physical knowledge of SLC and regional-to-local scale projections, investigating which socio-institutional factors enable or hinder coastal societies in coping with SLC, determining the natural and social coastal systems’ responses to future SLC, and assessing adaptation and risk governance strategies under given technical, cultural and socio-politico-economic constraints. Such integrated analyses require SLC information (local SL projections, storm surges, waves and extremes) and uncertainty and risk measures to be provided at the coastlines.
In this presentation, we will describe the goals and status of the SeaLevel program, with particular attention to recent results from different SeaLevel natural and social science studies in the coastal zone which benefit from the use of remote sensing observations in synergy with models and other observations.
Mangroves are highly productive tropical and subtropical ecosystems at the interface between land and sea. They provide (i) important ecosystem services to coastal communities and (ii) habitat for birds, fish, crustaceans and other invertebrates, and their root systems are particularly attractive to juvenile fish seeking shelter from predators. Mangroves also allow carbon sequestration in the soil, reduce coastal erosion and attenuate waves, providing valuable protection against climate change effects. However, globally, the extent of mangroves continues to decline, mainly due to human population growth associated with coastal development and global environmental changes.
Since the mid-2000s, there has been an increased awareness of the services rendered by mangroves. In addition to being recognized by multilateral environmental protection agreements (CITES, Ramsar…), mangroves have been the focus of many international research and conservation programs.
Remote sensing has been widely proven to be an essential tool for monitoring and mapping highly threatened, often difficult-to-access mangrove ecosystems. It provides important information for habitat inventories, detection and monitoring of changes, support for ecosystem assessment (biomass and regeneration capacity), monitoring and management of natural resources, as well as ecological and biological functions. The understanding of mangrove ecosystems benefits greatly from the current context of Earth observation, which offers a growing number of sensors providing data of different natures (optical and SAR) and resolutions (spatial, temporal and spectral), and a constantly growing open-data and open-source environment.
In this context, we propose a new methodological framework dedicated to the monitoring of mangrove dynamics based on remote sensing. It aims at proposing a standardized approach to data processing in a generic framework. Ready-to-use remote sensing products will be provided through future on-line web services intended for local stakeholders, policy makers and actors in mangrove dynamics monitoring and conservation.
This framework integrates multi-sensor remote sensing data, including Landsat, Sentinel, SPOT, Pleiades, PlanetScope, ALOS PALSAR and GEDI, combined with in-situ measurements. IRD’s long experience and numerous ongoing research projects also make it possible to provide a wide range of remote-sensing products.
The site of Bombetoka Bay in Madagascar was chosen to develop this methodological framework, based on multi-sensor remote sensing data and in-situ measurements. We combine unsupervised and supervised algorithms, as described in Figure 1.
Figure 1: The methodological framework combining unsupervised and supervised processes for the monitoring of mangroves from high (HR) and very high (VHR) resolution remote sensing data.
An unsupervised texture-based approach (FOTOTEX, https://framagit.org/espace-dev/fototex) is used to identify mangrove units from VHR data (SPOT, Pléiades). Within these units, we combine VHR and HR data to extract descriptors at the unit scale.
More specifically, at high resolution:
• Landsat time series are used to characterize the distribution and long-term evolution of mangroves in relation to sedimentary dynamics over time and space;
• NDVI time series from Sentinel-2 provide up-to-date mangrove distribution maps and indicators of the evolution of mangrove cover due to natural and anthropogenic processes, in order to assess mangrove gain/loss over defined areas;
• Sentinel-1 time series provide insights into the hydrodynamics of the bay (maximum water extent, water permanency), which is strongly related to mangrove evolution.
At very high resolution, Pleiades and SPOT 6/7 images allow the extraction of mangrove features (texture, density, fragmentation…). The latter are combined with other variables such as AGB and canopy height (from GEDI) to characterize mangrove types (as landscape units) at fine scale.
Then, in IOTA2 (https://www.theia-land.fr/product/iota-2/), these descriptors are used as training datasets for a random forest algorithm dedicated to mapping mangrove units from Sentinel-1 and Sentinel-2 time series.
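Stripped of the IOTA2 tooling, this supervised step amounts to fitting a random forest on per-pixel time-series features, as in the sketch below; features and unit_labels are assumed arrays derived from the S1/S2 stacks and the VHR unit map, and the hyperparameters are illustrative.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # features: (n_pixels, n_features) S1/S2 time-series descriptors (assumed)
    # unit_labels: (n_pixels,) mangrove unit labels from the VHR analysis (assumed)
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, unit_labels, test_size=0.3, stratify=unit_labels, random_state=0)

    rf = RandomForestClassifier(n_estimators=500, n_jobs=-1).fit(X_tr, y_tr)
    print("held-out accuracy:", rf.score(X_te, y_te))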
A field campaign will validate which variables and features are the most effective for discriminating mangrove units in Bombetoka Bay, in order to (i) assess the accuracy and reliability of the method and (ii) adapt the methodological framework if necessary.
This approach is designed for reproducibility and genericity in order to favour (i) the updating of maps and indicators, (ii) the deployment of the method on other mangrove sites, and (iii) the availability of standardized remote sensing products for mangrove description, monitoring and conservation. A specific ongoing effort will result in a web interface that will soon offer innovative services to the community of users involved in mangrove monitoring.
Themes: mangrove forest monitoring; remote sensing; coastal areas; conservation
The ongoing climate change and the resulting pressures have created the need to assess the sustainability of the coastal zone, especially in populated areas. City beaches, apart from their remarkable importance for citizens' well-being, constitute a large economic sector, which is particularly evident at small scales. The beach area simultaneously forms a natural barrier between the sea and the mainland, absorbing high hydrodynamic pressures and directly protecting the urban coastal zone from sea flooding. By articulating the anthropogenic and environmental pressures that urban beaches receive, while assessing the economic impacts on the local community, one understands the delicate balance in which these dynamic systems exist. Coastal towns with sandy "pocket beaches" are very popular tourist destinations worldwide, especially in Greece.
With the rapid urban sprawl of coastal cities and the underrated dangers posed by Sea Level Rise (SLR) due to climate change, the need to study the sustainability of urban beaches has arisen. Taking Santorini Island as a pilot area, an attempt was made to compile a protocol for assessing the sustainability of beaches, with an emphasis on urban beaches. Physical and socioeconomic data from the entire coastal zone of Santorini were collected from remote sensing data and field measurements, and two vulnerability indicators (physical and social) were applied in order to visualize vulnerability on all beaches of Santorini. Based on each beach's vulnerability rating, possible flooding zones of urban areas are studied under three climate change scenarios (RCP4.5, RCP6.0, RCP8.5). Moreover, by adding the facilities and infrastructure to the flood risk zone assessment, the potential economic loss is calculated.
To expand the use of the evaluation protocol, an open-access web service has been created. Through existing data and the implementation of hydromorphological models in the pilot area, the user can visualize the spatial zone of flood risk due to sea surges and climate change and estimate the resulting economic damage. The protocol includes indices for estimating the current sustainability state of a beach using the tourism carrying capacity index, as well as a fused physical and socio-economic carrying capacity index. Furthermore, users will be given the option to calculate the tourist carrying capacity of any beach of their choice, entering their own data and exporting the results through the platform.
The platform aims to inform the public about the pressures on the coastal zone and the potential risks that inhabitants of coastal areas may face. The introduction of social vulnerability will highlight the anthropogenic pressures on the coastal zone due to beach exploitation. The service is intended as a supplementary tool for coastal zone management, as well as to inform and raise public awareness of the dangers posed by the exploitation of urban beaches and the effects of climate change.
The lack of monitoring and study of anthropogenic pressures in the coastal environment can create the impression that these dynamic systems are endangered only by the existing pressures of climate change. Through this service the user will be informed about all the risks of an urban coastal environment, studied through the three aspects of sustainability: economic, environmental and social. The aim of this platform is to raise public awareness of climate change and sustainable exploitation through user-friendly tools. The study of the vulnerability of Santorini’s coastline can be an auxiliary tool for decision-making by experts and management authorities.
Vietnam is one of the countries most affected by climate change. The study region in the Central Highlands is already vulnerable to extreme weather events such as those caused by the El Niño climate pattern. El Niño events typically occur every two to seven years and often result in severe droughts during the dry season in Vietnam’s Central Highlands, a region with a tropical savannah climate. The droughts have significant impacts on agricultural production, the economic and socioeconomic sector, and the environment. The last severe event took place in 2015/2016, and the effects of anthropogenic climate change could exacerbate this situation. The Central Highlands is one of the most important agricultural regions in Vietnam, growing coffee, rubber, pepper, cashew nuts, vegetables, and fruits, all of which are in high demand worldwide and have enormous export value. In this study, Earth observation time series are analyzed to examine the condition and development of vegetation in Dak Lak and Dak Nong provinces. Current land use is analyzed for regionally adjusted mapping using Sentinel-1/2 time series to obtain details not available in existing information products, e.g., the separation of cash crops such as coffee and rubber, which are of interest for subsequent land use type-specific evaluation. Further analyses focus on the spatial and temporal development of vegetation in the context of the 2015/2016 El Niño event and its possible consequences. Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI) time series are investigated for the annual dry season, December through March, from 2000 to 2020. The EVI is analyzed to determine monthly vegetation condition and its deviation from the 20-year mean, in order to provide long-term insights into the effects of drought on vegetation over two decades, as well as to distinguish normal from dry years and to identify other extreme events.
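The anomaly analysis reduces to a deviation from the multi-year monthly mean, as sketched below; the array layout and the use of a standardized (z-score) anomaly are assumptions for illustration.

    import numpy as np

    # evi: assumed array (n_years, n_months, ny, nx) of dry-season EVI
    # composites (December-March), e.g. for 2000-2020.
    clim_mean = np.nanmean(evi, axis=0)        # 20-year mean per month and pixel
    clim_std = np.nanstd(evi, axis=0)
    z_anom = (evi - clim_mean) / clim_std      # standardized monthly deviation

    # Strongly negative values (e.g. z_anom < -1) flag months with markedly
    # poorer vegetation condition, such as the 2015/2016 El Niño dry season.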
Since 1984, sandstorms have been observed in the southern and southwestern provinces of the country; they reached their peak in 2008 and 2009 and created many problems for the people of the southern, southwestern and western provinces, and as far as Tehran. Dust concentrations of up to 3,200 micrograms per cubic meter have been reported in Khuzestan province, more than 14 times the allowable limit.
Studies show that the main dust source areas fall into three regions: Mesopotamia, with 13 source areas across Iraq, southern Iran, southern Syria and northern Saudi Arabia; southwest Asia, with 10 source areas on the central plateau of Iran; and the Red Sea region, with 13 source areas in Egypt, northeastern Sudan and northwestern Saudi Arabia.
Studies show that the main cause of dust in these areas is climate change, with several contributing factors: prolonged droughts, whose greatest effect is to reduce rainfall; global warming as a result of greenhouse gases and fossil fuels; and improper use of natural resources, which has caused the destruction of vegetation and the creation of sand and dust sources that should be managed through a source-control plan. Step-by-step methods as well as trajectory trackers were examined, and, according to the environmental conditions, control methods were proposed to prevent the formation of sandstorms and dust.
Since Spain's accession to the European Union in 1986, major land use transformations have been observed, often driven by European, national and sub-national policies. At the same time, large areas of Spain are part of the dryland ecoregion, which is particularly sensitive to ecosystem degradation and affected by climate variability and long-term changes. The good availability of data as well as past and ongoing research make Spain an interesting case study, not only to observe land transformations in the context of political, socioeconomic, and climatic conditions, but also to understand their influence on spatial patterns of change.
Hill et al. (2008) and Stellmes et al. (2013) analyzed dominant land cover and land use transformations in Spain between 1989 and 2005 using NOAA AVHRR time series analyses (MEDOKADS, Koslowsky (1996)). For this purpose, several simple land surface phenology metrics, i.e., mean NDVI of the growing season, annual amplitude, and timing of maximum NDVI, were derived and linear trends were calculated. In addition, land use information (Corine Land Cover), precipitation time series, and population trends were used to consider the drivers of land cover change. Key observed processes included land abandonment in rural marginal areas and concurrent urbanization trends, as well as an increase in land use intensity associated with the exploitation of water resources.
With those studies ending in 2005, there is now an opportunity to extend the research over a longer period and analyze which processes have been dominant over the past 15 years. The spatial resolution of 1 km x 1 km constrained the previous studies, because changes within a pixel are only detectable when the magnitude and proportion of specific land use changes are large enough to alter the NDVI signal significantly. Mediterranean ecosystems in particular often exhibit fine-scale heterogeneity that cannot be resolved with such coarse-resolution data (Stellmes et al., 2010). However, these changes may be highly relevant, particularly for local/regional land management and for understanding individual actors’ decisions (Lambin and Geist, 2006). MODIS data can also only provide limited improvement in this regard, as they are still quite coarse at 250 m x 250 m resolution.
Due to advances in the free availability and pre-processing of Landsat imagery, it is now possible to generate higher temporal resolution time series for many areas of the globe, allowing us to draw conclusions about land surface phenology metrics and their changes even at finer spatial resolution. The objective of this study is to investigate the extent to which Landsat time series can be used to characterize land use change in a manner similar to the MEDOKADS study. For this purpose, we used the entire available Landsat data archive since 1986. The data were preprocessed using FORCE (Frantz, 2019) to ensure data consistency in space and time. Based on the resulting time series, we derived land surface phenology metrics and their trends. As a first step, we compared the Landsat-based trend analyses with the MEDOKADS data for the period 1989 to 2005. For the mean NDVI as well as the amplitude, the Landsat data show comparable trends in general, but with a much finer spatial structure. The quality of the timing of the maximum NDVI, on the other hand, depends strongly on the temporal density of Landsat images within each year and varies significantly. Therefore, a more robust measure is needed, e.g. the season in which the maximum NDVI occurs. In a further step, we will analyze the entire Landsat time series from 1986 to 2021 and present first results.
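For one pixel, the three phenology metrics and their linear trends can be computed as in the sketch below; the annual array layout is an assumption, and the amplitude is taken here simply as the seasonal max-min range.

    import numpy as np

    def lsp_metrics(ndvi_year):
        # ndvi_year: 1-D array of growing-season NDVI composites for one year
        return (np.nanmean(ndvi_year),                        # seasonal mean NDVI
                np.nanmax(ndvi_year) - np.nanmin(ndvi_year),  # annual amplitude
                int(np.nanargmax(ndvi_year)))                 # timing of maximum

    # ndvi: assumed (n_years, n_steps) stack for one pixel
    metrics = np.array([lsp_metrics(year) for year in ndvi])
    years = np.arange(metrics.shape[0])
    trend_mean_ndvi = np.polyfit(years, metrics[:, 0], 1)[0]  # linear slope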
References:
Frantz, D. (2019): FORCE – Landsat + Sentinel-2 Analysis Ready Data and beyond. Remote Sensing 11, 1124. http://doi.org/10.3390/rs11091124.
Hill, J., Stellmes, M., Udelhoven, T., Röder, A., & Sommer, S. (2008): Mediterranean desertification and land degradation: Mapping related land use change syndromes based on satellite observations. Global and Planetary Change, 64, 146-157.
Koslowsky, D. (1996): Mehrjährige validierte und homogenisierte Reihen des Reflexionsgrades und des Vegetationsindexes von Landoberflächen aus täglichen AVHRR-Daten hoher Auflösung. Institute for Meteorology, Freie Universität Berlin, Berlin.
Lambin, E.F., Geist, H.J., (2006): Land Use and Land Cover Change. Local Processes and Global Impacts. Springer Verlag, Berlin, Heidelberg, New York.
Stellmes, M., Udelhoven, T., Röder, A., Sonnenschein, R. and Hill, J. (2010): Dryland observation at local and regional scale - comparison of Landsat TM/ETM+ and NOAA AVHRR time series. Remote Sensing of Environment, 114 (10), 2111–2125, doi:10.1016/j.rse.2010.04.016.
Stellmes, M., Röder, A., Udelhoven, T. & Hill, J. (2013): Mapping syndromes of land change in Spain with remote sensing time series, demographic and climatic data. Land Use Policy, 30, 685-702.
During the last week of October 2021, an intense Mediterranean hurricane (medicane), named Apollo by the Eumetnet Storm Naming project, affected many countries on the Mediterranean coasts. The death toll reached 7 people, due to flooding from the cyclone in Tunisia, Algeria, Malta, and Italy.
The Apollo medicane persisted over these areas for about one week (24 October – 1 November 2021) and produced very intense rainfall and widespread flash-flood and flood episodes, especially over eastern Sicily on 25-26 October 2021.
CIMA Foundation's hydro-meteorological forecasting chain, including the cloud-resolving WRF model assimilating radar data and in situ weather stations (WRF-3DVAR), the fully distributed hydrological model Continuum, the automatic system for water detection (AUTOWADE), and the hydraulic model TELEMAC-2D, was operated in real time to predict the weather evolution and the corresponding hydrological and hydraulic impacts of Medicane Apollo, in support of the Italian Civil Protection Department's early warning activities and in the framework of the H2020 LEXIS and E-SHAPE projects.
This work critically reviews the forecasting performance of each model involved in the CIMA hydro-meteorological chain, with special focus on temporal scales ranging from very short-range (up to 6 hours ahead) to short-range forecasts (up to 48 hours ahead).
The WRF-3DVAR model showed very good predictive capability concerning the timing and location of the most intense rainfall over the Catania and Siracusa provinces in Sicily, thus also enabling very accurate predictions of discharge peaks and their timing for the creek hydrological network peculiar to eastern Sicily. Based on the WRF-3DVAR model predictions, the daily run of the AUTOWADE tool, using Sentinel-1 (S1) data, was brought forward with respect to its schedule in order to quickly produce a flood map (S1 acquisition performed on Oct. 25th, 2021 at 5.00 UTC; flood map produced the same day at 13.00 UTC). Moreover, considering that no S1 images of eastern Sicily were available during the period Oct. 26-30, 2021, an ad hoc tasking of the COSMO-SkyMed satellite constellation was performed, again based on the WRF-3DVAR predictions, to overcome the S1 data latency. The resulting automated operational mapping of floods and inland waters was integrated with the subsequent execution of the hydraulic model TELEMAC. The Medicane Apollo case study paves the way for similar future applications in Mediterranean areas, where intense rainfall processes are expected to become more frequent in light of ongoing climate change.
Climate change is intensifying the water cycle, bringing more intense precipitation and flooding in some regions, as well as longer and stronger droughts in others. The number of short-term and highly localized phenomena, such as thunderstorms, hailstorms, wind gusts or tornadoes, is expected to grow further in the coming years, with important repercussions for air traffic management (ATM) activities. One of the challenges for meteorologists is to improve the location and timing of such events, which develop on small spatial and temporal scales. In this regard, the H2020 Satellite-borne and IN-situ Observations to Predict The Initiation of Convection for ATM (SINOPTICA) project aims to demonstrate that numerical weather forecasts with high spatial and temporal resolution, benefiting from the assimilation of radar data, in situ weather stations, GNSS and lightning data, can improve the prediction of severe weather events for the benefit of air traffic control (ATC) and air traffic management.
As part of the project, three severe weather events were identified on Italian territory which resulted in airport closures, with heavy delays on arrivals and departures as well as numerous diversions. The data from the numerical simulations, carried out with the Weather Research and Forecasting (WRF) model and the 3D-VAR assimilation technique, will be integrated into the Arrival Manager, an air traffic control and management system. The Arrival Manager generates and optimizes 4D trajectories avoiding areas affected by adverse phenomena, with the objectives of increasing flight safety and predictability and reducing controllers’ and pilots’ workload. In addition to the numerical simulations, a nowcasting technique called PHAse-diffusion model for STochastic nowcasting (PhaSt) has been investigated to further improve ATC support systems for highly localized convective events. This work presents the results of the WRF and PhaSt experiments for the Milan Malpensa case study of 11 May 2019, demonstrating that it is possible to improve the prediction of such events in line with expectations and ATM needs.
Funded by the European Commission, the H2020 EuroSea project has the objective of improving the European ocean observing system as an integrated entity within a global context, delivering ocean observations and forecasts to advance scientific knowledge about ocean climate, marine ecosystems, and their vulnerability to human impacts, and to demonstrate the importance of the ocean for an economically viable and healthy society. In the framework of this project, our goal is to improve the design of multi-platform in situ experiments for the validation of high-resolution SWOT observations, with the aim of optimizing the utility of these observing platforms. To achieve this goal, a set of Observing System Simulation Experiments (OSSEs) is developed to evaluate different sampling strategies and their impact on the reconstruction of fine-scale sea surface height fields and currents. Observations from CTD, ADCP, gliders, and altimetry are simulated from three nature-run models to study the sensitivity of the results to the model used. Different sampling strategies are evaluated to analyse the impact of the spatial and temporal resolution of the observations, the depth of the measurements, and the season of the multi-platform experiment, as well as the impact of exchanging rosette CTD casts for a continuous underway CTD or gliders. The reconstructed fields are obtained by applying the classic optimal interpolation algorithm to the different configurations of the simulated observations. In addition, other reconstruction methods based on (i) machine-learning techniques, (ii) model data assimilation and (iii) the MIOST tool are tested. The analysis focuses on the western Mediterranean Sea, in a region located within a swath of SWOT during the fast-sampling phase.
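For reference, the classic optimal interpolation step can be sketched as below, mapping scattered simulated observations onto a grid; the Gaussian covariance, length scale and noise level are illustrative choices, not the study's tuned configuration.

    import numpy as np

    def optimal_interpolation(obs_xy, obs_val, grid_xy, L=50.0, noise=0.1):
        # obs_xy: (n, 2) observation positions (km); obs_val: (n,) SSH values
        # grid_xy: (m, 2) target grid positions; L: correlation length (km)
        def cov(a, b):
            d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
            return np.exp(-d2 / (2.0 * L ** 2))
        C_oo = cov(obs_xy, obs_xy) + noise ** 2 * np.eye(len(obs_val))
        C_go = cov(grid_xy, obs_xy)
        anom = obs_val - obs_val.mean()     # analyse anomalies about the mean
        return C_go @ np.linalg.solve(C_oo, anom) + obs_val.mean()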
The MedRIN (Mediterranean Regional Information Network), established in 2018, is a network for sharing developments and furthering Earth Observation (EO) scientific collaboration amongst European, North African, Levant, and American colleagues. The MedRIN is coupled with the framework of the Global Observations of Forest Cover and Land Dynamics (GOFC-GOLD; https://start.org/programs/gofc-gold/) and serves as a liaison between land-cover/land-use change remote sensing scientists and stakeholders in the Mediterranean region. MedRIN keeps its members well-informed of the latest advancements in Earth Observation applications based on NASA and ESA satellite data and data products. Furthermore, MedRIN aims to support tackling regional and local challenges, as described by the United Nations Sustainable Development Goals (SDGs). The objectives of the MedRIN network are based on the priority topics of the Mediterranean region and the neighboring countries: 1) urban and built-up areas (wildland-urban interface, population dynamics and how they affect the landscape); 2) rural areas / agriculture, forestry and wildlands (monitoring dynamic landscape changes); 3) hazards (fires including agricultural fires, earthquakes, floods, etc.); 4) soil and water resources management (irrigation/hydrology, soil degradation, desertification); 5) climate change; 6) education/training as a major component of all proposed priorities (TAT NASA-ESA model) and state-of-the-art techniques (artificial intelligence). In accordance with the GOFC-GOLD family of networks, the following MedRIN objectives have been established: a) better coordination and linkage of monitoring systems and databases across the Mediterranean community member countries; b) strengthening and upgrading regional/national EO networks; c) alignment of multi-modal and multi-source data compliant with international norms; d) utilization of Copernicus and relevant freely distributed services in the region by end users; e) contribution to free publicly-available data through interoperable databases and services.
The additional benefit for the Mediterranean region will be the synergies emerging from these collaborative efforts. The MedRIN is accessible to any entity and individual in the region for peaceful purposes and is expected to produce results and services for the well-being of citizens and the sustainable use of resources throughout the region. Existing networks and collaborations are leveraged, while cooperation across disciplines and across levels of decision-making and implementation throughout the stakeholder spectrum is supported. The network will also help support joint participation in projects and proposals and strive to develop collaborative structures to enable this. The MedRIN hosts annual meetings and workshops welcoming Mediterranean researchers to share their scientific developments, mature relations with colleagues in the region, and provide training to young scientists and community members on various EO topics, with a particular focus on land cover change dynamics. The MedRIN also participates in joint meetings and workshops with other regional networks, such as the South Central European Regional Information Network (SCERIN), where common themes and issues are discussed and collaborations established. MedRIN aims to keep its members abreast of the latest advancements in Earth observation applications based on NASA and ESA satellite data and data products, and includes training and capacity building as major components of all its activities. The MedRIN coordinators are collaborating to develop a regional “inter-institutional” program which would enable Master and PhD students working on MedRIN issues to transfer between different institutions in the network. The network has also facilitated the participation of young scientists from the MedRIN region in a previous NASA solicitation for collaboration on land-use/land-cover change issues.
This presentation will further describe the MedRIN network, its outreach capabilities and priorities, and its plans for network functions and events in 2022 and beyond.
The EXCELSIOR project (www.excelsior2020.eu) has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No 857510 and from the Government of the Republic of Cyprus through the Directorate General for European Programmes, Coordination and Development. The EXCELSIOR project aims to upgrade the existing Remote Sensing and Geo-Environment Lab established within the Cyprus University of Technology into a sustainable, viable and autonomous ERATOSTHENES Centre of Excellence (ECoE) for Earth Surveillance and Space-Based Monitoring of the Environment. The ECoE will provide the highest quality of related services at the national, European and international levels through the EXCELSIOR project under H2020 WIDESPREAD TEAMING. The vision of the ECoE is to become, within the next 7 years, a world-class Digital Innovation Hub (DIH) for Earth observation and geospatial information and the reference centre in the Eastern Mediterranean, Middle East and North Africa (EMMENA). There are distinct needs and opportunities that motivate the establishment of an Earth Observation Centre of Excellence in Cyprus. These are primarily related to the geostrategic location of Cyprus for solving complex scientific problems and addressing concrete user needs in the EMMENA region as well as South-East Europe. The ECoE has the potential to become a catalyst for facilitating and enabling international cooperation in EMMENA. The starting points for fostering regional scientific collaboration and exploiting untapped market opportunities in EMMENA are several EO and remote sensing networks, including GEO, MedRIN, NASA, NEREUS and GEO-CRADLE, as well as the network chains of the ECoE partners TROPOS, NOA and DLR.
The ECoE will have the following flagship equipment:
• Satellite data direct receiving station: The ECoE, in cooperation with DLR, will establish an EO Satellite Data Acquisition Station (DAS) to directly receive data from EO satellite missions, allowing Near Real Time (NRT) monitoring and thereby providing time-critical information for science and products within the receiving cone of the station, namely over the EMMENA region. The ECoE will acquire and process data in direct pass-through mode over the Eastern Mediterranean area. Cyprus is a unique location for this antenna: it will be the farthest south-eastern location within the European Union, thus providing extended coverage compared to other European antenna locations, including a wide range of data from Eastern Europe, Northern Africa and the Middle East. This includes critical real-time maritime surveillance areas such as the Eastern Mediterranean, the Black Sea, the Caspian Sea, the Persian Gulf and the Red Sea.
• Ground-based atmospheric remote sensing station (supersite for aerosol and cloud monitoring): The ECoE, in cooperation with TROPOS, will establish a ground-based atmospheric remote sensing station (GBS) by consolidating all necessary infrastructure to set up a supersite for calibration/validation and for aerosol and cloud monitoring. The instruments to be installed within the GBS include a Cloudnet station with SLDR cloud radar, microwave radiometer, Doppler lidar and laser disdrometer, a PollyXT lidar provided by TROPOS, as well as auxiliary instruments for the Cloudnet station, including a ceilometer, SAEMS/DOAS and a cloud scanner.
The ERATOSTHENES Centre of Excellence stakeholder hub can be used to engage further collaborations in the EMMENA region for the benefit of its citizens. This presentation shows how the EXCELSIOR teaming project and the ECoE strategy can strengthen cooperation on Earth observation in the EMMENA region in areas such as climate change, disaster risk reduction, water resources management, data analytics and energy.
Blue Economy encompasses those sectors and activities related to oceans, seas and coasts, such as fisheries, energy, aquaculture, natural resources, logistics, safety and security, transport, port activities, tourism, and shipbuilding repairs. Europe represents one of the leading maritime powers in the world. In 2018, the EU Blue Economy generated €750 billion in turnover and €218 billion in gross value added, and directly employed about 5 million people.

Satellite applications bring added value in creating innovative and sustainable growth paths for many industries, as in the maritime domain. In this context, satellite technology provides marine operators with reliable real-time information while ensuring coverage of vast and unreachable areas. At the regional level, growing attention is dedicated to the Mediterranean area, currently threatened by multiple challenges: biodiversity protection, increased human activities due to overtourism, and disaster management.

Satellite data provides a plethora of reliable and easy-to-use solutions for aquaculture, fisheries, algal bloom monitoring, safety and security, and coastal development, to name a few. Nevertheless, the take-up of satellite-based solutions in the region is far from being achieved. Scepticism persists on the end-users’ side due to a series of factors such as a lack of clear communication with service providers; a poor understanding of the benefits related to the integration of satellite-based solutions in their workflow; financial constraints; and a lack of knowledge and competencies for implementing and using satellite-based services efficiently.

Recently, Eurisy launched the Space4Maritime initiative. The objective is to identify and understand the needs of European maritime end-user communities, facilitating the dialogue with the space industry and the uptake of satellite services. In this frame, Eurisy started a series of interviews with end-users mostly located in the Mediterranean region. The overall objective is to identify the existing operational solutions applicable in the area through examples of practical uses of EO, as well as the bottlenecks that hold back the potential of satellite applications for the sustainable growth of the Blue Economy. The paper will mainly address public authorities, providing them with a set of recommendations on how to foster cooperation with maritime operators. Lastly, the paper also targets potential new end-users interested in integrating satellite solutions in their workflow.
The project
Soil sealing – also called imperviousness – is defined as a change in the nature of the soil leading to its impermeability. Soil sealing has several impacts on the environment, especially in urban areas, where it influences the local climate, heat exchange and soil permeability. Monitoring soil sealing is crucial for the Mediterranean coastal areas, where soil degradation combined with drought and fires contributes to desertification.
Some artificial features, like buildings, paved roads and paved parking lots, can be considered long-lasting. These land cover types are referred to as permanent soil sealing, because the probability of returning to natural use is low. Other land cover features included in the definition of soil sealing can be considered reversible: for them, the probability of returning to natural use is higher. The land cover classes included in reversible soil sealing have been defined with the users of the project, and include solar panels, construction sites at an early stage, mines and quarries, and long-term plastic-covered soil in agricultural areas (e.g., non-paved greenhouses).
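To make the distinction concrete, the two groups can be written as a simple lookup; the labels below are taken from the text above and are illustrative, not the project's official legend:

```python
# Illustrative grouping of soil sealing classes (labels from the text,
# not the project's official legend).
SOIL_SEALING_CLASSES = {
    "permanent": [
        "buildings",
        "paved roads",
        "paved parking lots",
    ],
    "reversible": [
        "solar panels",
        "early-stage construction sites",
        "mines and quarries",
        "long-term plastic-covered agricultural soil",  # e.g. non-paved greenhouses
    ],
}
```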
The Mediterranean Soil Sealing project, promoted by the European Space Agency (ESA) in the frame of the EO Science for Society – Mediterranean Regional Initiative, aims to provide specific products related to soil sealing, its degree and reversible soil sealing over the Mediterranean coastal areas, by exploiting EO data with an innovative methodology capable of optimising and scaling up their use together with non-EO data. These products are designed to allow a better characterisation, quantification and monitoring of soil sealing over the Mediterranean basin over time than current practices and existing services, supporting users and stakeholders involved in monitoring and preventing land degradation. The project started in March 2021, will produce its first results in March 2022 and the final products in March 2023.
The targeted products are high-resolution maps of the degree of soil sealing and the reversible soil sealing over the Mediterranean coastal areas (within 20km from the coast) for the 2015-2020 time period, at yearly temporal resolution with a targeted spatial resolution of 10m.
Stakeholders, product exploitation and geoanalytics indicators
The involvement of stakeholders and end-users is an essential element of the project, as stated by ESA in the call for proposals. From the early stages of the proposal, efforts have been made to reach a diversity of users and stakeholders; the presence of ISPRA in the consortium is a plus for the project in this sense.
We group the users into classes: municipalities; sub-national agencies or local governmental institutions; national institutions and research centres; and regional (e.g., EEA) and international (e.g., UN) institutions. Users are kept updated on and focused on project activities by providing them with concrete elements on which to give direct feedback. A questionnaire was shared with the stakeholders, and the results were discussed in a dedicated workshop held on 28/05/2021. About 20 people, from 13 different institutions, participated in the workshop. The users are also involved in the definition of a new way to serve them the project results: instead of delivering just a set of maps, the team is developing an extensive collection of indicators and analytics that will be integrated into an interactive dashboard, allowing users to quickly and easily access the information they need.
The team
The project team is led by Planetek Italia and includes ISPRA and CLS.
Planetek Italia is in charge of the development of the infrastructure, the engineering of the algorithms and the communication activities. CLS is in charge of the soil sealing mask and of the experimental reversible soil sealing processing algorithms, and ISPRA of the soil sealing degree processing algorithms. The interaction with the users is led by ISPRA, which is institutionally involved in the land degradation theme within international and regional organisations and is the national body responsible for the theme in Italy.
Methodology
Introduction
The project uses Sentinel-2 Level 1C as the optical/multispectral data source and Sentinel-1 SLC as the radar source. Different in-situ data are prepared for the machine learning steps depending on the target. The developed methodology aims to be applicable across the Mediterranean coasts through automatic processing of satellite data.
Considering the heterogeneity of landscapes and the extent of the Mediterranean area, three alternative approaches to S-2 calibration have been developed to cope with the varying availability of training data: NDVI calibration, linear regression, and an artificial neural network. All three methods share a common data preprocessing.
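As a rough sketch of what the simplest of these approaches could look like, the snippet below computes NDVI from Sentinel-2 red and NIR reflectances and fits a linear regression from NDVI to a soil sealing degree. The training values are synthetic and the function names are hypothetical, so this illustrates the idea rather than the project's actual processor:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def ndvi(red, nir):
    """NDVI from Sentinel-2 red (B4) and NIR (B8) reflectances."""
    return (nir - red) / (nir + red + 1e-9)

def calibrate_sealing_degree(ndvi_train, sealing_train):
    """Fit a linear mapping from NDVI to sealing degree (0-100 %)."""
    model = LinearRegression()
    model.fit(ndvi_train.reshape(-1, 1), sealing_train)
    return model

# Synthetic training samples: low NDVI tends to mean high sealing.
ndvi_train = np.array([0.05, 0.20, 0.40, 0.60, 0.80])
sealing_train = np.array([95.0, 70.0, 40.0, 15.0, 0.0])
model = calibrate_sealing_degree(ndvi_train, sealing_train)
print(model.predict(np.array([[0.30]])))  # predicted sealing degree in %
```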
The mission of the International Charter Space and Major Disasters is to facilitate the acquisition and the delivery of EO data at no cost to support disaster management and humanitarian relief operations in areas of the world affected by natural or man-made disasters.
In this framework, the EO data received from the Charter members is provided to the end-users via the Charter Operational System (COS-2), managed by the European Space Agency. Since 2017, ESA has worked to augment COS-2 with an on-line processing environment to facilitate the access to and processing of the big volumes of EO data (often hundreds of images) provided by the Charter members within an activation. After prototyping, development and the transfer to operations, in September 2021 this platform, named the ESA Charter Mapper, was officially opened to support Charter operations.
The main objective of the ESA Charter Mapper is to support the Charter Project Manager (PM) and the Value Adder (VA) during a Charter activation with the provision of a suite of on-line EO-based services operating on co-located multi-sensor EO data collections ingested from COS-2.
This platform is the first massive cloud processing platform handling a constellation of 42 satellites (33 EO missions) from 15 agencies, using state-of-the-art technologies such as Kubernetes, TiTiler, the SpatioTemporal Asset Catalog (STAC) and Cloud Optimized GeoTIFF (COG) formats. STAC Assets of EO data are catalogued in the ESA Charter Mapper using Common Band Name (CBN) classes that refer to common band ranges in the electromagnetic spectrum, allowing a one-to-one mapping of multi-mission and multi-sensor bands.
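The common-band-name idea can be illustrated with a small lookup; the band assignments below are indicative examples based on public Sentinel-2 and Landsat-8 band numbering, not the platform's actual table:

```python
# Indicative mapping of mission-specific band IDs to Common Band Names.
COMMON_BAND_NAMES = {
    "sentinel-2": {"B02": "blue", "B03": "green", "B04": "red", "B08": "nir"},
    "landsat-8": {"B2": "blue", "B3": "green", "B4": "red", "B5": "nir"},
}

def to_cbn(mission: str, band: str) -> str:
    """Resolve a mission-specific band ID to its common band name."""
    return COMMON_BAND_NAMES[mission.lower()][band]

# The same spectral region is addressed with one name across sensors:
print(to_cbn("Sentinel-2", "B08"), to_cbn("Landsat-8", "B5"))  # nir nir
```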
The ESA Charter Mapper lets the PM/VA access multi-sensor EO data and metadata, perform visual analysis, and run EO-based processing to extract geo-information from imagery.
The current service portfolio includes Pre-Processing, Advanced, and Specialised processors for specific hazard types. Two main types of Assets can be derived from both systematic and on-demand processing services: Visual Products (multiple-band Asset overview images as grayscale or false color RGB composites) and Physical Meaning Products (single-band Assets for TOA reflectance, brightness temperature, sigma nought in dB, spectral indexes, burned areas, surface displacements, and flood and hotspot bitmasks).
Concerning the visualization of EO data, multiple pre-defined RGB band composites can be directly viewed by the PM/VA in the map at full resolution after the systematic calibration of ingested optical and radar EO data. Furthermore, users can combine single-band Assets of calibrated datasets to create custom intra-sensor RGB band composites on the fly. Users can also visually compare pre- and post-event images directly in the map using a vertical slider bar, and apply GIS functionalities to query pixel values, visualize changes on the fly by stretching the histogram, and crop images. These visual change detection tools are quite versatile, applicable to many different natural disasters, and effective for getting a fast overview of the most affected areas. The comparison of thematic maps is also possible, allowing users to depict the evolution of catastrophic events.
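A compact sketch of the kind of on-the-fly histogram stretch behind such composites (plain NumPy; the platform itself relies on TiTiler, and the percentile choice here is illustrative):

```python
import numpy as np

def percentile_stretch(band, lo=2.0, hi=98.0):
    """Linearly stretch a band between its lo-th and hi-th percentiles."""
    p_lo, p_hi = np.percentile(band, [lo, hi])
    return np.clip((band - p_lo) / (p_hi - p_lo + 1e-9), 0.0, 1.0)

def rgb_composite(red, green, blue):
    """Stack three stretched bands into an (H, W, 3) RGB image."""
    return np.dstack([percentile_stretch(b) for b in (red, green, blue)])
```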
In terms of EO data exploitation, thanks to the automatic generation of Assets as EO data is received by the ESA Charter Mapper, each processing service is able to generate geo-information products systematically or on demand within very short times (e.g. spectral indexes, pan-sharpened images, binary maps from change detection algorithms, combinations of SAR intensity and multitemporal InSAR coherence). Use cases and processing results from selected activations will be presented in this work.
FLEX Instrument Flight Model Subsystems Features and Performance: Focal Plane System, Low Resolution Spectrometer, and Double Slit Assembly
H. Bittner1, Q. Mühlbauer1, M. Ibrügger1, C. Küchel1, A. Altbauer1, A. Serdyuchenko1, P. Sandri1, M. Kroneberger1, M. Erhard1, G. Huber1, A. Althammer1, Y. Gerome1, R. Wheeler2, T. Phillips2, Z. Locke2, P. Trinder2, S. Betts2, C. Greenaway2,3, Alejandro Fernández4, Alberto Antón4, Matthias Mohaupt5, Falk Kemper5, Uwe Zeitner5, Uwe Hübner6, Alexander Kalies7, Matthias Zilk7, Matthias Burkhardt7, Michael Helgert7
1) OHB System AG, 82234 Wessling, Germany
2) Teledyne e2v, Chelmsford, Essex, UK
3) Physics Department, Imperial College London, UK
4) Airbus CRISA, 28760 Tres Cantos, Spain
5) Fraunhofer Institute for Applied Optics and Precision Engineering (IOF), 07745 Jena, Germany
6) Leibniz Institute of Photonic Technology, 07745 Jena, Germany
7) Carl Zeiss Jena GmbH, 07745 Jena, Germany
The FLuorescence EXplorer (FLEX) constitutes ESA’s eighth Earth Explorer mission (EE8); the corresponding space-borne instrument is the FLuORescence Imaging Spectrometer (FLORIS), operating in the 500-780 nm spectral band. FLEX will provide information on the vegetation fluorescence signal, essential for a quantitative evaluation of the health status of vegetation.
The FLORIS instrument incorporates a High Resolution (HRSPE) and a Low Resolution Spectrometer (LRSPE), fed by a double-slit beamsplitting assembly (two slits of 84 µm x 44.1 mm each), itself illuminated through a nadir-looking telescope followed by a polarization scrambler. The operative spectral regions are 500–758 nm for the LRSPE and 677–780 nm for the HRSPE. The two imaging spectrometers are operated in push-broom mode. The focal plane of the LRSPE has a single detector unit with a spectral sampling of down to 0.6 nm, while the focal plane of the HRSPE has two co-aligned detector units to cover its spectral band with a sampling of down to 0.1 nm. The on-ground sampling distance is 293 m. The three detector units (Teledyne e2v CCD325-80) are high-dynamic-range, low-noise, back-illuminated frame-transfer CCDs with 450 (spectral dimension) x 1060 (spatial dimension) pixels of 28 µm (spectral) x 42 µm (spatial).
OHB System AG is responsible for the development of the HR and LR Focal Plane Systems (HR FPS and LR FPS), the Low Resolution Spectrometer, and the Double Slit Assembly (SLITA). Contributions to the development have been as follows: the Focal Plane Detector Units were developed by Teledyne e2v. Airbus CRISA developed the Front-End Electronics units for the HR and LR Focal Plane Systems; these units provide supplies and clocks for the detectors, and adapt and filter the video signal before performing the analog-to-digital conversion with a resolution of 16 bits at a sampling frequency of 1.7 MHz. Fraunhofer IOF (Jena) provided the sophisticated slit devices and components, as well as the primary mirror for the LRSPE. Carl Zeiss (Jena) provided the lens and mirror elements and the highly efficient, low-straylight grating for the LRSPE.
This paper presents the specific features of these three subsystems as well as their performance characteristics from the ongoing flight-model test campaigns.
The project is funded by ESA under Leonardo S.p.A. Contract No. 4000118350/FLEX B2-CD/OHB.
The main payload of Sentinel-6 Michael Freilich is a dual-band (Ku and C) pulse-width-limited radar altimeter, called Poseidon-4, that transmits pulses at a high pulse repetition frequency, thus making the received echoes phase-coherent and suitable for azimuth processing. Among the unique characteristics of Poseidon-4, it is worth recalling that digital pulse range compression is performed on board to transform the received chirp using a matched filter. Accordingly, a proper calibration approach has been developed, including both internal and external calibration.
In particular, this abstract presents the long-term monitoring of the internal calibration data for the chirp replica and for the attenuator, which are processed on ground by ad-hoc tools provided by Aresys to ESA:
• CAL1 INSTR: This mode measures the internal instrument transfer function in the Ku and C bands. The results of these measurements can be taken into account at digital compression level, in the frequency-domain chirp replica, to optimize the impulse response of the instrument.
• CAL ATT: Since knowledge of the amplification gain control directly impacts the σ0 measurements, an attenuation calibration is included in the design. This mode measures the peak of the range impulse response across the full attenuation dynamic range, which is then matched to a corresponding value on ground.
The performance of the Poseidon-4 altimeter is presented here through analysis of the long-term monitoring of the on-ground processed data from the CAL1 INSTR and CAL ATT calibration sequences commanded on board. The analysis of these calibration data verifies that the instrument has met its requirements and is maintaining its key performance over its lifetime. Moreover, in-depth analysis of the calibration data revealed how the instrument’s behaviour depends on its temperature and on the orbit of the satellite.
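To illustrate the matched-filter range compression that these calibrations support, the sketch below compresses a synthetic linear chirp; all parameter values are illustrative, not Poseidon-4's actual chirp or on-board implementation:

```python
import numpy as np

fs = 400e6  # sampling frequency (Hz), illustrative
T = 20e-6   # pulse duration (s), illustrative
B = 320e6   # chirp bandwidth (Hz), illustrative
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)  # linear FM pulse

# Received echo: the chirp delayed by 5 microseconds, plus noise.
rx = np.zeros(4 * len(chirp), dtype=complex)
delay = int(5e-6 * fs)
rx[delay:delay + len(chirp)] += chirp
rx += 0.05 * (np.random.randn(len(rx)) + 1j * np.random.randn(len(rx)))

# Matched filtering in the frequency domain: multiply by the conjugate
# spectrum of the chirp replica. Keeping this replica aligned with the
# instrument's true transfer function is what CAL1 INSTR monitors.
n = len(rx)
compressed = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(chirp, n)))
print("peak at sample", np.argmax(np.abs(compressed)), "expected", delay)
```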
The vast increase in available Earth Observation data, not least from the Copernicus Sentinel satellites but also from commercial providers, has created major challenges in providing storage, access, and processing solutions, challenges which will only intensify with new missions in the foreseeable future. Meanwhile, the paradigm is shifting away from scenes and images towards extended coverages, datacubes and digital twins. Already today, truly global datasets such as the Copernicus DEM require seamless, high-resolution, global grids which push traditional storage and processing concepts to their limits and beyond.
These challenges affect not only data providers but equally need to be coped with by the rapidly expanding sector of Earth Observation exploitation platforms and, not least, by the users themselves. These actors are confronted with the task of ingesting and fusing data of various types, levels, formats, and origins, best summarised as “Big Earth Data”.
A prerequisite for coping with these “Big Earth Data” coming from Earth Observation (EO) and other sources will be the integration of existing and future data infrastructures, a task that puts a strong focus on the coordination, harmonisation and interoperability of data and services. A multitude of standardisation initiatives are underway by various stakeholders such as INSPIRE, OGC, ISO, CEOS, and GEO. Georeferenced grids are a central concern in many of them, and the various approaches entail a coordination task of their own.
The intention of this Agora is to gather geospatial data providers and users as well as standardisation experts from different backgrounds, inform each other about specific requirements and current developments by e.g. discussing established grid-based approaches such as WGS84 LatLon, UTM, WMTS and EQUI7 as well as novel concepts such as Discrete Global Grid Systems (DGGS), and to promote a network for coordination and awareness raising for related existing and planned standards.
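As a small, concrete example of grid-based indexing, the standard WMTS/Web-Mercator tiling arithmetic maps a geographic coordinate to a tile index; these are textbook formulas, not tied to any one of the initiatives above:

```python
import math

def wmts_tile(lat_deg, lon_deg, zoom):
    """Standard slippy-map / WMTS tile indices (x, y) at a zoom level."""
    n = 2 ** zoom  # number of tiles per axis
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
    return x, y

print(wmts_tile(41.9, 12.5, 10))  # tile containing Rome at zoom 10
```

Discrete Global Grid Systems (DGGS) generalise this idea to hierarchical, often equal-area cells covering the whole globe, which is one reason they feature in the discussion.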
Speakers:
• Gino Caspari, GeoInsight (Ruhr University Bochum)
• Michael Jendryke, GeoInsight (Ruhr University Bochum)
• Peter Strobl, Senior Scientific Officer, JRC
• Enrico Cadau, Sentinel-2 Mission Management, ESA
• Henning Schrader, Head of Mapping & DEM Development, Airbus Defence and Space
Description:
This will be a discussion session on how to implement the recommendations of the separate meetings on terrestrial carbon, atmospheric carbon and ocean carbon and their interfaces, which will have taken place in Q4 2021 and Q1 2022. The Agora question is: ‘How do we convert the recommendations into calls for proposals in the context of the EC-ESA Earth System Science Initiative?’ The discussion will also try to identify the priorities in the short, medium and longer term, and look at how the networking and synthesis activities of the Science Clusters and Science Hub can be exploited to reinforce these activities.
Speakers:
Stephen Plummer (ESA)
Gilles Ollier (EC-RTD)/Diego Fernandez (ESA)
Marko Scholze (Lund, SE)
Ana Bastos (MPI-BGC, DE)
Jose Moreno (UV, ES)
Fabienne Maignan (LSCE, FR)
Description:
Introduction to the AVL platform, and demonstration of the implementation of a science use case involving raster and feature data and a machine learning application: testing the transferability of a crop type classification model. We will show the integration of raster data with arbitrary feature data stored in a geoDB, the computation of statistical distances between domains for different sets of features, and the training and validation of the random forest models.
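As a sketch of this kind of transferability test (hypothetical feature arrays standing in for the geoDB-backed data used in the demo):

```python
import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-sample features (e.g. band statistics) and crop labels
# for a source domain and a slightly shifted target domain.
X_src = rng.normal(0.0, 1.0, (500, 4)); y_src = rng.integers(0, 3, 500)
X_tgt = rng.normal(0.3, 1.1, (500, 4)); y_tgt = rng.integers(0, 3, 500)

# Per-feature statistical distance between the two domains: large values
# suggest a model trained on the source may transfer poorly.
dists = [wasserstein_distance(X_src[:, j], X_tgt[:, j])
         for j in range(X_src.shape[1])]
print("per-feature Wasserstein distances:", np.round(dists, 3))

# Train a random forest on the source domain and validate on the target.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_src, y_src)
print("target-domain accuracy:", clf.score(X_tgt, y_tgt))
```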
Description:
We have all agreed since the 1970s that Earth Observation (EO) data is key to understanding human activity and Earth changes. However, two trends today are forcing us to rethink the use of EO to tackle new challenges:
- Georeferenced data sources, data quantity and quality keep increasing, allowing global and regular Earth coverage;
- AI and cloud storage allow swift fusion, analysis and dissemination of these data on online platforms.
Combined, these two trends generate various reliable indicators. Once fused together, these indicators will allow the anticipation of future humanitarian, social, economic and sanitary crises, and of the adequate action plans to prevent them. Satellite imagery, 3D simulation, image analysis, mapping, georeferenced public and private data… we now have enough tools to give the Earth a Digital Twin, and this is not science fiction anymore.
Airbus and Dassault have joined forces to pursue this ambition, focusing on cities. The project aims at automatically building 3D digital models of cities, simulating their entire environment, and using them as a baseline to digitise impactful events.
During LPS22, Airbus and Dassault want to explain the reasoning behind this project, focusing on the evolution of urbanism but also taking into account the environment, the population from a health perspective, and the economic and security layers. What we offer here is a global approach to Earth Observation, not focusing on a single topic but answering a myriad of problems. What better event to explain this advanced and innovative project than LPS22?
Speaker:
Wendy Carrara (Senior Manager for Digital and European Institutions)
Company-Project:
BioPAL
Description:
• The Biomass Algorithm and Analysis Platform (MAAP) is a new and innovative platform that will be the main interface to the scientific mission community. It will provide access to all Biomass products and will support data discovery and simple analytics. In addition, it will provide an infrastructure to access the official processor and elements of it, to develop one’s own code, to process mission data and to validate the outcome. BioPAL, a key element of the MAAP, is an open-source project to jointly develop the scientific Biomass processor suite.
• In this demo we will introduce the MAAP and BioPAL and discuss relevant use cases with the audience.
Description:
The demo will take you from a global-perspective view of the world ocean circulation to selected regional maritime areas, highlighting high-priority applications that depend on ocean circulation. Satellite observations will be showcased and explained in the context of complementary model and in-situ observations. We will cover the four domains of application addressed by the ESA WOC project: safe navigation, sustainable fisheries, a clean ocean and a productive ocean.
Description:
A catalog of publicly available geoscience products, datasets and resources developed in the frame of scientific research projects funded by ESA Earth Observation (EO).