To achieve wind profile coverage at a global scale, the Aeolus satellite, equipped with the Atmospheric LAser Doppler INstrument (ALADIN), was launched in 2018 by the European Space Agency (ESA) and has now been operating successfully for more than three years. The wind retrieved by Aeolus is obtained from the Doppler-shifted frequency between the emitted and detected laser light, arising from Rayleigh scattering off air molecules as well as from Mie scattering off particles (e.g. cloud droplets and ice crystals, dust, aerosols) in the atmosphere (Ingmann and Straume, 2016). The global wind profiles from Aeolus serve various applications, including furthering the understanding of atmospheric dynamics, improving numerical weather prediction (NWP), and tracking the movement of air pollutants (ESA, 2020; Banyard et al., 2021; Rennie et al., 2021).
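As a minimal illustration of the retrieval principle (a sketch, not the operational L2B processor), the line-of-sight wind speed follows directly from the round-trip Doppler shift between emitted and detected light:

```python
# Sketch of the Doppler wind retrieval principle (illustrative only):
# for a round trip, the Doppler shift is delta_f = 2 * v_los / lambda,
# so v_los = delta_f * lambda / 2.
WAVELENGTH = 354.8e-9  # ALADIN ultraviolet wavelength in metres

def hlos_wind_from_doppler(delta_f_hz):
    """Line-of-sight wind speed (m/s) from the measured Doppler shift (Hz)."""
    return delta_f_hz * WAVELENGTH / 2.0

# A shift of about 56.4 MHz corresponds to roughly 10 m/s along the line of sight.
print(hlos_wind_from_doppler(56.4e6))
```

This also conveys the measurement challenge: resolving winds to 1 m/s requires resolving frequency shifts of only a few megahertz on a ~845 THz carrier.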
To evaluate the contribution of Aeolus observations to NWP, an experiment with and without Aeolus data assimilation was carried out with the European Centre for Medium-Range Weather Forecasts (ECMWF) model. The results of this so-called observing system experiment (OSE) demonstrated that Aeolus winds are able to improve medium-range vector wind and temperature forecasts, especially over tropical and polar regions (Rennie et al., 2021). However, the impact on vector wind forecasts within the planetary boundary layer (PBL) has not yet been studied in detail. Moreover, applications for wind-related activities and industries, such as the wind energy industry, need further scientific investigation. Hence, as a starting point, this study investigates the impact of Aeolus wind assimilation in the ECMWF model on wind forecasts within the PBL over Europe. The study is based on a high-resolution T639 OSE with 4D variational data assimilation for the June to December 2019 period, i.e., the early FM-B period.
First, we will compare the wind vectors at 10 m, 100 m, 950 hPa and 850 hPa from forecasts of the control experiment (no Aeolus) and the experiment with Aeolus data assimilated, with the aim of identifying the impact of Aeolus on near-surface winds for different global regions and forecast ranges. Next, taking ground-based measurements as a reference, including winds from conventional weather stations (Met Office, 2012), buoys (Met Office, 2006) and lidar sites, we will quantify the quality of the modelled winds with and without Aeolus data assimilation at different heights, thus determining whether the impact of Aeolus is positive or negative.
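The verification step described above could be sketched as a vector wind RMSE comparison between the two experiments against the same observations (all names and numbers below are hypothetical, for illustration only):

```python
import numpy as np

# Hypothetical sketch of the verification metric: root-mean-square vector
# wind difference between forecast and reference observations.
def vector_rmse(u_fc, v_fc, u_obs, v_obs):
    """Vector wind RMSE in m/s over all collocated samples."""
    du = np.asarray(u_fc) - np.asarray(u_obs)
    dv = np.asarray(v_fc) - np.asarray(v_obs)
    return float(np.sqrt(np.mean(du**2 + dv**2)))

# A negative difference (Aeolus minus control) would indicate a positive impact.
u_obs, v_obs = [5.0, 6.0], [1.0, 0.0]
rmse_control = vector_rmse([6.0, 7.5], [2.0, 1.0], u_obs, v_obs)
rmse_aeolus = vector_rmse([5.5, 6.5], [1.5, 0.5], u_obs, v_obs)
print(rmse_aeolus - rmse_control)
```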
The findings of this study have the potential to provide valuable information for our future work on wind energy applications. In particular, the prospects of Aeolus winds for wind power prediction as well as offshore wind farm operation and maintenance can be addressed.
Keywords: Aeolus satellite; wind forecast; ECMWF; data assimilation.
Acknowledgement: This study is part of the PhD project "Aeolus satellite lidar for wind mapping", a sub-project of the LIdar Knowledge Europe (LIKE) Innovative Training Network (ITN), a Marie Skłodowska-Curie Action funded by the European Union's Horizon 2020 programme (Grant number: 858358). The authors are very thankful to ECMWF for conducting the OSE and providing the data for analysis. Appreciation also goes to the Royal Netherlands Meteorological Institute (KNMI) for hosting PhD student Haichen Zuo for an external research stay.
References:
Banyard, T.P., Wright, C.J., Hindley, N.P., Halloran, G., Krisch, I., Kaifler, B. and Hoffmann, L. (2021) ‘Atmospheric Gravity Waves in Aeolus Wind Lidar Observations’, Geophysical Research Letters, 48(10). doi:10.1029/2021GL092756.
ESA (2020) Satellites track unusual Saharan dust plume, The European Space Agency. Available at: https://www.esa.int/Applications/Observing_the_Earth/Satellites_track_unusual_Saharan_dust_plume (Accessed: 28 August 2021).
Ingmann, P. and Straume, A.G. (2016) ‘ADM-Aeolus Mission Requirements Document’. ESA. Available at: https://esamultimedia.esa.int/docs/EarthObservation/ADM-Aeolus_MRD.pdf (Accessed: 22 December 2020).
Met Office (2006) ‘MIDAS: Global Marine Meteorological Observations Data’. NCAS British Atmospheric Data Centre. Available at: https://catalogue.ceda.ac.uk/uuid/77910bcec71c820d4c92f40d3ed3f249 (Accessed: 22 November 2021).
Met Office (2012) ‘Met Office Integrated Data Archive System (MIDAS) Land and Marine Surface Stations Data (1853-current)’. NCAS British Atmospheric Data Centre. Available at: http://catalogue.ceda.ac.uk/uuid/220a65615218d5c9cc9e4785a3234bd0 (Accessed: 21 November 2021).
Rennie, M.P., Isaksen, L., Weiler, F., de Kloe, J., Kanitz, T. and Reitebuch, O. (2021) ‘The impact of Aeolus wind retrievals on ECMWF global weather forecasts’, Quarterly Journal of the Royal Meteorological Society, 147(740), pp. 3555–3586. doi:10.1002/qj.4142.
During the last decade, new applications exploiting data from satellite-borne lidar measurements have demonstrated that these sensors can give valuable information about ocean optical properties. Within this framework, COLOR (CDOM-proxy retrieval from aeOLus ObseRvations) is an on-going (KO: 10/3/2021) 18-month feasibility study approved by ESA within the Aeolus+ Innovation programme. COLOR's objective is to evaluate and document the feasibility of deriving an in-water AEOLUS prototype product from the analysis of the ocean sub-surface backscattered component of the 355 nm signal.
Although the primary objectives of the Aeolus mission, and the resulting instrumental and sampling characteristics, are not ideal for monitoring ocean sub-surface properties, the unprecedented type of measurements from this mission is expected to contain important and original information on the optical properties of the sensed ocean volume. Being the first HSRL (High Spectral Resolution Lidar) launched into space, ALADIN (Atmospheric LAser Doppler INstrument) on ADM-Aeolus gives a new opportunity to investigate the information content of the 355 nm signal backscattered by the ocean sub-surface components. Based on these considerations, the COLOR project focuses on the potential AEOLUS retrieval of: 1) the diffuse attenuation coefficient for downwelling irradiance (Kd [m-1]); and 2) the sub-surface hemispheric particulate backscatter coefficient (bbp [m-1]).
To reach COLOR objectives, the work is organized in three phases: Consolidation of the scientific requirements; Implementation and assessment of AEOLUS COLOR prototype product; Scientific roadmap.
The core activity of the project is the characterization of the signal from the AEOLUS ground bin (Δrgrd). In principle, the ground bin backscattered radiation signal is generated by the interaction of the emitted laser pulse radiation with two media (atmosphere and ocean, Bgrd_atm and Bgrd_wat, respectively) and their interface (Bgrd_surf).
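The decomposition above can be written as a one-line residual. The following sketch (with hypothetical names and values) isolates the in-water term once the atmospheric and surface contributions have been estimated, e.g. from radiative transfer modelling:

```python
# Illustrative sketch of the ground-bin signal decomposition (all values
# hypothetical, in arbitrary signal units): the in-water contribution is the
# residual once the modelled atmospheric and surface terms are removed.
def in_water_signal(b_grd_total, b_grd_atm, b_grd_surf):
    """B_grd_wat, assuming B_grd_total = B_grd_atm + B_grd_wat + B_grd_surf."""
    return b_grd_total - b_grd_atm - b_grd_surf

print(round(in_water_signal(1.00, 0.62, 0.30), 6))  # residual in-water term
```

In practice the difficulty lies not in the subtraction itself but in estimating the atmospheric and interface terms accurately enough for the small in-water residual to be meaningful.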
To evaluate the feasibility of an AEOLUS in-water product, COLOR proposes to develop a retrieval algorithm that is structured in three independent and consecutive phases:
1) Pre-processing analysis: aimed at identifying suitable measurements to be inverted;
2) Estimation of the in-water ground bin signal contribution: aimed at removing contributions to the measured signal from variables other than the in-water ones;
3) Retrieval of in-water ground bin optical properties: aimed at estimating the targeted in-water optical properties.
Two parallel and strongly interacting activities are associated with each step of these phases:
a) Radiative transfer numerical modelling. This tool will be essential to simulate the relevant radiative processes expected to be responsible for the generation of AEOLUS surface bin signal.
b) AEOLUS data analysis. The objective of this activity will be to verify the information content of the AEOLUS ground bin signals and the assumptions for data product retrieval.
The potential AEOLUS in-water product will then be validated through the comparison of statistical properties obtained by analysing the whole set of AEOLUS data (at least one year of processed measurements) and the selected reference datasets: Biogeochemical-Argo floats, oceanographic cruises and ocean-colour satellites.
Preliminary results of the above-mentioned activities will be presented here. In particular, the sea-surface backscattering and the in-water contribution of the AEOLUS ground bin have been estimated through numerical modelling. Furthermore, preliminary experimental data analysis suggests that the observed excess of signal in the AEOLUS ground bin could be related to the signal coming from the marine layers. Analyses are planned in the second phase of these activities to disentangle the atmospheric and oceanic signal contributions in the AEOLUS ground bin.
Aeolus is the first Doppler wind lidar (DWL) in space to measure wind profiles. Aeolus is an ESA (European Space Agency) Explorer mission with the objective of retrieving winds from the collected atmospheric return signal, which results from Mie and Rayleigh scattering of the emitted laser light off atmospheric molecules and particulates. The focus of this contribution is on winds retrieved from data collected by the instrument's Mie channel, i.e., originating from Mie scattering off atmospheric aerosols and clouds.
The use of simulated data from Numerical Weather Prediction (NWP) models is a widely accepted and proven concept for monitoring the performance of many meteorological instruments, including Aeolus. Continuous monitoring of Aeolus Mie channel winds against ECMWF model winds has revealed systematic errors in retrieved Mie winds. Following a reverse-engineering approach, the systematic errors could be traced back to imperfections in the data of the calibration tables that serve as input for the on-ground wind processing algorithms.
A new methodology, denoted NWP calibration, makes use of NWP model winds to generate an updated calibration table. It is shown that Mie winds retrieved using the NWP-based calibration tables exhibit reduced systematic errors not only when compared to NWP model winds but also when compared to an independent dataset of very high-resolution aircraft wind data. The latter gives high confidence that the NWP-based calibration methodology does not introduce model-related errors into retrieved Aeolus Mie winds. Based on the results presented in this paper, the NWP-based calibration table, as part of the level-2B wind processing, has been part of the operational processing chain since 1 July 2021.
Clouds play an important role in the energy budget of our planet: optically thick clouds reflect the incoming solar radiation, cooling the Earth, while thinner clouds act as “greenhouse films”, preventing the escape of the Earth’s long-wave radiation to space. The cloud response to ongoing greenhouse-gas-driven climate warming is the largest source of uncertainty for model-based estimates of climate sensitivity and therefore for predicting the evolution of the future climate. Understanding the Earth's energy budget requires knowing the cloud coverage, its vertical distribution and optical properties. Predicting how the Earth's climate will evolve requires understanding how these cloud variables respond to climate warming. Documenting how the clouds' detailed vertical structure evolves on a global scale over the long term is therefore a necessary step towards understanding and predicting the cloud response to climate warming.
Satellite observations have provided a continuous survey of clouds over the whole globe. Infrared sounders have been observing our planet since 1979. Despite excellent daily coverage and daytime/nighttime observation capability, the height uncertainty of the cloud products retrieved from observations by these space-borne instruments is large. This precludes retrieval of the clouds' vertical profile with the accuracy needed for climate-relevant process and feedback analysis. This drawback does not exist for active sounders, which measure altitude-resolved profiles of backscattered radiation with an accuracy on the order of 1−100 meters.
All active instruments share the same measuring principle: a short pulse of laser or radar electromagnetic radiation is sent into the atmosphere, and the time-resolved backscatter signal is collected by the telescope and registered in one or several receiver channels. However, the wavelength, pulse energy, pulse repetition frequency (PRF), telescope diameter, orbit, detector, and optical filtering are not the same for any pair of instruments. These differences define each active instrument's capability of detecting atmospheric aerosols and/or clouds for a given atmospheric situation and observation conditions (day, night, averaging distance). At the same time, there is an obvious need to ensure the continuity of global space-borne lidar measurements (see Fig. 1 for an illustration of the currently operating lidars CALIOP and ALADIN and the future lidar ATLID). In merging different satellite data, the difficulty is to build a multi-lidar record accurate enough to constrain predictions of how clouds evolve as the climate warms.
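The shared ranging principle described above can be stated in one line: the distance to the scattering volume follows from the round-trip time of the emitted pulse.

```python
# Sketch of the common active-sounder ranging principle (illustrative):
# range = c * t / 2, where t is the pulse round-trip time.
C = 299_792_458.0  # speed of light in vacuum, m/s

def backscatter_range_m(round_trip_time_s):
    """Range of the scattering volume (m) for a given round-trip time (s)."""
    return C * round_trip_time_s / 2.0

# A return arriving 10 microseconds after emission comes from ~1.5 km away.
print(backscatter_range_m(10e-6))
```

The 1−100 m vertical accuracy quoted above corresponds to timing the return signal to within tens of nanoseconds to a microsecond.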
In this work, we discuss an approach to merging the measurements performed by the relatively young space-borne lidar ALADIN/Aeolus, which has been orbiting the Earth since August 2018 and operates at a wavelength of 355 nm, with the measurements performed since 2006 by the CALIPSO lidar, which operates at 532 nm and is near the end of its lifetime. Even though the primary goal of ALADIN is wind detection, its products include profiles of atmospheric optical properties (aerosols/clouds). As mentioned before, merging the cloud data from a pair of spaceborne lidars is not trivial (see Fig. 2 for differences in observation geometry and local time, and consider the differences in wavelength, detector, and measuring techniques).
The planned study consists of the following steps:
(a) developing a cloud layer detection method for ALADIN measurements, which complies with CALIPSO cloud layer detection;
(b) comparing/validating the resulting cloud ALADIN product with the well-established CALIOP/CALIPSO cloud data set;
(c) developing an algorithm for merging the CALIOP and ALADIN cloud datasets;
(d) applying the merging algorithm to CALIOP and ALADIN data and building a continuous cloud profile record;
(e) adapting this approach to future missions (e.g. ATLID/EarthCARE).
In the presentation, we show the results of preliminary analysis performed for the first two steps and discuss the future development of this approach.
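At its simplest, step (a) can be sketched as threshold-based layer detection on a backscatter profile. The function below is a hypothetical illustration of the idea only; a CALIPSO-compliant scheme involves altitude-dependent thresholds, noise models and multi-resolution averaging:

```python
import numpy as np

# Hypothetical sketch of cloud layer detection: flag contiguous runs of
# range bins whose backscatter exceeds a (here constant) threshold.
def detect_cloud_layers(backscatter, threshold):
    """Return (start, end) bin-index pairs of contiguous above-threshold bins."""
    cloudy = np.asarray(backscatter) > threshold
    padded = np.concatenate(([0], cloudy.astype(int), [0]))
    edges = np.flatnonzero(np.diff(padded))  # rising/falling edge positions
    starts, ends = edges[::2], edges[1::2] - 1
    return list(zip(starts.tolist(), ends.tolist()))

# Two layers: bins 1-2 and bin 4 exceed the (arbitrary) threshold of 1.0.
print(detect_cloud_layers([0.1, 2.0, 3.0, 0.2, 1.5, 0.1], 1.0))  # -> [(1, 2), (4, 4)]
```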
As part of the Joint Aeolus Tropical Atlantic Campaign (JATAC), radiosondes were launched twice a day from Sal Airport in Cape Verde over a period of 26 days, from 04 to 30 September 2021. Among a total of 38 launches, 10 correspond to nearby Aeolus overpasses. Most of the data were sent to the Global Telecommunication System (GTS) for assimilation in NWP models. The radiosonde temperature, humidity and wind profiles reveal three different dust outbreaks as well as the passage of tropical cyclones that crossed Sal Island. The 12 radiosonde profiles were vertically aggregated, projected along the horizontal line of sight (HLOS) of Aeolus and compared with the Aeolus measurements, with a collocation criterion ranging from 120 km to 220 km, depending on the orbital node. Error rejection thresholds are identical to those used at ECMWF: the threshold for Mie-cloudy winds is 5 m s−1, and for Rayleigh winds it is 12 m s−1 above 200 hPa and 8.6 m s−1 below 200 hPa. The radiosonde validation of the Aeolus winds revealed that the quality of the data is closely related to the atmospheric cloud and dust conditions, with the Rayleigh-clear wind values showing larger errors in the presence of aerosols or clouds, which can possibly be attributed to the decreasing atmospheric path signal and the attenuation effects of clouds/aerosols. Rayleigh winds have a systematic error (bias) of 0.71 m s−1 and a random deviation of 4.48 m s−1 (scaled median absolute deviation). Mie-cloudy winds were more accurate, with a systematic error of 0.71 m s−1 and a random deviation of 1.9 m s−1. The statistics obtained from the radiosonde comparisons show lower systematic errors but similar random errors compared to other CAL/VAL studies. Both the Rayleigh and Mie channels are close to the Aeolus mission requirement of a systematic error below 0.7 m s−1, but the random errors are still higher than required.
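The two statistics quoted above can be sketched as follows (sample values are illustrative only): the systematic error is the mean Aeolus-minus-radiosonde HLOS difference, and the random error is commonly quantified by the scaled median absolute deviation, where the factor 1.4826 makes the MAD consistent with the standard deviation of a Gaussian distribution:

```python
import numpy as np

# Sketch of the validation statistics (illustrative sample values):
# bias = mean(Aeolus - radiosonde); random error = 1.4826 * MAD.
def bias_and_scaled_mad(hlos_aeolus, hlos_sonde):
    diff = np.asarray(hlos_aeolus) - np.asarray(hlos_sonde)
    bias = float(np.mean(diff))
    scaled_mad = float(1.4826 * np.median(np.abs(diff - np.median(diff))))
    return bias, scaled_mad

bias, smad = bias_and_scaled_mad([10.5, 9.0, 12.0, 8.0], [10.0, 9.5, 11.0, 8.0])
print(bias, smad)
```

The scaled MAD is preferred over the standard deviation here because it is robust against the occasional gross outliers that survive the rejection thresholds.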
Optical properties of Californian aging smoke plume retrieved by Aeolus L2A algorithms during long-range transport above Atlantic
Dimitri Trapon¹, Adrien Lacour¹, Alain Dabas¹, Ibrahim Seck¹, Holger Baars², Frithjof Ehlers³, Dorit Huber⁴, ¹CNRM / Meteo France, France, ²Leibniz Institute for Tropospheric Research, Leipzig, Germany, ³ESA-ESTEC, Keplerlaan 1, 2201 AZ Noordwijk, Netherlands; ⁴DoRIT, Munich, Germany.
The ADM-Aeolus satellite of the European Space Agency has been operating the Doppler lidar instrument ALADIN (Atmospheric LAser Doppler INstrument) since 2018. It is the first High Spectral Resolution Lidar (HSRL) operating in the ultraviolet (λ = 354.8 nm) in space. Despite being designed for wind profile measurement, the Rayleigh and Mie channels of ALADIN can be used to directly retrieve the particle-only co-polar extinction and backscatter coefficients. Elevated aerosol layers such as the Saharan Air Layer (SAL), Polar Stratospheric Clouds (PSC) or biomass burning (BB) smoke can then be observed using the L2A Aerosol and Optical Properties product.
In early September 2020, massive smoke plumes from Californian wildfires were transported east across the United States and the Atlantic Ocean and were observed by various instruments such as Copernicus Sentinel-5P TROPOMI. Smoke residuals were also compared to ground-based lidar observations above western Europe, confirming the long-range transport above the Atlantic. Aeolus observed the Californian smoke directly over several orbits in the course of a week. The presentation will show the output of the main L2A algorithms and highlight observations of the smoke plume's optical characteristics. The co-polar extinction and backscatter coefficients calculated by the Standard Correct Algorithm (SCA) [1] are analysed in parallel with denoised retrievals from a newly developed scheme based on physically constrained minimization, named Maximum Likelihood Estimation (MLE) [2]. The particle attenuated backscatter will also be illustrated and compared to the NASA CALIPSO CALIOP product, as will the depolarization ratio, illustrating the role of black carbon (BC) and ice-nucleating particles (INPs) in fresh smoke as the plume reaches the upper troposphere and gets contaminated by ice crystals and water droplets.
[1] Flament, T., et al., Aeolus L2A Aerosol Optical Properties Product: Standard Correct Algorithm and Mie Correct Algorithm, Atmos. Meas. Tech. Discuss. [preprint], https://doi.org/10.5194/amt-2021-181, in review, 2021.
[2] Ehlers, F., et al., Optimization of Aeolus Optical Properties Products by Maximum-Likelihood Estimation, Atmos. Meas. Tech. Discuss. [preprint], https://doi.org/10.5194/amt-2021-212, in review, 2021.
During the first three years of the Aeolus mission, the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt e.V., DLR) prepared, implemented and executed four airborne validation campaigns with support from ESA. After performing three campaigns in Central Europe and the North Atlantic region around Iceland in 2018 and 2019, DLR carried out the Aeolus VAlidation Through Airborne LidaRs in the Tropics (AVATART) campaign on Sal Island, Cape Verde, in September 2021, as part of the Joint Aeolus Tropical Atlantic Campaign (JATAC). Already in the 10 years before launch, various ground-based and airborne validation campaigns had been performed in support of Aeolus operations and processor development.
These campaigns deployed two lidar instruments: a scanning, high-accuracy coherent Doppler wind lidar (DWL) system operating at a 2-µm wavelength, which acted as a reference by providing wind vector profiles for the Aeolus validation, and the ALADIN Airborne Demonstrator (A2D). Being the prototype of the direct-detection DWL ALADIN (Atmospheric LAser Doppler INstrument) on board Aeolus, the A2D likewise consists of a frequency-stabilised ultraviolet laser, a Cassegrain telescope and the same dual-channel optical and electronic receiver design to measure wind speeds along the instrument's line of sight by analysing particulate (Mie) and molecular (Rayleigh) backscatter signals. The combination of both DWLs made it possible to explore the ALADIN-specific wind measurement technology under various atmospheric backscatter signal conditions. For the airborne measurements, the two instruments were operated in parallel on the DLR Falcon research aircraft in a downward-looking configuration.
In the framework of the post-launch airborne validation campaigns, a total of 190 flight hours was spent covering a distance of 26,000 km along the Aeolus track during 31 coordinated underflights in different geographical regions and operational states of the mission. In the tropics, where Aeolus measurements are especially important for improving numerical weather prediction, AVATART contributed 11 flights, covering nearly 11,000 km of the satellite measurement swath around the Cape Verde archipelago. The latter was chosen as a base because it allowed observation of the tropical dynamics of the Saharan air layer, the African Easterly Jet, the Subtropical Jet and the Intertropical Convergence Zone. In the context of JATAC, the campaign aimed to study the impact of atmospheric aerosol on the operational Rayleigh and Mie wind products of Aeolus, specifically the potential errors that arise from crosstalk between the two complementary receiver channels.
Thanks to the high degree of commonality with the satellite instrument in terms of design and measurement principle, the collocated A2D and 2-µm wind observations acquired during the campaigns provide valuable information on the optimization of the Aeolus wind retrieval and related quality control algorithms. For example, during JATAC the A2D, unlike ALADIN, delivered a broad vertical and horizontal coverage of Mie winds across the Saharan air layer, whereas A2D Rayleigh winds measured in this region, which are affected by Mie contamination through crosstalk, are effectively filtered out. Hence, refinement of the Aeolus wind processor based on the example of the A2D wind retrieval may improve the Aeolus wind data coverage and accuracy.
The paper gives an overview of mission-relevant results, from pre-launch campaigns to comparative wind observations of Aeolus and the DLR airborne DWLs, with a focus on recent findings. With Aeolus-2, currently being prepared as a satellite proposal by ESA and EUMETSAT, on the horizon, another successful mission support effort with an airborne demonstrator can build on this heritage and a correspondingly modified second-generation A2D.
The convective system (CS) is one of the main dynamical meteorological features over the tropical and subtropical zones. Some CS types, including mesoscale CSs and supercell convective storms, may be classified as natural hazards, since they produce extreme weather events like heavy rainfall, strong surface winds, and intense lightning. These hazards can significantly impact human life and economic activities. Heavy rain in a short time over some regions may cause severe flash floods, while intense lightning may kill people and damage infrastructure. Likewise, strong surface winds may significantly impact many onshore, offshore, and coastal activities such as energy production (increasingly related to wind power), marine transportation, and all types of aeronautic activities.
The observation and nowcasting of deep convection have improved significantly in recent years, thanks to the GEOstationary (GEO) satellites, including Meteosat, GOES, Himawari, and Gaofen, covering Europe, Africa, America, and Asia-Pacific, respectively. In particular, the new GEO generation can image large regions at high temporal and spatial resolution (e.g., 0.5-2 km pixel spacing every 5 minutes). However, the observation and prediction of the extreme weather events associated with deep convection remain a big challenge due to the lack of additional in-situ and/or remote sensing data describing the CS dynamics. This is particularly significant over the Gulf of Guinea, in contrast to the Gulf of Mexico, where there are very few moored buoys, radio-soundings, or weather stations to observe the CS vertical dynamics associated with the observed surface convective wind gusts.
Some previous studies [1-3] indicated that the collocation of GEOstationary and Low-Earth Orbit (LEO) satellite data makes it possible to observe deep convective clouds and the associated vertical and horizontal dynamics. Figure 1 illustrates the observation of a deep convective cloud at the tropopause altitude by the Meteosat GEO satellite, intense downdrafts at mid-levels (with corresponding updrafts balancing the CS internal dynamics) by the Aeolus lidar in LEO, and a surface wind pattern at the sea surface by the Sentinel-1 (Synthetic Aperture Radar) LEO satellite. The results in [2-3] showed that the three features observed by Meteosat, Aeolus, and Sentinel-1 matched in location and observation time. In particular, the wind hot spots (15-25 m/s) correspond to the coldest cloud patterns (200-210 K brightness temperature) and intense downdrafts.
The collocation of GEO and LEO satellites offers a significant advantage for a deeper understanding of the relationship between deep convective clouds and dynamics within the ITCZ area, particularly over the Gulf of Guinea or in the middle Tropical Atlantic, where there are few observations. In addition, such collocation should lead to combining GEO and LEO data so that they can be assimilated as a single feature, rather than as a collection of separate satellite data, within Numerical Weather Prediction (NWP) models.
Moreover, GEO images may be used as input data, combined with machine learning / deep learning, to predict surface wind gusts. The LEO data, including Sentinel-1, SMAP, ASCAT, WindSat, Aeolus, etc., should then be used for training and validating the learning models.
[1] T. V. La, C. Messager, M. Honnorat, R. Sahl, A. Khenchaf, C. Channelliere, and P. Lattes, “Use of Sentinel-1 C-Band SAR Images for Convective System Surface Wind Pattern Detection,” J. Appl. Meteor. Climatol., vol. 59, no. 8, pp. 1321–1332, Aug. 2020.
[2] T. V. La and C. Messager, “Convective system dynamics viewed in 3D over the oceans,” Geophysical Research Letters, vol. 48, pp. e2021GL092397, Feb. 2021.
[3] T. V. La and C. Messager, "Convective System Observations by LEO and GEO Satellites in Combination," IEEE J STARS, doi: 10.1109/JSTARS.2021.3127401.
The resolution of regional numerical weather prediction (NWP) models has been increased continuously over the past decades, thanks in part to improved computational capabilities. At such small scales, the fast weather evolution is driven by wind rather than by temperature and pressure. Over the ocean, where global NWP models are not able to resolve wind scales below 100-150 km, regional models provide wind dynamics and variance equivalent to scales of 25 km or smaller. However, although this variance is realistic, it often results in spurious circulation (e.g., moist convection systems), thus misleading weather forecasts and their interpretation. An accurate and consistent initialization of the evolution of the three-dimensional (3-D) wind structure is therefore essential in regional weather analysis. The present study is carried out in the framework of the EUMETSAT research fellowship project entitled WIND-4D, which focuses on a comprehensive characterization of the spatial scales and measurement errors of the different operational space-borne wind products currently used and/or planned to be used in regional models. In addition, a thorough investigation and improvement of the 4-D (including time) consistency between different horizontal and/or vertical satellite wind products will be carried out. Such products include the Ocean and Sea Ice Satellite Application Facility (OSI SAF) scatterometer-derived sea-surface wind fields, the Nowcasting and Very Short-Range Forecasting (NWC) SAF Atmospheric Motion Vectors (AMVs), and Aeolus and/or Infrared Atmospheric Sounding Interferometer (IASI) wind profiles. Densely sampled aircraft wind profiles (Mode-S) will be used to verify and characterize the satellite products. Moreover, data assimilation experiments with the consistent datasets in the HARMONIE-AROME regional model will be carried out in two different regions, i.e., the Netherlands and the Iberian Peninsula regional configurations.
Regarding the characterization of the spatial scales and measurement errors, the widely used triple collocation (TC) analysis is further developed and adapted for the purpose of this project.
After testing the TC analysis on surface winds using scatterometer measurements over the ocean, buoy observations and NWP output, we extend the analysis to vertical wind profiles, more specifically to Aeolus, Mode-S and NWP output. Aeolus winds are collocated with Mode-S observations and ECMWF model output over a period of six months over the Mode-S domain in Western Europe. The spectral integration method and the spatial variances method are used to estimate the representativeness errors of the collocated datasets. The TC analysis is then exploited to characterize the errors of the different sources at different scales. The analysis is performed at different altitudes for both the Mie and the Rayleigh channels.
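The core of a triple collocation analysis can be sketched in a few lines. The version below is a deliberately minimal form, assuming three collocated datasets observing a common truth with mutually uncorrelated errors, and neglecting the calibration and representativeness terms that the full analysis accounts for:

```python
import numpy as np

# Minimal triple collocation sketch: the error variance of each system
# follows from cross-covariances of the pairwise differences, because the
# common truth cancels in x - y and x - z.
def tc_error_variances(x, y, z):
    """Estimate the error variance of each of three collocated datasets."""
    x, y, z = [np.asarray(a, float) - np.mean(a) for a in (x, y, z)]
    var_x = float(np.mean((x - y) * (x - z)))
    var_y = float(np.mean((y - x) * (y - z)))
    var_z = float(np.mean((z - x) * (z - y)))
    return var_x, var_y, var_z

# Synthetic check: recover known error variances from simulated wind data
# (dataset names are illustrative placeholders, not real error figures).
rng = np.random.default_rng(0)
truth = rng.normal(0.0, 3.0, 200_000)
aeolus = truth + rng.normal(0.0, 1.5, truth.size)
mode_s = truth + rng.normal(0.0, 0.5, truth.size)
model = truth + rng.normal(0.0, 1.0, truth.size)
print(tc_error_variances(aeolus, mode_s, model))  # ~ (2.25, 0.25, 1.0)
```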
Finally, experimental 4-D wind data from IASI are analyzed to evaluate their possible use as an additional source of wind observations for regional data assimilation.
Improvements in aerosol observation in recent years have enabled some weather forecasting centres to offer projections of aerosol fields up to five days in advance. These data, primarily aerosol optical depth (AOD) measurements, are used at ECMWF in the Copernicus Atmosphere Monitoring Service (CAMS) for atmospheric composition forecasts.
Recent efforts in aerosol assimilation research have shown that lidar backscatter data have the potential to augment AOD information. Lidar backscatter allows profiling of aerosols, leading to improvements in identifying the vertical structure of aerosol fields and allowing a better forecast of the plume. The ALADIN instrument on board ESA's Aeolus mission was primarily designed for measuring wind, not for aerosol observations. Despite not being optimised for aerosol science, the assimilation of particle backscatter is possible, and with some screening, information on the aerosol vertical profile can be gleaned through post-processing. This extraction of aerosol information additionally serves as precursor work for the EarthCARE mission, which, like Aeolus, will host a UV-wavelength lidar operating at 355 nm, called ATLID. Unlike ALADIN, the design of ATLID is optimised to provide vertical profiles of aerosols and thin clouds. ATLID will also be equipped with a depolarisation channel, further strengthening its ability to return information on the aerosol vertical structure.
This talk will present the latest work from ECMWF's contribution to the Aeolus Aerosol Assimilation in the DISC (A3D) contract. This project is a continuation of previous work carried out at ECMWF, the A3S project, which successfully demonstrated the feasibility of assimilating the lidar backscatter signal at 355 nm using demonstration datasets. We will discuss findings on the quality of Aeolus L2A particle backscatter data in the framework of atmospheric aerosol monitoring (such as that provided by CAMS) and its assimilation into composition forecasting experiments. The verification of these experiments against independently measured lidar profiles and other aerosol observations, such as ground-based AERONET AOD, will be presented.
VirES for Aeolus (https://aeolus.services) provides a highly interactive data manipulation and retrieval web interface for the official Aeolus data products. It includes multi-dimensional visualization, interactive plotting, and analysis. VirES stands for a modern concept of extended access to Earth Observation (EO) data. It supports novel ways of data discovery, visualization, filtering, selection, analysis, snapshotting, and downloading.
The service has been operated for ESA by EOX since the satellite launch in 2018 providing easy insight and analysis of the data. During this time the service has been evolving based on user feedback and keeping up with the science done around the mission.
Although the VirES service provides great insight into the data and has been welcomed by the scientific community around the Aeolus mission, it has become apparent that an additional more flexible environment for sophisticated data interaction would be beneficial. The environment would allow higher level data manipulation and help the community to work and collaborate on the implementation of algorithms to further exploit the data from the Aeolus mission.
The VirES service will be extended with a Virtual Research Environment (VRE) (https://vre.aeolus.services) at the beginning of 2022 in order to provide these new data manipulation capabilities to users. To prepare the release, requirements and expectations from potential users were collected during an initial design phase, which helped to adapt the VRE to individual user demands.
In order to allow users to achieve the full potential of the VRE, it is important to provide an extensive set of examples and documentation (https://notebooks.aeolus.services). To help in this activity, as well as to provide support during the design phase, a scientific partner team with LMU and DLR has been established.
The objective of the VirES for Aeolus service, now including the VRE, is to simplify working with Aeolus data, so that people unfamiliar with the mission can work with the data more easily and quickly, while also providing powerful tools to experienced users.
This poster will present the current status of the VirES/VRE project, its design and outlook for 2022 when it will enter its operational phase.
The JATAC campaign in September 2021 on and above the Cape Verde Islands has resulted in a large dataset of in-situ and remote measurements. In addition to the calibration/validation of ESA’s Aeolus ALADIN, the campaign also featured secondary scientific objectives related to climate change. The atmosphere above the Atlantic Ocean off the coast of West Africa is ideal for studying the Saharan Aerosol Layer (SAL), the long-range transport of dust, and the regional influence of SAL aerosols on the climate.
We instrumented a light aircraft (Advantic WT-10) for in-situ aerosol characterization. Ten flights were conducted over the Atlantic Ocean, up to more than 3000 m above sea level, during two intense dust transport events. Airborne measurements were supported by the ground-based long-term deployment of PollyXT, EVE and Halo lidars and an AERONET sun photometer. The lidars were used to plan the flights in great detail, complementing the dedicated WRF-Chem and CAMS numerical weather and dust simulations used for forecasting.
The particle light absorption coefficient was determined at three different wavelengths with Continuous Light Absorption Photometers (CLAP). They were calibrated with the dual wavelength photo-thermal interferometric measurement of the aerosol light-absorption coefficient in the laboratory. The particle size distributions above 0.3 µm diameter were measured with two Grimm 11-D Optical Particle Size Spectrometers (OPSS). These measurements were conducted separately for the fine aerosol fraction and the enriched coarse fraction using an isokinetic inlet and a pseudo-virtual impactor, respectively.
The aerosol light scattering and backscattering coefficients were measured with an Ecotech Aurora 4000 nephelometer. The instrument used a separate isokinetic inlet and was calibrated with CO2 prior to the campaign, with the calibration validated afterwards. We measured the total and diffuse solar irradiance with a Delta-T SPN1 pyranometer. CO2 concentration, temperature, aircraft GPS position and altitude, and air and ground speed were also measured.
The first event, at the beginning of the campaign, proved to be a Saharan dust layer very homogeneous in space (horizontally and vertically) and time. The second event, towards the end of the campaign, featured strong horizontal gradients in aerosol composition and concentration, and layering in the vertical direction. These layers were often less than 100 m thick, separated by layers of dust-free air.
Complex mixtures of aerosols in the outflow of Saharan dust over the Atlantic Ocean in the tropics will be characterized. We will show the in-situ atmospheric heating/cooling rate and provide insight into the regional and local effects of this heating of the dust layers. These measurements will support research on the evolution, dynamics, and predictability of tropical weather systems and provide input into, and verification of, climate models.
The Joint Aeolus – Tropical Atlantic Campaign (JATAC) was finally performed in summer/autumn 2021 on the Cabo Verde Islands. Next to an impressive airborne fleet stationed on the island of Sal, Cabo Verde, intense ground-based and airborne in-situ measurements took place on and above Mindelo on the island of São Vicente, Cabo Verde.
After a dedicated orbit change in June 2021, the measurements of ESA’s Aeolus satellite were performed directly over Mindelo each Friday evening, providing a prime objective for the research activities. Furthermore, the campaign is dedicated to science studies for, e.g., the EarthCARE and WIVERN missions.
At the Ocean Science Centre Mindelo (OSCM), a full ACTRIS remote sensing supersite has been set up with instrumentation from different institutions since June 2021. The instrumentation includes a multiwavelength Raman polarization lidar (PollyXT), an AERONET sun photometer, a scanning Doppler wind lidar, a microwave radiometer, and a cloud radar belonging to ESA’s fiducial reference network (FRM4Radar). Next to these aerosol, cloud, and wind remote sensing facilities, ESA’s novel reference lidar system EVE, a combined linear/circular polarization lidar with Raman capabilities, was deployed. It can mimic the observations of the space-borne lidar ALADIN onboard Aeolus. In addition to this ground-based equipment, a light-weight airplane was stationed at the airport of São Vicente during the intensive campaign in September 2021, performing in-situ measurements of the aerosol layers around the island up to an altitude of about 3 km.
During this intensive period in September 2021, very different aerosol conditions were observed above and around Mindelo. Usually, the marine boundary layer, up to an altitude of about 1 km, was topped by a layer of Saharan dust reaching up to 6 km altitude. The amount and height of the Saharan dust varied during the 3-week campaign, providing a wide variety of aerosol conditions. Finally, volcanic aerosol from the La Palma volcano was observed on São Vicente in the local boundary layer and partly above.
In this presentation, we present first results concerning the validation of the Aeolus products as well as closure studies of the aerosol properties around the island of São Vicente.
Aeolus aerosol products have been intensively validated for the direct overpasses with the Aeolus reference system EVE and the PollyXT lidar, also allowing us to investigate polarization effects of oriented particles and the wavelength dependence of the observed particle properties. The intercomparison between the ground-based lidars yielded excellent agreement, giving confidence that they can act as ground truth for Aeolus. First comparisons to the aerosol products of Aeolus confirmed that the backscatter coefficient can be well retrieved in the Saharan dust layer. However, due to the low signal return, the lowermost aerosol layers below 2-3 km could partly not be resolved with the current operational algorithm suite.
Additionally, wind observations from the ground-based scanning Doppler lidar will be used to validate the Aeolus wind products above the island. Of special interest is whether Aeolus is able to detect so-called Mie winds in the dense Saharan dust layers.
The airborne in-situ measurements revealed that, in the beginning of the campaign, the Saharan dust layer was very homogeneous in space (horizontally and vertically) and time, while towards the end of the campaign strong horizontal and vertical gradients in aerosol composition and concentration were found.
As a next step, closure studies between the airborne in-situ and the ground-based measurements will be performed, providing detailed insight into the microphysical aerosol properties. These studies will help to understand the representativeness of the ground-based supersite in the context of the regional aerosol distribution. The results will thus give valuable information for validation activities for Aeolus, but also for other missions like EarthCARE, in a region of the world where measurements are sparse.
In this context, another intensive ASKOS campaign is planned for spring/summer 2022 on the São Vicente Island, comprising a bigger instrument suite and covering the prime Saharan dust outbreak season.
The ASKOS team:
Holger Baars(1), Eleni Marinou(2), Peristera Paschou(2), Griša Močnik(3), Nikos Siomos(2), Ronny Engelmann(1), Annett Skupin(1), Johannes Bühl(1), Razvan Pirloaga(4), Cordula Zenk(5),(7), Samira Moussa Idrissa(6), Daniel Tetteh Quaye(6), Desire Degbe Fiogbe Attannon(6), Eder Silva(7), Elizandro Rodrigues(7), Pericles Silva(7), Sofia Gómez Maqueo Anaya(1), Henriette Gebauer(1), Martin Radenz(1), Moritz Haarig(1), Athina Floutsi(1), Albert Ansmann(1), Bogdan Antonescu(4), Dragos Ene(4), Lukas Pfitzenmaier(8), Ewan O’ Connor(9), Patric Seifert(1), Ioanna Mavropoulou(2), Thanasis Georgiou(2), Christos Spirou(2), Eleni Drakaki(2), Anna Kampouri(2), Ioanna Tsikoudi(2), Antonis Gkikas(2), Emmanouil Proestakis(2), Luke Jones(10), Luka Drinovec(3), Uroš Jagodič(11), Blaž Žibert(11), Matevž Lenarčič(12), Anca Nemuc(4), Birgit Heese(1), Dietrich Althausen(1), Angela Benedetti(10), Ulla Wandinger(1), Doina Nicolae(4), Pavlos Kollias(2), Vassilis Amiridis(2), Rob Koopman(13), Jonas Von Bismarck(13), Thorsten Fehr(14).
The ASKOS institutions:
1 Leibniz Institute for Tropospheric Research (TROPOS), Leipzig, Germany
2 National Observatory of Athens (NOA), Athens, Greece
3 University of Nova Gorica, Ajdovščina, Slovenia
4 National Institute of Research & Development for Optoelectronics, INOE, Magurele, Romania
5 GEOMAR Helmholtz Centre for Ocean Research Kiel, Kiel, Germany
6 Atlantic Technical University (UTA), Cape Verde
7 Ocean Science Centre Mindelo (OSCM), Mindelo, Cape Verde
8 University of Cologne, Cologne, Germany
9 Finnish Meteorological Institute (FMI), Finland
10 European Centre for Medium-Range Weather Forecasts (ECMWF), Reading, UK
11 Haze Instruments d.o.o., Ljubljana, Slovenia
12 Aerovizija d.o.o., Vojsko, Slovenia
13 European Space Agency (ESA-ESRIN), Frascati, Italy
14 European Space Agency (ESA), Noordwijk, The Netherlands
The Atmospheric Laser Doppler Instrument (ALADIN) onboard Aeolus is the world’s first space-based Doppler wind lidar acquiring global wind profiles. ALADIN operates at 355 nm and its design is optimized for wind observations; however, cloud and aerosol information can also be retrieved from the attenuated backscatter signals. Using a variation of the High Spectral Resolution Lidar (HSRL) technique, two main detection channels are used: a ‘Mie’ channel and a ‘Rayleigh’ channel. ATLID (Atmospheric Lidar) is the lidar to be embarked on the Earth Cloud, Aerosol and Radiation Explorer (EarthCARE) mission. ATLID is an HSRL system optimized exclusively for cloud and aerosol observations.
Even though ALADIN has a lower spatial resolution, a lower signal-to-noise ratio (SNR), and no depolarization channel in comparison to ATLID, we can still adapt the ATLID L2 retrieval algorithms developed for the EarthCARE mission, the ATLID feature mask (A-FM) and ATLID profile retrieval (A-PRO) algorithms, to ALADIN data. The algorithms are being implemented in the operational Aeolus L2A processor (as AEL-FM and AEL-PRO). AEL-FM and AEL-PRO focus on making accurate retrievals of cloud and aerosol extinction and backscatter profiles, specifically addressing the low-SNR nature of the lidar signals and the need for intelligent binning/averaging of the data. AEL-FM and AEL-PRO use the attenuated Mie and Rayleigh backscatter signals derived from the Mie spectrometer measurements only. Therefore, we also developed an algorithm to calibrate the Mie and Rayleigh signals and perform the cross-talk correction.
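The cross-talk correction mentioned above can be illustrated as the inversion of a small channel-coupling system. The sketch below is a strong simplification and not the operational AEL calibration algorithm; the coupling coefficients and function names are hypothetical, chosen only to show the principle of unmixing the two detection channels:

```python
import numpy as np

# Hypothetical channel-coupling matrix (values are illustrative only):
# row 0 = Mie channel response, row 1 = Rayleigh channel response.
C = np.array([[1.00, 0.15],   # Mie channel sees Mie signal + Rayleigh cross-talk
              [0.05, 1.00]])  # Rayleigh channel sees Mie cross-talk + Rayleigh signal

def correct_crosstalk(s_mie, s_ray, coupling=C):
    """Recover the pure attenuated Mie and Rayleigh backscatter profiles
    by inverting the 2x2 coupling between the two detection channels."""
    measured = np.vstack([s_mie, s_ray])        # shape (2, n_bins)
    pure = np.linalg.solve(coupling, measured)  # solve C @ pure = measured
    return pure[0], pure[1]

# Round-trip check on a synthetic 3-bin profile
true_mie = np.array([2.0, 1.0, 0.2])
true_ray = np.array([1.0, 1.5, 1.8])
measured = C @ np.vstack([true_mie, true_ray])
rec_mie, rec_ray = correct_crosstalk(measured[0], measured[1])
```

In practice the coupling coefficients would come from dedicated instrument calibration rather than being fixed constants, and noise propagation through the inversion has to be accounted for.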
We have tested AEL-FM and AEL-PRO using Aeolus L1b data for a large number of orbits. So far the AEL-PRO extinction profiles have been compared to CALIPSO retrievals for biomass burning and dust aerosol cases. In this presentation, we will focus on the AEL-FM and AEL-PRO products and the comparison to CALIPSO and/or AERONET data for various aerosol, cloud and Polar Stratospheric Cloud cases.
Australian “Black Summer” megafires have resulted in an unprecedented and persistent perturbation of stratospheric aerosol and gaseous composition, radiative balance and dynamical circulation (Khaykin et al., 2020). One of the most striking repercussions of this event was the generation of a synoptic-scale anticyclone that formed around a massive cloud of smoke in the stratosphere and persisted for three months. This phenomenon, termed the Smoke-Charged Vortex (SCV), acted to confine the fire plume, maintaining absorptive smoke aerosols at high concentration, which led to a rapid solar-driven ascent of combustion products up to 35 km altitude.
The SCV anticyclone was identified by the ECMWF Integrated Forecasting System (IFS) through assimilation of satellite temperature profiling, in particular the GNSS radio occultations (RO). Since the SCV occurrence was largely limited to the southern extratropics, where the meteorological radiosounding network is particularly sparse, there exist very few observations of wind velocity inside the anticyclone. The ESA Aeolus space-borne Doppler lidar is a unique sensor to provide the direct measurements of this atmospheric phenomenon at full scale.
Here we present the Aeolus observational perspective on the SCV during its early stage, using L2B Rayleigh and Mie wind profiles compared to ECMWF ERA5 and IFS (re)analyses. We also use the Aeolus L2A cloud/aerosol product to identify the associated smoke cloud in comparison with collocated CALIPSO satellite lidar observations. By analyzing the wind and temperature variances derived from Aeolus and GNSS-RO, respectively, we demonstrate and discuss the generation of gravity waves by the SCV anticyclone and their vertical propagation.
References
Khaykin, S., Legras, B., Bucci, S., Sellitto, P., Isaksen, L., Tence, F., Bekki, S., Bourassa, A., Rieger, L., Zawada, D., Jumelet, J., and Godin-Beekmann, S.: The 2019/20 Australian wildfires generated a persistent smoke-charged vortex rising up to 35 km altitude, Commun. Earth Environ., 1, 22, 2020.
With Aeolus now in its fourth year of successful operation, valuable wind measurement data is still being provided by its instrument ALADIN (Atmospheric LAser Doppler Instrument) to the Global Observing System (GOS) with a significant positive impact on numerical weather prediction (NWP). This important contribution throughout the mission was made possible by continuous improvements to the data processors in updated baselines that accompanied the entire mission, leading, for example, to the implementation of the essential bias correction scheme. These upgrades were based on a continuous validation of the Aeolus measurements using NWP model data or reference instrument measurements to determine their systematic and random errors.
To monitor the changes introduced by the various processor updates and their influence on the data processed with these new baselines, the radar wind profiler network of the German weather service (DWD) makes an important contribution, specifically for the region of Germany.
The network, consisting of four UHF radar wind profilers operated at 482 MHz, provides wind observations in clear air as well as in particle-laden regions up to 16 km altitude with high accuracy on a 24/7 basis. Covering six Aeolus orbits per week, the four sites provide enough data to create long-term statistics of Aeolus observation biases and random errors, also revealing possible instrument degradation.
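The kind of long-term statistics described here can be sketched as follows. This is a minimal illustration, not DWD's actual processing chain; the function name and the use of the scaled median absolute deviation as a robust random-error estimate are assumptions for the sketch:

```python
import numpy as np

def validation_stats(aeolus_hlos, reference_hlos):
    """Systematic error (mean difference) and a robust random-error
    estimate (scaled median absolute deviation) of Aeolus winds
    against collocated reference observations."""
    diff = np.asarray(aeolus_hlos) - np.asarray(reference_hlos)
    bias = float(diff.mean())
    smad = float(1.4826 * np.median(np.abs(diff - np.median(diff))))
    return bias, smad

# Tiny synthetic example: Aeolus reads 0.5 m/s high on every collocation
bias, smad = validation_stats([1.5, 2.5, 3.5], [1.0, 2.0, 3.0])
```

Real validation additionally requires collocation criteria (distance, time, height matching) and quality screening of both data sources before such statistics are meaningful.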
While this performance monitoring is based on operational near real-time (NRT) data from Aeolus, the radar wind profiler measurements are also used to analyze reprocessed data sets and their improvements compared to NRT data. This provides important insights for future reprocessing to maximize the quality of Aeolus measurements based on further processor improvements.
Since these newly processed data represent a homogeneous data set, additional investigations of the dependence of the bias on, e.g., height, range bin thickness and wind speed were performed. By processing with the same baseline, influences of different processor versions on data quality can be excluded.
The presented work gives an overview of the long-term validation of Aeolus wind measurements above Germany based on radar wind profiler observations. The analysis of systematic and random errors of the entire mission as well as comparisons between the operational and reprocessed data sets are shown.
Background
Aquatic land cover comprises the land cover types that are significantly influenced by the presence of water over an extensive period of the year. Monitoring Global Aquatic Land Cover (GALC) types plays an essential role in preserving aquatic ecosystems and maintaining the ecosystem services they provide for humans. Currently, a number of GALC datasets have been produced thanks to the availability of free and open Earth Observation (EO) data and cloud-computing platforms. However, map users are confronted with prominent inconsistencies and uncertainties when applying existing GALC datasets in different fields of research (e.g. climate modelling, biodiversity conservation) due to the lack of a uniform and applicable aquatic land cover characterization framework. In addition, as aquatic ecosystems are complex and dynamic in nature, the sustainable management of aquatic resources requires spatially explicit information on both vegetation types and water presence. However, previous GALC mapping has focused on water bodies, and an up-to-date and thematically detailed GALC product characterizing water and vegetation collectively is still lacking.
Objectives
In this study, our main objectives are:
1) Developing a comprehensive aquatic land cover characterization framework that not only ensures the consistency in GALC mapping but also serves the needs of multiple users (e.g. climate users, sustainable water resource management users) interested in different aspects of aquatic lands.
2) Assessing the applicability of the proposed framework by developing a prototype GALC database based on existing datasets, and identifying the gaps of current datasets in GALC mapping.
3) Improving the global mapping of various aquatic land cover types by exploiting multi-source EO data.
Methodology
To better understand the user needs, we reviewed 33 existing GALC datasets (Xu et al. 2020). The major user groups and user requirements were identified from the citing papers of these datasets and international conventions (e.g., Ramsar Convention), policies (e.g., Sustainable Development Goals), and agreements (e.g., Paris Agreement) in relation to aquatic ecosystems. Based on the identified user needs and the United Nations Land Cover Classification System (LCCS, Di Gregorio 2005), a new GALC characterization framework was formulated.
Then, eight out of the reviewed 33 GALC datasets were harmonized and integrated to construct a prototype GALC database for the year 2015 at a 100m spatial resolution conforming with the proposed GALC characterization framework (Xu et al. 2021). By performing an independent validation on the prototype database, the limitations of current datasets towards GALC mapping were systematically analyzed. To demonstrate the applicability of the prototype GALC database, potential use cases were discussed using maps provided by the database.
Finally, making use of the reference dataset provided by the Copernicus Global Land Service Land Cover map at 100m (CGLS-LC100) project as well as multi-source EO data including optical (e.g., Sentinel-2), Synthetic Aperture Radar (SAR, e.g., Sentinel-1 and ALOS/PALSAR), and various ancillary datasets (e.g., Global Ecosystem Dynamics Investigation (GEDI) forest height, climate, topographic, and soil), an improved mapping of global aquatic land cover types was conducted on the Google Earth Engine (GEE) platform.
Results and discussions
Our literature review showed that users of GALC datasets require a multitude of water-related information, such as water persistence and vegetation type, while none of the current datasets provide such comprehensive information. Based on the user needs and the ISO-certified LCCS, the proposed GALC characterization framework comprises three levels. Level-1 identifies aquatic land cover as a whole, representing the discrimination of aquatic and non-aquatic lands. At Level-2, five classifiers are adopted: the persistence of water (the duration of water covering the surface); the presence of vegetation (the existence or absence of vegetation); the artificiality of cover (whether or not a land cover is managed by humans); the accessibility to the sea (the distance to the ocean); and the water salinity (the concentration of Total Dissolved Solids, TDS). At Level-3, vegetated and non-vegetated types are further specified into more detailed classes by the life form classifier. This level-by-level and classifier-by-classifier design is flexible enough to allow users to adapt the framework for their specific applications.
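Purely as an illustration of the level-by-level, classifier-by-classifier design, the Level-2 logic could be encoded as a simple data structure and rule set. All class names, descriptor names, and threshold values below are hypothetical and do not reproduce the published GALC legend:

```python
# Hypothetical encoding of the Level-2 classifiers (illustrative only)
GALC_LEVEL2_CLASSIFIERS = {
    "water_persistence": ("permanent", "temporary", "waterlogged"),
    "vegetation_presence": ("vegetated", "non-vegetated"),
    "artificiality": ("natural", "artificial"),
    "sea_accessibility": ("coastal", "inland"),
    "water_salinity": ("fresh", "saline"),
}

def level2_attributes(flood_months, veg_cover, tds_g_per_l):
    """Assign three of the Level-2 classifier values from raw descriptors
    (threshold values are made up for demonstration)."""
    persistence = ("permanent" if flood_months >= 11
                   else "temporary" if flood_months >= 2
                   else "waterlogged")
    vegetation = "vegetated" if veg_cover > 0.10 else "non-vegetated"
    salinity = "saline" if tds_g_per_l > 1.0 else "fresh"
    return {"water_persistence": persistence,
            "vegetation_presence": vegetation,
            "water_salinity": salinity}
```

The point of such an encoding is that each classifier is independent, so users can combine only the attributes relevant to their application, mirroring the flexibility claimed for the framework.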
The prototype GALC database created for 2015 included six maps at three levels at 100 m resolution (Figure 1). Our independent and quantitative accuracy assessment showed that the Level-1 map tended to overestimate the overall extent of global aquatic land cover. The Level-2 maps were good at characterizing permanently flooded areas and natural aquatic types, while accuracies were poor for temporarily flooded and waterlogged areas as well as artificial aquatic types. The Level-3 maps could not sufficiently characterize the detailed life form types (e.g., trees, shrubs) of aquatic land cover. However, the prototype GALC database was flexible enough to derive user-oriented maps for hydrological or climate modelling and global land change monitoring.
Based on the feature combination derived from Sentinel-1, Sentinel-2, ALOS/PALSAR mosaic, and ancillary datasets, our best classification model achieved an overall accuracy of 83.2% in mapping global aquatic land cover. The spaceborne satellite optical and SAR data played a key role in characterizing various aquatic land cover types, of which optical features provided by Sentinel-2 imagery were of higher importance than other data. Sentinel-1 SAR data and the ALOS/PALSAR mosaic exhibited remarkable potential in improving the identification of short vegetation (e.g., herbaceous cover) and trees in aquatic areas. Ancillary datasets such as the GEDI forest canopy height dataset and soil data improved the mapping of trees and bare/sparsely vegetated aquatic lands, respectively.
The LCCS-based GALC mapping framework proposed in this study can help to standardize the way aquatic land cover is described, and is thus promising for bridging the gap between user needs and the various GALC datasets. The interaction among water, vegetation, and wet soils makes aquatic land cover types more difficult to characterize with the approaches applied to general Global Land Cover (GLC) mapping, most of which used single-sensor satellite data (ESA, 2017). The recently released 10m-resolution WorldCover 2020 GLC product (Zanaga et al., 2021) was created from both Sentinel-1 and Sentinel-2 data, yet some aquatic areas were reported to be mapped with low accuracies. Our research represents an important step towards high-resolution and more accurate global mapping of comprehensive aquatic land cover types. With evolving Earth observation opportunities such as the launch of the BIOMASS mission, which will carry a fully polarimetric P-band SAR, limitations in the current GALC characterization can be addressed in the future.
Keywords: Aquatic land cover, Global mapping, Characterization framework, 10m-resolution, Multi-source EO data.
References
Di Gregorio, A., 2005. Land cover classification system: classification concepts and user manual: LCCS. Food & Agriculture Organization.
ESA (2017). Land Cover CCI Product User Guide Version 2. Available online: https://maps.elie.ucl.ac.be/CCI/viewer/download/ESACCI-LC-Ph2-PUGv2_2.0.pdf.
Xu, P., Herold, M., Tsendbazar, N.-E., & Clevers, J.G.P.W., 2020. Towards a comprehensive and consistent global aquatic land cover characterization framework addressing multiple user needs. Remote Sensing of Environment, 250, 112034.
Xu, P., Tsendbazar, N.-E., Herold, M., & Clevers, J.G.P.W., 2021. Assessing a Prototype Database for Comprehensive Global Aquatic Land Cover Mapping. Remote Sensing, 13.
Zanaga, D., Van De Kerchove, R., De Keersmaecker, W., Souverijns, N., Brockmann, C., Quast, R., Wevers, J., Grosu, A., Paccini, A., Vergnaud, S., Cartus, O., Santoro, M., Fritz, S., Georgieva, I., Lesiv, M., Carter, S., Herold, M., Li, L., Tsendbazar, N.-E., Ramoino, F., Arino, O., 2021. ESA WorldCover 10 m 2020 v100. Zenodo. https://doi.org/10.5281/ZENODO.5571936.
"Buy land, God doesn't create any more." This aphorism attributed to Mark Twain can be considered a guiding principle for humankind, evident from steadily increasing global land consumption. Agricultural land use grew by a global average of 3.8 Mha per year during 1961–2007, most of which was distributed among developing countries, where the rate continued to increase even during 1990–2007. The term "land grab" was coined in this context and is related to the increased number of large-scale commercial land deals by international actors. Two months after hurricanes Irma and Maria hit Barbuda in 2017, the construction of a new international airport led to accusations of degrading the Codrington Lagoon National Park and contravening the conventions of the Ramsar Programme. Scientists have analyzed the aftermath with respect to historical legacies, disaster capitalism, the manifestation of climate injustices, and green gentrification. However, no attempt has been made to quantify and allocate land use and land cover change (LULCC) on Barbuda before and after the 2017 hurricane disaster. Remote sensing data and volunteered geographic information were analyzed to detect potential changes in natural LULC related to human activities. We processed Sentinel-1, Sentinel-2, NOAA VIIRS, MODIS Terra, and PlanetScope data, and obtained data from the OSM archive via the Ohsome API and from Twitter via the twarc2 API. We observed that human-induced LULCC is occurring at different sites on the island, with decreased activity in Codrington but increased and ongoing activity leading to LULCC at Coco Point and Palmetto Point. In total, 2.97 km² of new areas covered by "bare soil and artificial surfaces" fell into the natural reserve of the Ramsar site of the Codrington Lagoon.
With an accuracy of 97.1 %, we estimated a total increase in vegetated areas of 6.56 km² and a simultaneous increase in roads and buildings with a total length of 249.67 km and a total area of 1.43 km²; this includes the area under construction for the new international airport. The satellite classification measures an area of ~1.09 km², which is ten times the combined footprint of all buildings mapped in OSM. Vegetation condition itself has shown a steady decrease since 2017. While some places, such as Codrington and the Lighthouse Bay Resort, show a decrease in human activity, other places experienced increased human activity and became new nighttime light radiance hotspots on the island. Since these hotspots were the sites of the Barbudan Ocean Club, the dispute over human-induced LULCC in the aftermath of the 2017 hurricanes will continue.
Wetlands are globally threatened by degradation and disappearance under the combined effects of increasing anthropogenic disturbances and climatic extremes. These pressures may drive abrupt shifts in wetland ecosystem dynamics, which necessitate robust long-term monitoring techniques for their study. Here, we used a piece-wise regression model to characterize long-term Ecosystem Change Types (ECTs) in dynamic wetland surface water and vegetation proxies (e.g., the Modified Normalized Difference Water Index (MNDWI) and the Normalized Difference Vegetation Index (NDVI)) calculated from twenty years (2000–2019) of MODIS and Landsat time series imagery over the Inner Niger Delta in Mali. In addition, we investigated the added benefit of using a dense Landsat time series for our segmented trend analysis by comparing the class-specific accuracies of the detected ECTs with those produced at the MODIS scale. We developed a reference dataset by validating temporal trajectories at selected probability sample locations based on the TimeSync logic. Our results show statistically significant (p < 0.05) long-term trends in wetland surface dynamics, along with higher overall, user’s and producer’s accuracies for the Landsat ECT map (OA = 0.89 ±0.01), surpassing the MOD09A1-derived product (OA = 0.37 ±0.03). This study demonstrates a robust approach to long-term wetland monitoring that highlights the benefits of using time series imagery with Landsat-scale spatial resolution for accurate quantification of linear and non-linear ecosystem responses in vast, highly diverse floodplain systems. Investigation into the transferability of our framework to other wetland types is the subject of ongoing work. Delivering such improved assessments that better resolve the spatial and temporal characteristics of wetland ecosystems has the potential to support the information needs of global conservation and restoration efforts.
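A segmented (piece-wise) trend fit of the kind described can be sketched as follows. This is a deliberate simplification of the actual model: a single breakpoint found by exhaustive search over a short annual index series, with function and variable names chosen for illustration:

```python
import numpy as np

def segmented_trend(t, y):
    """One-breakpoint piecewise-linear fit by exhaustive breakpoint search:
    returns (breakpoint, slope_before, slope_after)."""
    best_sse, best_k, best_fits = np.inf, None, None
    for k in range(2, len(t) - 2):             # keep >= 2 points per segment
        sse, fits = 0.0, []
        for ts, ys in ((t[:k], y[:k]), (t[k:], y[k:])):
            coef = np.polyfit(ts, ys, 1)       # linear fit per segment
            sse += float(np.sum((np.polyval(coef, ts) - ys) ** 2))
            fits.append(coef)
        if sse < best_sse:
            best_sse, best_k, best_fits = sse, k, fits
    return t[best_k], best_fits[0][0], best_fits[1][0]

# Synthetic annual index with a 2010 break: browning then greening
t = np.arange(2000, 2020, dtype=float)
y = np.where(t < 2010, 0.50 - 0.01 * (t - 2000), 0.45 + 0.02 * (t - 2010))
bp, slope_before, slope_after = segmented_trend(t, y)
```

Operational implementations additionally test the statistical significance of each segment's trend and allow multiple breakpoints, which is what distinguishes the ECT classes from simple monotonic trends.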
Floodplains account for nearly one-third of the Amazon basin. The young, early-successional white-water alluvial forests (várzea) are amongst the most productive ecosystems on our planet. Low-várzea, among the most productive forest types in the world, is characterized by densely growing mono-specific stands and high flooding pressure. Species abundance decreases with decreasing flood duration and depth, whereas species diversity increases. While the várzea forests along the main river channel have been widely researched since the late 1980s, little is known about the extent and dynamics of inundation within várzea forests along the Juruá, a major tributary of the Amazon main stem.
This study mapped spatio-temporal floodplain dynamics on a subset of the Juruá river floodplain. The objective of the research study was to determine the extent and duration of inundation along the Juruá River, from which metrics of biodiversity and productivity can be derived. Data from Copernicus Sentinel-2 and ALOS-2/PALSAR-2 were explored. Furthermore, the study assessed the applicability of microwave remote sensing, especially PolSAR to tropical floodplain land cover and inundation mapping.
The study is divided into three main steps. In the first step, land cover classification was performed based on a Sentinel-2 segmentation. One random forest model was trained using Sentinel-2 data and another using multi-temporal PALSAR-2 PolSAR products, namely the Yamaguchi four-component decomposition and the Shannon entropy; the resulting land cover classifications were compared with each other. In the second step, the forest objects were re-segmented using the double-bounce component of the Yamaguchi decomposition in order to better distinguish floodplain from upland forest. Subsequently, the floodplain extent was mapped using only a PALSAR-2 PolSAR time series, relying on multi-temporal metrics of the L-band backscatter and selected polarimetric products (Yamaguchi four-component decomposition and Shannon entropy). The third step estimated the inundation duration of the objects within the floodplain area using an L-band HH time series in combination with in situ water level information.
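The object-based random forest classification step can be illustrated with a toy example. The feature values and two-class structure below are invented for this sketch; the study's actual predictors (Yamaguchi decomposition components, Shannon entropy, Sentinel-2 bands) and training samples differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-object features, loosely standing in for
# [double-bounce power, volume scattering, Shannon entropy].
n = 600
X_upland  = rng.normal([0.2, 0.7, 0.8], 0.05, (n, 3))   # upland forest objects
X_flooded = rng.normal([0.6, 0.5, 0.6], 0.05, (n, 3))   # flooded forest objects
X = np.vstack([X_upland, X_flooded])
y = np.array([0] * n + [1] * n)

# Train on a random split and evaluate overall accuracy on the held-out part
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = rf.score(X_te, y_te)
```

With real data, the held-out accuracy would be estimated from an independent reference sample rather than a random split of the training objects.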
The study area was covered by forest on 3411.17 km², of which 1103.97 km² were flooded at high water level. Small herbaceous vegetation, bare soil, and open water covered 46.19 km², 18.95 km², and 63.28 km², respectively. For the land cover, the random forest algorithm achieved an estimated overall accuracy of 97.4 %. The floodplain extent mapped by the PolSAR random forest model was only slightly more accurate than an L-HH threshold-based classifier (90.3 % and 89 %, respectively). Inundation duration varied from 27 to 338 days per year, with most objects inundated for either 53 or 111 days. Inundation depth ranged from 7.6 cm to 1342.9 cm. Following literature values, which report a maximum inundation depth of 3 m for high-várzea, a total area of 700.68 km² is covered by high-várzea and the remaining 221.37 km² by low-várzea. High-várzea was inundated for periods of up to 163 days, about one month longer than observed in a similar study along the Solimões river in the central Amazon by Ferreira-Ferreira et al. (2015).
To further investigate the distinction between high-várzea and low-várzea, spaceborne LiDAR-derived canopy height and vegetation structure information could be used. Low-várzea is characterized by shorter tree species (30 m – 35 m), whereas in high-várzea individual trees of up to 45 m canopy height are reported. The results will help to identify biodiversity hotspots and to monitor floodplain forests in the Amazon.
References:
Ferreira-Ferreira, J., Silva, T.S.F., Streher, A.S., Affonso, A.G., Almeida Furtado, L.F. de, Forsberg, B.R., Valsecchi, J., Queiroz, H.L., & Moraes Novo, E.M.L. de (2015). Combining ALOS/PALSAR derived vegetation structure and inundation patterns to characterize major vegetation types in the Mamirauá Sustainable Development Reserve, Central Amazon floodplain, Brazil. Wetlands Ecology and Management, 23, 41–59.
Wetlands are essential ecosystems that provide a variety of services to humans and the environment. In recent years, wetlands have been impacted by climatic and human drivers, requiring a deeper understanding of the resilience of these essential ecosystems to change. Water delineation is essential for understanding how water availability changes, and wetland monitoring has improved thanks to recent developments in remote sensing. This is especially the case in Sweden, one of the European countries with the largest wetland water surface extent. However, quantifying wetland water extent and its changes remains a challenge. Standard detection of water surfaces by optical sensors can only recognize open water, missing water below vegetation. Here, we used a multi-sensor approach utilizing different polarizations of Synthetic Aperture Radar (SAR) and optical time series data from the ESA Sentinel-1 and Sentinel-2 satellites during two seasons to identify these waters and their changes in 9 Swedish wetlands of the Ramsar Convention. After pre-processing the SAR images and filtering cloudy Sentinel-2 images on the Google Earth Engine (GEE) cloud computing platform, we created composite images from three different layers: the radar backscattering coefficient, the radar polarization difference, and the Normalized Difference Vegetation Index from the optical imagery. We then applied the machine learning K-means clustering method to detect the increased backscatter caused by double-bounce of the radar signal, which indicates water below vegetation. We also investigated the increase in interferometric coherence as an indicator of submerged vegetation. As a result, we obtained water inundation frequency maps for the Ramsar wetlands and compared our results with field data and hydroclimatic data. Our approach identified on average around 20 percent of areas with water below vegetation that optical-only techniques missed, allowing us to better delineate water extent in Swedish wetlands.
We recommend integrating polarimetric features of radar data, optical data, and interferometry to fully account for wetland surface water extent and its changes, thereby improving surface water quantification.
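The unsupervised clustering step described above can be sketched as follows. The three-layer feature space mirrors the composite described in the abstract, but every number (backscatter levels, polarization differences, NDVI values, cluster separations) is a synthetic assumption made for illustration, not taken from the study.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical pixel features: [VV backscatter (dB), polarization difference (dB), NDVI].
# Flooded vegetation shows elevated backscatter from double-bounce scattering,
# open water very low backscatter, dry vegetation high NDVI.
open_water  = rng.normal([-20.0, 3.0, -0.1], [1.0, 0.5, 0.05], (500, 3))
flooded_veg = rng.normal([-6.0,  8.0,  0.4], [1.0, 0.5, 0.05], (500, 3))
dry_land    = rng.normal([-12.0, 5.0,  0.7], [1.0, 0.5, 0.05], (500, 3))
X = np.vstack([open_water, flooded_veg, dry_land])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Label the cluster with the highest mean backscatter as "water below vegetation"
flooded_cluster = int(np.argmax(km.cluster_centers_[:, 0]))
frac = (km.labels_[500:1000] == flooded_cluster).mean()
```

Running the clustering per image date and counting, per pixel, how often it falls in the water-related clusters would yield an inundation frequency map.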
The project MONEOWET focuses on multispectral and hyperspectral Earth Observation (EO) data to investigate water quality in relation to agricultural activities within the Térraba Sièrpe Wetland in Costa Rica. This study corresponds to an initiative focused on investigating the applicability of remote sensing data in tropical systems. The main topic of this project is the use of EO data to assess the impacts and dynamics of agricultural activities on the sensitive RAMSAR wetland ecosystem Térraba Sièrpe at the mouth of the Térraba and Sièrpe rivers. One goal of this project is to develop a first EO database and define analytical methods for water quality studies in that area and beyond. The results will provide a deeper insight into the processes of the entire wetland ecosystem and may help to detect harmful damage to the fragile environment caused by surrounding agricultural activities. The long-term goal is sustainable water and land use management that is exemplary for many other tropical wetlands in Latin America.
Scientists from Germany and Costa Rica are working together to collect data with established (e.g. Sentinel 2, Landsat 8) and new Earth Observation sensors (e.g. DESIS on the ISS) to assess water quality parameters and link these parameters to agricultural land use in the surrounding area. The common goal of the project is to evaluate the applicability of Landsat 8, Sentinel-2 and DESIS multi- and hyperspectral satellite imagery for water quality studies in tropical environments.
Field campaigns were carried out during the wet season (November 2018 and November 2019) and the dry season (March 2019 and March 2021). The sampling sites for in-situ measurements were located in the three main meanders of the Sièrpe River and the main meander of the Térraba River within the wetland. At each sampling site, the spectral signature of the river was recorded using an Ocean Optics Sensor System (OOSS). The multispectral (Sentinel-2, Landsat 8) and hyperspectral (DESIS) EO data were atmospherically corrected to bottom-of-atmosphere (BOA) reflectance using Sen2cor (ESA) and PACO (Python-based Atmospheric Correction, DLR), respectively. The WASI-2D inversion method, a semi-analytical model that retrieves the optically active water quality variables chlorophyll, total suspended matter (TSM) and colored dissolved organic matter (CDOM), was parameterized with site-specific inherent optical properties (SIOPs) of the area and applied to time series of L2A Sentinel-2, Landsat 8 and DESIS images. Some of the Sentinel-2 and Landsat overpasses coincided with available field data; however, DESIS images could not be obtained during the field campaigns, thus only a qualitative evaluation is presented. Although cloud cover in the tropics is a major challenge, the influence of thin clouds could be corrected, and the concentrations of TSM and CDOM could be derived quantitatively. Chlorophyll could not be derived reliably in most areas, in particular not from Landsat 8, most likely because its concentration was relatively low and water absorption was dominated by CDOM. The high temporal dynamics of the river system, which is strongly influenced by tides, make comparisons of satellite data collected at different times very difficult, as well as comparisons with field data.
Nevertheless, Sentinel-2-derived maps of water constituents and corresponding Landsat 8 and DESIS images show good agreement in the average concentrations of TSM and CDOM and plausible spatial patterns, and field measurements confirm that these are in a plausible range. The results indicate that, under favorable observational and environmental conditions, the applied atmospheric correction and the retrieval algorithm used are suitable for mapping TSM and CDOM from DESIS, Sentinel-2 and Landsat 8 data in tropical environments, while chlorophyll remains challenging. Their quantitative determination by satellite is therefore an important contribution of this project to the ecological assessment of the waters and the surrounding environment of the study area.
When autumn changes to winter in northern latitudes, wetland methane emissions are suppressed due to low temperatures and freezing of the soil. During the transition from the non-frozen autumn state to the fully frozen winter state, however, methane is still emitted to the atmosphere, and these emissions are potentially significant in relation to the total annual budget. A longer freezing period might indicate higher emissions outside the growing season. The length of the freezing period and the corresponding methane emissions may differ between permafrost and non-permafrost regions, as well as between vegetation zones. We estimate the methane fluxes at northern latitudes in Eurasia and northern North America with the Carbon Tracker Europe – CH4 (CTE-CH4) atmospheric inversion model and combine the results with satellite soil freeze data (SMOS) to find out whether there are significant late autumn emissions and whether they continue throughout the period when the soil freezes to its winter state. We investigate the emissions in permafrost, discontinuous permafrost and non-permafrost regions, and in regions divided by vegetation type and climate subgroup. CTE-CH4 optimizes both anthropogenic and biospheric fluxes, and the current in situ observation network at northern latitudes enables spatially explicit flux estimates. Fluxes are solved at weekly time resolution, enabling the tracking of soil freeze development.
Satellite remote sensing provides data for observing spatial and temporal conditions and changes of the environment, including vegetation, soil and atmosphere. In-situ measurements are an inherent part of satellite remote sensing applications, and combining these two kinds of data opens new possibilities. Despite the undoubted advantages of space-borne data, such as large-area analysis and regular data provision, these data require verification. In-situ measurements provide data for validation and calibration of the satellite data and of models driven by the satellite data. On the other hand, in-situ measurements are often sparse and only locally representative. Thus, in-situ measurements are required to interlink the sensor’s signal with the actual situation, while satellite data enable the use of field data in a wider context.
Wetland and grassland areas are a focus of studies performed around the world, as they are among the most important ecosystems on Earth. As reservoirs of biomass and CO2, wetlands interact with climate change. Moreover, wetlands and grasslands improve water quality, recharge groundwater, provide habitat for many animals and plants, and maintain biodiversity. Therefore, regular and large-scale monitoring of wetlands and grasslands is highly required.
Many parameters are measured over wetlands and grasslands: CO2 fluxes, leaf area index (LAI), surface temperature, air temperature, soil moisture and chlorophyll. Furthermore, novel measurements such as chlorophyll fluorescence and spectral reflectance are increasingly being taken.
The critical factor for obtaining an accurate characterization of a test site is the sampling strategy. The main objective of this research is to present various methods of in-situ measurement of spectral reflectance, chlorophyll fluorescence, LAI, APAR and fAPAR, CO2 fluxes, land surface temperature, air temperature, soil moisture, chlorophyll, biomass, vegetation height and soil temperature with respect to satellite remote sensing measurements. Different methods, including the linear and cross transect methods as well as our own square IGIK method, were analysed. Moreover, this study aims at pointing out the limitations and feasibility of single measurements (spectral response, LAI, etc.). The harmonization and ordering of in-situ measurement techniques were the motivation for this study.
Biebrza National Park and its buffer zone were the study area of the research. This is the largest national park in Poland, with a total area of 59223 ha; the buffer zone covers 66824 ha. The Biebrza National Park contains water, marsh, peat and rush communities, as well as forest communities (alder, birch, riparian forests). It covers a large part of the Biebrza Valley, a great depression over 100 km long.
Measurements were carried out at 26 test sites comprising grasslands (12 points), sedges (12 points) and reeds (2 points). These three kinds of vegetation were distinguished due to their different anatomy, growth, cutting times and size, and further analyses (e.g., modelling) were performed separately for each. The smaller number of reed test sites was caused by the limited accessibility of such areas. Five test sites were located outside the Park and its buffer zone, but no further than 5 km away. Test sites were chosen according to their distance from the Biebrza River, differences in soil moisture and intensity of vegetation; the points were distributed so as to capture the variability of vegetation and its spatial differentiation. Data were collected from 2016 to 2020 during the vegetation season (April–October), with at least four campaigns per year. In situ measurements were synchronized with Sentinel-2 or Sentinel-1 acquisitions (± three days if there was no rain) and were carried out between 9 a.m. and 5 p.m.
The field measurements were carried out according to three schemes: the linear transect, the cross transect and the square IGIK method. The linear transect consisted of 7–9 measurement points, with distances between points of ca. 50–80 m, arranged in a straight line. The cross transect consisted of 11 measurement points with distances of ca. 10 m between them. In the square IGIK method, recordings were taken at the north, south, east and west corners at 80-m intervals to capture all variation; vegetation sampling was designed in a square shape, with samples taken at different locations under representative conditions.
The results obtained indicate that each sampling scheme has advantages and disadvantages. The values of the measured variables vary by 5–15% between schemes. The greatest differences in values between the measured wetlands are found for the soil moisture (14.5%) and LAI (13.9%) measurements, while the most similar results were obtained for spectral reflectance (5.4%) and chlorophyll content (6.6%). Methods should be selected appropriately depending on the biophysical parameter under study.
To sum up, measurements were performed using four approaches: the linear transect, the cross transect, the square IGIK method and single measurements. Parameters with high variability (LAI, soil moisture) should be collected from many samples across the test site, whereas parameters characterized by low variability (spectral reflectance, chlorophyll content) can be measured at fewer points across the test site.
The research work was conducted within the project financed by the National Centre for Research and Development under Contract No. 2016/23/B/ST10/03155, titled "Modeling of carbon balance at wetlands applying the newest ESA satellite missions Sentinel-1/2/3".
The wetlands in the Prairie Pothole Region (PPR) are of critical importance as habitat and breeding grounds for the North American waterfowl population. Like many wetlands, they are threatened by climate change and intensifying agriculture. Monitoring these wetlands is therefore an important source of information for landscape management. Pothole wetlands range in size from a few square metres to several square kilometres. Larger wetlands covered by open water surfaces can be monitored using optical or radar satellite imagery. Smaller wetlands (< ca. 1 ha) are more challenging to delineate due to the moderate spatial resolution of most satellite sensors (typically in the range of a few tens of metres) and due to vegetation frequently emerging from the water surface of shallow water bodies. However, these small wetlands have been shown to be of high importance as habitats as well as linkages between larger wetlands, thus contributing to hydrological and biological connectivity. Radar imagery has been used for detecting water underneath vegetation based on double-bounce scattering leading to high radar returns, however, this effect is highly dependent on factors, such as wavelength, polarisation, incidence angle and vegetation density and height relative to the water surface. Hence, information gathered in situ is often required to constrain retrieval models.
In this study, Sentinel-1 dual-polarised synthetic aperture radar (SAR) time series acquired between 2015 and 2021 are used in combination with water level measurements from a number of permanent and temporary wetlands in North Dakota. The study period covers hydrometeorological conditions ranging from drought to flooding. A Bayesian framework is applied to integrate high-resolution topographic data to constrain water delineation in areas with low contrast. Dual-polarised SAR backscatter from open and vegetated wetlands is compared with in-situ water level measurements.
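The Bayesian combination of a backscatter likelihood with a topographic prior could look roughly like the sketch below. All distributions and parameter values are illustrative assumptions, not the study's calibrated model; the hypothetical `hand_m` input (height above nearest drainage) stands in for the high-resolution topographic data.

```python
import numpy as np

def water_posterior(backscatter_db, hand_m):
    """Toy Bayesian water classification: a Gaussian backscatter likelihood
    is combined with a topographic prior. Every parameter here is an
    illustrative assumption, not a calibrated value."""
    def gauss(x, mu, sd):
        # Gaussian probability density
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

    # Class-conditional likelihoods p(sigma0 | water) and p(sigma0 | land):
    # open water is assumed dark (specular reflection away from the sensor)
    l_water = gauss(backscatter_db, -20.0, 2.0)
    l_land = gauss(backscatter_db, -10.0, 3.0)

    # Prior p(water): high where the terrain is low relative to drainage
    p_water = 1.0 / (1.0 + np.exp(hand_m - 2.0))

    # Bayes' rule for the posterior water probability
    return l_water * p_water / (l_water * p_water + l_land * (1.0 - p_water))
```

The prior pulls ambiguous, low-contrast pixels toward the topographically plausible class, which is the effect the abstract describes.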
The results for open water bodies show that small and large wetlands differ in seasonality as well as in their response to wet and dry years. While large water bodies are mostly stable throughout the year, many small water bodies fall dry during the summer months, when evaporation exceeds moisture supply. During wet periods, prairie hydrological processes, such as merging between neighbouring wetlands, can be observed. The effects of drought years, such as the exceptionally dry year 2021, are visible across wetland size classes; however, larger wetlands (> ca. 8 ha) tend to be more stable than smaller ones. First results of the comparison between backscatter and water level generally show an increase of co-polarised (VV) backscatter in temporary wetlands with falling water levels, whereas the cross-polarised (VH) signal tends to be more stable. This is in line with our expectations, as double-bounce scattering mainly affects the co-polarised radar signal. The results demonstrate the potential of dual-polarised Sentinel-1 image time series for high-resolution monitoring of prairie wetlands. Limitations of this study are related to wind inhibiting correct open water extent retrieval and to the rather long acquisition interval of 12 days over the PPR, which results from the observation strategy of Sentinel-1.
Wetlands and inundated areas cover only a few percent of the Earth's surface. However, they play an important role in climate variability; in particular, an important fraction of atmospheric methane is emitted in these areas [1]. CH4 emission in wet, saturated and inundated areas must be better understood to explain the intra-year variability of atmospheric methane concentration and its variations over the past decades. Therefore, there is a need to produce data records that can reliably capture variability linked to climate variations [2].
The goal of this work is to model methane emissions from wetlands and inundated areas in a simple, data-driven way. The calculation is dynamic (monthly) and global (we target 0.25° resolution). In order to obtain temporally and spatially realistic global CH4 emissions, the continuous global input parameters (soil carbon content, soil temperature, water extent or water table, etc.) are derived as much as possible from measured or satellite-derived datasets rather than from climate model outputs. Preliminary work focuses on the choice of parameters and datasets to be used in the scheme, in order to select the most relevant and up-to-date ones. For water extent, a complete database is used that contains all wetland and inundated area types: wetlands (incl. peatlands), open-water extents, and rice paddies. This database mainly relies on GIEMS-2, developed by Prigent et al. (2020) [3], which is being extended until 2020.
To calculate CH4 emission fluxes, a simple scheme is used, similar to that of Gedney et al. (2004) [4]:
F_CH4 = k_CH4 · f_w · C_s · Q10(T_soil)^((T_soil − T_0) / 10)
with F_CH4 the methane emission flux from wetlands, k_CH4 a global constant, f_w the wetland fraction, C_s the soil organic carbon, Q10 a temperature sensitivity factor, T_soil the soil temperature in K, and T_0 a constant equal to 273.15 K. This form of scheme is similar to the methane production equations found in climate models such as CTESSEL from ECMWF, ORCHIDEE-WET [5], or JULES [6]. Transport through the water (ebullition, diffusion and plant-mediated transport) could be added to complete this simple scheme.
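The scheme translates directly into code. In this sketch the default values of k_CH4 and Q10 are placeholders chosen for illustration, not the calibrated constants of the cited models, and Q10 is treated as a fixed number rather than a temperature-dependent factor:

```python
def ch4_flux(f_w, c_s, t_soil, k_ch4=1.0e-10, q10=3.0, t_0=273.15):
    """Wetland CH4 emission flux following the simple scheme of
    Gedney et al. (2004): F = k * f_w * C_s * Q10**((T_soil - T_0) / 10).
    f_w: wetland fraction, c_s: soil organic carbon, t_soil: soil temperature (K).
    Default parameter values are illustrative placeholders only."""
    return k_ch4 * f_w * c_s * q10 ** ((t_soil - t_0) / 10.0)
```

Applied per grid cell and per month to the satellite-derived f_w, C_s and T_soil fields, this yields the dynamic global emission maps described above; by construction, the flux increases by a factor Q10 for every 10 K of soil warming.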
The global distribution and inter-annual variability of resulting emissions will be presented, and discussed in the context of other existing methane estimates, such as in situ flux databases (e.g. FLUXNET-CH4 [7], BAWLD-CH4 [8]), or from current inventories of greenhouse gases [9].
[1] Saunois et al.: The Global Methane Budget 2000–2017, Earth Syst. Sci. Data, 12, 1561–1623, https://doi.org/10.5194/essd-12-1561-2020, 2020
[2] Kirschke, S., Bousquet, P., Ciais, P., Saunois, M., Canadell, J. G., Dlugokencky, E. J., et al., Three decades of global methane sources and sinks. Nature Geoscience, 6(10), 813–823. https://doi.org/10.1038/ngeo1955, 2013.
[3] Prigent, C., Jimenez, C., & Bousquet, P., Satellite-derived global surface water extent and dynamics over the last 25 years (GIEMS-2). Journal of Geophysical Research: Atmospheres, 125, e2019JD030711, https://doi.org/10.1029/2019JD030711, 2020.
[4] Gedney, N., P. M. Cox, and C. Huntingford, Climate feedback from wetland methane emissions, Geophys. Res. Lett., 31, L20503, doi:10.1029/2004GL020919, 2004.
[5] Ringeval, B., de Noblet-Ducoudré, N., Ciais, P., Bousquet, P., Prigent, C., Papa, F., & Rossow, W. B., An attempt to quantify the impact of changes in wetland extent on methane emissions on the seasonal and interannual time scales, Global Biogeochemical Cycles, 24, doi:10.1029/2008GB003354, 2010.
[6] Clark, D. B., Mercado, L. M., Sitch, S., Jones, C. D., Gedney, N., Best, M. J., Pryor, M., Rooney, G. G., Essery, R. L. H., Blyth, E., Boucher, O., Harding, R. J., Huntingford, C., and Cox, P. M.: The Joint UK Land Environment Simulator (JULES), model description – Part 2: Carbon fluxes and vegetation dynamics, Geosci. Model Dev., 4, 701–722, https://doi.org/10.5194/gmd-4-701-2011, 2011.
[7] Delwiche et al., FLUXNET-CH4: A global, multi-ecosystem dataset and analysis of methane seasonality from freshwater wetlands. Earth Syst. Sci. Data, https://doi.org/10.5194/essd-13-3607-2021, 2021.
[8] Kuhn, M. A., Varner, R. K., Bastviken, D. J., Crill, P., MacIntyre, S., Turetsky, M. R., Walter, K., Anthony, McGuire, A. D., Olefeldt, D., BAWLD-CH4: Methane fluxes from Boreal and Arctic Ecosystems. Arctic Data Center. https://doi.org/10.18739/A2DN3ZX1R, 2021.
[9] Crippa, M., Guizzardi, D., Muntean, M., Schaaf, E., Dentener, F., van Aardenne, J. A., Monni, S., Doering, U., Olivier, J. G. J., Pagliari, V., and Janssens-Maenhout, G.: Gridded emissions of air pollutants for the period 1970–2012 within EDGAR v4.3.2, Earth Syst. Sci. Data, 10, 1987-2013, https://doi.org/10.5194/essd-10-1987-2018, 2018.
Water hyacinth (WH; Pontederia crassipes) is one of the most invasive aquatic weeds in the world; it affects aquatic life significantly and causes loss of biodiversity. Most of the affected countries are tropical and sub-tropical, such as India. The environmental concerns caused by the invasion of exotic species across the globe have led to intense research on this and similar alien species of plants and weeds [1]. In India, the government's National Wetland Conservation Programme (NWCP) placed the problem of WH among the most serious threats to the National Lake Conservation Plan. Water bodies such as Katraj Lake in Pune, Pichhola Lake in Udaipur, Ulsooru Lake in Bengaluru and Patancheru Lake in Hyderabad are facing serious WH invasion. In urban India, water hyacinth removal and remediation result in huge expenditure of public money, and hence different methods have been explored from time to time [2, 3].
Researchers have approached the estimation of WH presence and its growth pattern with different methods. The most common has been ground survey. Although such an approach is accurate to some extent because it is repeated over a long time, it requires huge manpower and government resources, has limited coverage, and cannot provide continuous monitoring and analysis [4, 5]. Alternative solutions include remote sensing methods [3], e.g., satellite imagery (multi-spectral or SAR), which is not always useful for detecting and monitoring WH growth on small water bodies due to its lower resolution. In contrast, multi-spectral drone imagery provides very high-resolution data, beneficial for detecting WH presence and growth and thus for better control measures. However, drones are more vulnerable to weather conditions than satellites: if climatic conditions are unfavourable, a drone cannot manoeuvre appropriately or gather reliable data or imagery. Drones also face navigation concerns; a local government unit sometimes restricts their use during an ongoing military conflict, and drones are blocked from entering restricted zones such as military facilities conducting active training. For all these reasons, satellite imagery remains attractive for environmental applications. This paper proposes the use of transfer learning from very high-resolution multi-spectral drone data to lower-resolution satellite data. One possible solution to overcome the challenges of drone usage is to collect satellite data close in time with drone data and apply transfer learning between the drone and satellite domains. A standard transfer learning approach from one sensor to another is applied to generate large optical drone-like images from satellite SAR data.
When multispectral drone data are not available, the generated drone-like data can serve as alternative data to aid water hyacinth growth monitoring.
In this work we selected Patancheru Lake in Hyderabad, India as our study area; it has notable WH presence, largely due to pollution from waste water from a nearby industrial hub. Patancheru Lake (lat 17°31’19.42”N, long 78°15’50.39”E) is a peri-urban water body located about 31 km from the city center of Hyderabad on national highway (NH) 65. The lake receives anthropogenic pollution from the peri-urban settlements located in its catchment. Very high-resolution multispectral drone and spot-light single-channel ICEYE radar data have been collected every month starting from January 2021. The multi-spectral data were collected by flying an unmanned aerial vehicle (UAV), a quad-copter drone (Model V, CBAI Technologies, Hyderabad, India), equipped with a MicaSense RedEdge multi-spectral camera. The spectral data were collected at an altitude of 80 meters at a speed of 6.5 m/s and a ground sampling distance of 5.56 cm, at a capture rate of 1 capture per second for all bands, stored as 12-bit RAW files. Spectral bands include blue (475 nm), green (560 nm), red (668 nm), red edge (717 nm) and near-infrared (842 nm).
Optical drone multispectral remote sensing data offer a very high spatial resolution of up to 5 cm, with more detailed patterns of water hyacinth growth. ICEYE synthetic aperture radar (SAR) satellite images offer a high resolution of up to 25 cm. However, due to the imaging mechanism of SAR and speckle noise, it is difficult for untrained people to recognize water hyacinth growth patterns visually in SAR images. Inspired by the image-to-image translation performance of Generative Adversarial Networks (GANs) [6], a transfer learning approach from ICEYE to drone data is applied to generate large optical drone-like images. At the moment, we are testing image-to-image translation between multispectral drone and ICEYE satellite data; this will also be tested between drone data and freely available medium-resolution satellite SAR data from Sentinel-1. The main steps of SAR-to-optical image translation are as follows. First, the 5 cm resolution multispectral drone data are upscaled to the 50 cm resolution of ICEYE. We also transform the 5 cm ground truth labels to obtain an upscaled drone dataset. Second, the large drone and satellite SAR images are split into small patches. Third, CycleGAN [7] is used to translate the SAR patches into optical image patches; the translation converts single-channel ICEYE SAR images into multispectral drone-like images. The main reason for selecting CycleGAN among several deep learning techniques is that it preserves structural information. Following this approach, one can use the upscaled drone dataset to train a water hyacinth detection model that can be used directly on ICEYE images. Finally, the optical image patches are stitched together to generate the large optical image. The experiments are in progress, and the results will be presented at the conference and reported in the full paper.
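The patch splitting and stitching steps of the pipeline above can be sketched as follows. This is a simplified version that assumes non-overlapping patches and an image size divisible by the patch size; the CycleGAN translation applied between the two steps is omitted.

```python
import numpy as np

def split_patches(img, size):
    """Split an image of shape (H, W, C) into non-overlapping size x size
    patches, returned in row-major order (incomplete borders are dropped)."""
    n_rows, n_cols = img.shape[0] // size, img.shape[1] // size
    return [img[i * size:(i + 1) * size, j * size:(j + 1) * size]
            for i in range(n_rows) for j in range(n_cols)]

def stitch_patches(patches, n_rows, n_cols):
    """Reassemble row-major patches into one large image."""
    rows = [np.concatenate(patches[r * n_cols:(r + 1) * n_cols], axis=1)
            for r in range(n_rows)]
    return np.concatenate(rows, axis=0)

# Round-trip check on a tiny dummy image: split into 4x4 patches and stitch back
img = np.arange(64).reshape(8, 8, 1)
patches = split_patches(img, 4)
restored = stitch_patches(patches, 2, 2)
```

In the real pipeline each SAR patch would pass through the trained generator before stitching, so the output is a large drone-like optical mosaic.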
References
[1] C. S. Elton, The ecology of invasions by animals and plants. Springer Nature, 2020.
[2] J. H. Bock, “Productivity of the water hyacinth eichhornia crassipes (mart.) solms,” Ecology, vol. 50, no. 3, pp. 460–464, 1969.
[3] A. Datta, S. Maharaj, G. N. Prabhu, D. Bhowmik, A. Marino, V. Akbari, S. Rupavatharam, J. A. R. Sujeetha, G. G. Anantrao, V. K. Poduvattil et al., “Monitoring the spread of water hyacinth (pontederia crassipes): challenges and future developments,” Frontiers in Ecology and Evolution, vol. 9, 2021.
[4] A. Villamagna and B. Murphy, “Ecological and socio-economic impacts of invasive water hyacinth (eichhornia crassipes): a review,” Freshwater Biology, vol. 55, no. 2, pp. 282–298, 2010.
[5] K. Kipng’eno, “Monitoring the spread of water hyacinth using satellite imagery: a case study of Lake Victoria,” Ph.D. dissertation, University of Nairobi, 2019.
[6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” in Proc. Adv. Neural Inf. Process. Syst., vol. 3, 2014, pp. 2672–2680.
[7] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proc. IEEE Int. Conf. Comput. Vis., Oct. 2017, pp. 2242–2251.
Products relevant to wetland monitoring from the ESA Scout-2 HydroGNSS mission:
Scout missions are a new Element in ESA’s FutureEO Programme, demonstrating science from small satellites. The aim is to tap into the New Space approach, targeting three years from kick-off to launch within a budget of €30m, including launch and commissioning of the space and ground segments. The Scout missions are non-commercial and scientific in nature; data will be made available freely using a data service delivery approach. HydroGNSS has been selected as the second ESA Scout Earth Observation mission, primed by Surrey Satellite Technology Ltd with support from a team of scientific institutions. Implementation kicked off in Q4 2021, and launch is planned for H2 2024.
The microsatellite uses established and new GNSS-Reflectometry (GNSS-R) techniques to measure four land-based hydrological climate variables: soil moisture, freeze/thaw, inundation and biomass. The initial project is for a single satellite in a near-polar sun-synchronous orbit at 550 km altitude that will approach global coverage monthly, but an option to add a second satellite has been proposed that would halve the time to cover the globe, and eventually a future constellation could be affordably deployed to achieve daily revisits.
GNSS-R is a relatively novel technique that uses the navigation signals transmitted by the Global Navigation Satellite System (GNSS) for remote sensing purposes: after these signals bounce off the Earth’s surface, they are collected by dedicated GNSS-R receivers and analysed to extract the geophysical information carried by the reflected ‘echo’. Earlier GNSS-R spaceborne missions such as UK TDS-1 and NASA CyGNSS have provided quality data that prove the sensitivity of these L-band (~19 cm wavelength) reflected signals to surface inundation [e.g., 1], even when the flooded areas lie under thick vegetation canopies [e.g., 2]. The forward-scattering geometry particular to GNSS-R is especially sensitive to the highly reflective, smooth surfaces of inundated terrain, and it complements other techniques at higher frequencies and/or other geometries, such as the back-scattering geometry characteristic of other radar-based sensors (SARs, scatterometers, radar altimeters). GNSS reflected signals have also proven sensitive to above-ground biomass (AGB), as the attenuation induced by vegetation modulates the measured reflectivity [e.g., 3].
Furthermore, HydroGNSS will operate, for the first time, a receiver channel that outputs very high sampling rate complex (phase and amplitude) data corresponding to the reflected electromagnetic phasor. This is called the ‘coherent channel’.
Two of the level-2 baseline products to be provided by HydroGNSS are relevant to monitoring wetlands. Firstly, the surface inundation product will be capable of identifying flooded areas even under thick vegetation canopies, and with the increased resolution provided by the coherent channel. Secondly, the forest AGB product could be of interest for monitoring low-latitude and forested wetlands. Beyond the baseline products, the coherent channel opens the possibility of investigating other demonstration products, such as precise altimetry across some flooded areas, using the range information embedded in the electromagnetic field of the reflected signals. As will be described in this presentation, the HydroGNSS mission will enable the study of the potential application of these combined parameters for wetland monitoring.
[1] Nghiem et al., 2016, doi:10.1002/2016EA000194
[2] Rodriguez-Alvarez et al., 2019, doi:10.3390/rs11091053
[3] Santi et al., 2020, doi:10.1109/JSTARS.2020.2982993
Wetlands provide a range of benefits to humankind on different scales, from carbon sequestration and biodiversity conservation to water and food provision. Wetlands cover roughly 7% of the African continent and, due to their fertile soils and higher water availability, they are being increasingly developed for agricultural use to counteract dependency on global food markets and reduce hunger and poverty. Yet, agricultural wetland development is among the main drivers of wetland degradation in Africa. For the sustainable use of wetlands, decision-makers and managing institutions need quantified information on wetland ecosystems to provide the necessary knowledge basis for their management, which is particularly incomplete for the African continent. Remote sensing provides significant potential for wetland mapping, inventorying and monitoring. Previous work employing Earth observation for wetland management in the context of agricultural development often revolved around specific land uses and usually required ancillary data on the crops, which can be difficult to obtain, in particular where small-scale farmers’ fields are fragmented, used inconsistently and where intercropping is practiced. In contrast, the Wetland Use Intensity (WUI) indicator is not specific to a particular crop and requires little ancillary data. The indicator is based on the Mean Absolute Spectral Dynamics (MASD), a cumulative measure of reflectance change across a time series of satellite images. The WUI depicts the intensity of wetland management practices like harvesting, intercropping, fertilizer application, flooding, or burning that are reflected by the land cover.
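As an illustration of the MASD idea, the following minimal sketch computes cumulative reflectance change over a composite time series. The array shapes, the band-averaging, and the simple summation of absolute differences between consecutive composites are assumptions for illustration, not the exact operational definition used in the study:

```python
import numpy as np

def masd(timeseries):
    """Mean Absolute Spectral Dynamics: cumulative per-pixel reflectance
    change across a time series of composites.

    timeseries: array of shape (t, bands, rows, cols) with reflectance values.
    Returns an array of shape (rows, cols).
    """
    # Absolute change between consecutive composites, averaged over bands,
    # then accumulated over the time dimension.
    diffs = np.abs(np.diff(timeseries, axis=0))  # (t-1, bands, rows, cols)
    return diffs.mean(axis=1).sum(axis=0)

# Six bi-monthly composites, 4 bands, 2x2 pixels of synthetic reflectance.
rng = np.random.default_rng(0)
ts = rng.uniform(0.0, 0.5, size=(6, 4, 2, 2))
print(masd(ts).shape)  # (2, 2)
```

Pixels under intensive management (frequent harvesting, flooding, burning) accumulate larger spectral change between composites and therefore higher MASD values.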
We therefore implemented an automated approach for WUI calculation and developed a method for quantitatively comparing WUI values to a wetland ecosystem integrity scoring system. We leveraged cloud-computing technology through the Google Earth Engine (GEE) platform by using a Sentinel-2 surface reflectance image collection and by adapting the S2cloudless algorithm to the GEE JavaScript API. We established a regular time series over a pseudo-year 2020 with bi-monthly median mosaics from July 2019 to June 2021 as the basis of the MASD calculation. In order to compare it to a meaningful ground reference, we selected two datasets of 250x250 m field plots across Rwanda assessed according to the WET-Health approach, in 2013 within the GlobE project* and in 2018 within the DeMo-Wetlands** project. WET-Health is an approach for rapid wetland condition assessment, which accounts for wetland complexity by using a scoring system for wetland hydrology, geomorphology and vegetation. The datasets were tested for spectral and geometric comparability to the observation period. A surface water dynamics layer was derived from individual flood layers based on thresholding of Sentinel-1 imagery, and the resulting flooding regime was assigned to each plot. We then evaluated the correlation between WUI values and WET-Health scores, taking into account the flooding regime.
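The flooding-regime derivation from per-date Sentinel-1 flood layers can be sketched as follows. The fixed -18 dB VV threshold and the synthetic backscatter values are illustrative assumptions, not the calibrated thresholds used in the project:

```python
import numpy as np

def flood_mask(vv_db, threshold_db=-18.0):
    """Classify pixels as water where Sentinel-1 VV backscatter (in dB)
    falls below a threshold: smooth open water reflects specularly and
    returns little energy to the sensor."""
    return vv_db < threshold_db

def flooding_regime(masks):
    """Fraction of dates on which each pixel is flooded, from a stack
    of per-date flood masks."""
    return np.mean(masks, axis=0)

# Three acquisition dates over a 2x2 pixel area (VV backscatter in dB).
stack = np.array([
    [[-21.0, -9.0], [-20.0, -12.0]],
    [[-22.0, -8.0], [-15.0, -11.0]],
    [[-20.5, -9.5], [-19.0, -10.0]],
])
regime = flooding_regime(flood_mask(stack))
print(regime)  # pixel (0,0) is permanently flooded, (1,0) seasonally
```

Each plot is then assigned the regime value (permanent, seasonal, or never flooded) of its pixels before correlating WUI with WET-Health scores.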
The results suggest that the adapted WUI indicator is informative and applicable for wetland management. The possibility to measure use intensity as a proxy for ecosystem condition is useful to stakeholders in wetland management, both from the agriculture and from the conservation sides.
* Funded by the German Federal Ministry of Education and Research, FKZ 031A250 A-H
** Funded by the German Federal Ministry for Economic Affairs and Energy, FKZ 50EE1537
Wetlands are biodiversity hotspots that offer several ecosystem services necessary for human well-being. While playing a key role in global climate regulation, wetlands are sensitive to anthropogenic disturbances and climate change and, therefore, are currently endangered. This raises the need for accurate and up-to-date information on the spatial and temporal variability of wetlands and the climate and anthropogenic pressures that these ecosystems are facing.
Located in the central portion of South America, the Pantanal biome, distributed over three countries (Brazil, Bolivia, and Paraguay), is the largest tropical wetland in the world. With more than 84% of its territory preserved, the Brazilian portion of the Pantanal biome is also the wetland with the largest area of natural vegetation in the world. It is a seasonally flooded wetland composed of several interconnected ecosystems shaped by natural and anthropogenic factors. Since 2019, this region has been suffering a prolonged drought that was exacerbated in 2020. This drought was caused by the reduced transport of warm and humid summer air from Amazonia into the Pantanal, which led to the lowest rainfall in the summers of 2019 and 2020 for the period between 1982 and 2020. Severe and prolonged drought events are becoming more frequent in the Pantanal. Such a scenario favored the occurrence of natural disasters in the Pantanal and led to the 2020 Pantanal fire crisis. During this crisis, remote sensing-based burned area estimates showed that up to one-third of the Pantanal was burned. However, the extent of burned area during this crisis varied widely depending on the burned area product. While global burned area products have been successfully generated at coarser spatial resolution (250-500 meters) using time series of MODIS sensor observations (e.g., the MCD64A1 collection 6.0 and Fire_cci version 5.1 products), the use of medium spatial resolution images, especially Landsat-derived images (30 meters), to automatically map burned area (e.g., the MapBiomas Fire collection 1.0 and GABAM products) remains challenging due to the smaller number of observations available (up to two per month, i.e., a 16-day acquisition frequency). Therefore, MapBiomas Fire and GABAM burned area products usually underestimate burned area when compared to MODIS-based products.
This underestimation by Landsat-based products raises the possibility of using the medium spatial resolution images from the Copernicus Sentinel-2 missions to develop a more accurate burned area product, combining a higher spatial resolution than MODIS-based products with a higher number of observations than Landsat-based products (up to six per month when using Sentinel-2A and 2B).
In this context, the objective of this paper was to assess the use of Sentinel-2 images to map burned areas that occurred in the Brazilian portion of the Pantanal biome in 2020. For this purpose, we applied the Linear Spectral Mixing Model (LSMM) to Sentinel-2 MultiSpectral Instrument (MSI) sensor images to generate the vegetation, soil, and shade fraction images. Given that the characteristics of the shade fraction image enhance burned areas, we can use it as a burned index. We obtained the Sentinel-2 monthly composites for 2020 from the Google Earth Engine (GEE) platform. For the analysis, we built the composites corresponding to the endmembers with the highest fraction values during the month for the year 2020 for the study area, where the greatest values of shade highlight the areas occupied by water bodies and burned areas during the month. We were able to automatically map the burned areas in the study area, especially during the dry season of the Pantanal biome (April to August). Our results show an estimate of 53,510 km2 burned in the Brazilian portion of the Pantanal during the year 2020, which severely affected the flora and fauna, causing biodiversity loss in this biome. The burned area estimate derived from Sentinel-2 images for 2020 in the Brazilian Pantanal biome was higher than those derived from MODIS-based and Landsat-based burned area products. While MCD64A1 estimated 35,837 km2 burned in the Pantanal in 2020, MapBiomas Fire and GABAM estimated, respectively, 23,372 km2 and 14,307 km2 burned. The MCD64A1 estimate was 33% lower than our results, while the MapBiomas and GABAM estimates were, respectively, 56% and 73% lower than ours. The Fire_cci product is not yet available for 2020; however, an estimate close to that of MCD64A1 (35,837 km2) is expected, since the annual average burned area estimated by Fire_cci in the Pantanal between 2002-2019 (8,642 km2) was less than 1% higher than the one estimated by MCD64A1.
We can conclude that the proposed approach based on Sentinel-2 images presents advantages when compared to the current burned area products available for the Pantanal and, therefore, can potentially refine burned area estimation on a regional scale. It can also be used as a reference for calibrating global burned area products. This calibration is of utmost importance because MODIS and Landsat images are available for a longer time series (since 2000 and 1985, respectively) than Sentinel-2 (since 2015); therefore, more accurate long-term spatial and temporal patterns of burned area can be obtained. Our results have the potential to improve the estimate of trace gases and aerosols associated with biomass burning, as global biomass burning inventories are widely known for having biases on a regional scale.
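A minimal sketch of the LSMM unmixing step follows. The endmember spectra are hypothetical (real endmembers would be derived from the Sentinel-2 MSI scenes themselves), and the unconstrained least-squares solve with clipping is a stand-in for a properly constrained solver:

```python
import numpy as np

# Hypothetical endmember spectra (rows: red/NIR/SWIR bands;
# columns: vegetation, soil, shade).
E = np.array([
    [0.05, 0.25, 0.02],
    [0.45, 0.30, 0.03],
    [0.20, 0.35, 0.02],
])

def unmix(pixel):
    """Linear Spectral Mixing Model: solve E @ f = pixel for the
    vegetation/soil/shade fractions, clip negatives and renormalise."""
    f, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    f = np.clip(f, 0.0, None)
    return f / f.sum() if f.sum() > 0 else f

# A dark pixel (low reflectance in every band), typical of burned
# surfaces or water, is dominated by the shade endmember.
dark = np.array([0.03, 0.05, 0.04])
print(unmix(dark))  # shade (last) fraction is the largest
```

This is why the shade fraction image can serve as a burned index: burned surfaces, like water, appear dark across bands and load onto the shade endmember.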
Recent mangrove preservation efforts have set ambitious targets to conserve 30% of the world’s mangroves within the next decade. However, these efforts often lack the monitoring capacity to identify the success or failure of protected areas (PAs) in real time, creating a gap between initial targets and the capacity to enforce them at the local scale. Here, we present the first global real-time monitoring platform for protected mangrove regions. We map past and current threats to mangroves within PAs at 30-m resolution, enabling local to national decision-makers to identify hotspot areas for mobilization and to enact policy change to prevent loss. In documenting loss and threats across all mangrove protected areas globally, we create a new standard of transparency and accountability as we move towards tracking progress on national-to-global conservation goals. We provide real-time knowledge on the state of conservation efforts through publicly accessible and understandable tools, bridging the gap between past studies of mangrove loss drivers and actionable decision-making capacity on the ground. Broadly, we aim to prevent a system of “paper parks” in mangrove PAs, in which a region is protected by law but the enforcement and monitoring tools to ensure its success are lacking.
Through our remote sensing-based analysis of mangrove loss drivers within global protected areas, we find that conservation efforts have been largely successful in preventing human-driven loss over the last two decades. While approximately 60% of global mangrove losses from 2000-2016 resulted from anthropogenic threats such as conversion to aquaculture and agriculture, settlement, and clear-cutting, only 25% of losses within protected regions resulted from these drivers. Worldwide, three times as many PAs experienced natural loss as experienced human-driven loss. Protected areas across Southeast Asia comprise the vast majority of these anthropogenic losses, with conversions to commodities comprising over 90% of anthropogenic PA losses throughout the region. While future conservation efforts must focus on finding and mitigating these hotspots of loss, we suggest that plans must primarily consider rehabilitation aimed at mitigating damage from climatic stressors such as erosion and extreme weather events. Our global mangrove PA monitoring platform enables decision-makers to quickly identify these hotspots of human-driven loss, as well as quantify the PAs most vulnerable to future damage from these climatic threats.
Here, we present a model for transitioning quantitative analysis for SDG 6 (Clean Water and Sanitation) and SDG 15 (Life on Land) into active decision-making tools that improve coastal conservation outcomes. Our PA monitoring platform enables users to efficiently gain a general understanding of the overall success of their PAs throughout the course of PA implementation. These tools enable scientific results on each SDG 6 and 15 indicator to be transferred into on-the-ground plans for targeting certain hotspot regions over others. Future efforts may also seek to integrate human wellbeing-oriented SDGs such as SDG 1 (No Poverty), SDG 8 (Decent Work and Economic Growth), and SDG 11 (Sustainable Cities and Communities) into the PA effectiveness measures, providing a more holistic tool for policymakers to balance both human and natural needs in conservation planning efforts. Ultimately, we seek to pioneer new strategies for transitioning remote sensing insights into scalable platforms to ensure high levels of communication and transparency across all scales of conservation management.
Land cover change detection is challenging, as it can be caused by a large variety of processes, such as urbanisation, forest regrowth or land abandonment. It may also be confounded with spurious change, such as interannual variability due to droughts or fires (Gómez et al. 2016). There is also a mismatch between land cover and the reflectance that is captured by optical satellite sensors. Algorithms for land cover change detection often overestimate change because local disturbances that do not constitute a permanent land cover change are also captured by the algorithms.
Even more challenging is detecting gradual land cover change. This is possible using land cover fraction or probability maps, which estimate the proportion or likelihood of each land cover class per pixel and can therefore track both abrupt and gradual changes over time. The challenge comes from the uncertainty of these estimates, which are usually obtained from a regressor, as they can differ substantially between years and seasons. This often results in an overestimation of land cover change.
A potential solution to this problem is to use a change detection algorithm that uses time series to produce long-term trend information, such as BFAST Lite (Masiliūnas et al. 2021a). However, these algorithms traditionally use a vegetation index as input, which limits them to detecting change between vegetated and non-vegetated land cover classes. To tackle this limitation, we propose a combination of a change detection algorithm with a land cover fraction time series used as input. Ideally this time series is dense, to make use of the algorithm's capability of modelling seasonal changes and to tolerate some noise from fraction uncertainty in the time series. The output model is then capable of capturing both gradual change, by tracking trends of each land cover class, and abrupt change, when there is a sudden increase or decrease in a given land cover class fraction.
In this study, we implement such a workflow by using the full archive of Landsat 8 Surface Reflectance as an input to a Random Forest regression model, which predicts land cover fractions for every land cover class (Masiliūnas et al. 2021b) for every time step (every 16 days). The resulting land cover fraction time series is then used as an input into the BFAST Lite algorithm. If there is a significant jump in the time series of land cover fractions, the algorithm detects it and separates the time series into multiple segments. If there is no significant jump in a segment, the fitted model smooths out the observations to minimise the effect of noise and interannual variability. The result is a consistent, dense time series of each land cover fraction. Finally, the fractions are normalised so that they all sum to 100%.
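A toy stand-in for this segment-and-smooth behaviour can illustrate the idea (this is not the actual BFAST Lite algorithm, which tests for structural change in a harmonic regression model; the exhaustive single-break search and the synthetic fraction series are assumptions for illustration):

```python
import numpy as np

def largest_break(series, min_seg=3):
    """Toy single-break detector: find the split point that minimises
    the pooled residual sum of squares of two segment means."""
    best, best_sse = None, np.inf
    for b in range(min_seg, len(series) - min_seg):
        left, right = series[:b], series[b:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best, best_sse = b, sse
    return best

def normalise(fractions):
    """Rescale per-class fractions at each time step to sum to 100%."""
    return 100.0 * fractions / fractions.sum(axis=0, keepdims=True)

# Tree fraction drops abruptly at t=10 (e.g. a clearing event);
# grass takes over, but the raw fractions do not quite sum to 1.
tree = np.r_[np.full(10, 0.8), np.full(10, 0.2)] + 0.01
grass = 1.05 - tree
f = np.vstack([tree, grass])
print(largest_break(tree))              # detected break index
print(normalise(f).sum(axis=0)[0])      # fractions now sum to 100
```

Fitting segment means on either side of the detected break smooths interannual noise within segments while preserving the abrupt transition, mirroring the behaviour described above.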
The model output is validated using a land cover change dataset consisting of over 60,000 100x100 m areas with annotated land cover fractions that was collected for the creation of the Copernicus Global Land Service Land Cover 100 m product (Tsendbazar et al. 2020). The proposed approach is compared to the traditional way of using change detection algorithms with a vegetation index as input, as well as to using the regressor output directly, without a change detection algorithm.
The proposed approach leads to the creation of a set of global land cover fraction maps that would be updated every 16 days and would be internally consistent, with less overestimation of land cover change, and with smooth transitions at times of little change and sharp transitions in times of abrupt change. Such maps would be very valuable for climate change modelling, forest disturbance and degradation tracking, tracking the effect of disasters such as typhoons and forest fires over time, and would help with national land cover management efforts globally.
References:
Gómez, C., White, J. C., & Wulder, M. A. (2016). Optical remotely sensed time series data for land cover classification: A review. ISPRS Journal of Photogrammetry and Remote Sensing, 116, 55–72. https://doi.org/10.1016/j.isprsjprs.2016.03.008
Masiliūnas, D., Tsendbazar, N.-E., Herold, M., & Verbesselt, J. (2021a). BFAST Lite: A Lightweight Break Detection Method for Time Series Analysis. Remote Sensing, 13(16), 3308. https://doi.org/10.3390/rs13163308
Masiliūnas, D., Tsendbazar, N.-E., Herold, M., Lesiv, M., Buchhorn, M., & Verbesselt, J. (2021b). Global land characterisation using land cover fractions at 100 m resolution. Remote Sensing of Environment, 259, 112409. https://doi.org/10.1016/j.rse.2021.112409
Tsendbazar, N.-E., Tarko, A., Li, L., Herold, M., Lesiv, M., Fritz, S., & Maus, V. (2020). Copernicus Global Land Service: Land Cover 100m: version 3 Globe 2015-2019: Validation Report. Zenodo. https://doi.org/10.5281/zenodo.3938974
Species distribution models (SDMs) typically use land use/land cover (LULC) variables together with other predictor variables to project and map species distributions at the landscape level. While LULC may not change significantly from year to year, cumulative change over time may substantially impact species interactions, migration and distribution at a landscape scale. Hence, this study sought to explore the feasibility of remotely sensed data to map the change in the spatial distribution of the tomato leafminer, Tuta absoluta (Meyrick) (Lepidoptera: Gelechiidae), in Kenya's major tomato production counties. Firstly, the study classified a time series of LULC from 2005 to 2020 at 5-year intervals in Kenya's major tomato production counties with Google Earth Engine (GEE), using a Random Forest algorithm with an overall accuracy greater than 0.9 and a Kappa of 0.92. The LULC pattern of the studied area was dominated by grass cover, covering about 50% of the total area, followed by cropland (38%). Secondly, using the Maxent machine learning algorithm, the classified LULC map was combined with non-correlated bioclimatic variables (of the 19 available) and T. absoluta occurrence data to map the suitability area of T. absoluta and classify it as Very low (0-0.2), Low (0.2-0.4), Moderate (0.4-0.6), High (0.6-0.8), or Very high (0.8-1). Finally, the generated maps were subjected to simple statistical analysis to determine the trend in T. absoluta infestation classes. The results suggest both increases and decreases across infestation classes: specifically, more than a 3% increase in area in some classes from 2015 to 2020, with a loss of more than 4% in others during the same period. The findings will improve the utilisation and application of remote sensing data in ecology to create accurate decision support maps that assist agricultural practitioners in targeting appropriate pest infestation areas for the deployment of control strategies.
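The five suitability classes map directly onto a binning of the continuous Maxent output; a minimal sketch (the handling of the exact class boundaries is an assumption, as the stated ranges share their endpoints):

```python
import numpy as np

# Bin continuous Maxent suitability scores (0-1) into the five classes.
EDGES = [0.2, 0.4, 0.6, 0.8]
LABELS = ["Very low", "Low", "Moderate", "High", "Very high"]

def classify(suitability):
    """Return the class index (0-4) for each suitability score."""
    return np.digitize(suitability, EDGES)

scores = np.array([0.05, 0.35, 0.55, 0.75, 0.95])
print([LABELS[i] for i in classify(scores)])
# ['Very low', 'Low', 'Moderate', 'High', 'Very high']
```

Per-class area trends between map dates then reduce to counting pixels per class index in each epoch and differencing the totals.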
SAR land cover mapping experience in the HR Landcover CCI+ ECV project
A. Sorriso, D. Marzi, P. Gamba
Because of their availability in all weather conditions and their ability to capture the geometric and water-related properties of the Earth's surface, Synthetic Aperture Radar (SAR) time series are increasingly used for land cover mapping and environmental monitoring. Specifically, the huge dataset provided by the Sentinel-1 constellation is particularly useful for high resolution mapping anywhere in the world. In recent years, a wide variety of SAR applications have benefited from the use of large stacks of Sentinel-1 products, and associated processing and analysis methods have multiplied in the field of remote sensing. The aim of this work is to describe the final version of the processing chain designed, tested and implemented in an operational system for high resolution, global land cover mapping in the framework of the HR Land Cover CCI+ ECV project. The processing chain was used for two specific tasks:
a) the extraction of a so-called static map for the year 2019;
b) the extraction, whenever possible because of the availability of SAR data in the past, of additional historical land cover maps every five years from 2015 backwards.
For the first task, a time series of Sentinel-1 images was considered as input, while for the second task, data from the ASAR sensor on board the ENVISAT satellite, or from the ERS-1 and -2 satellites, were considered, unfortunately with a considerably more limited time series and geographical coverage.
The processing chain for the static map, exploiting the high resolution of Sentinel-1 data, followed the structure highlighted in the figure below. The processing chain for the historical map instead was based on a Random Forest (RF) classifier used to extract all the considered land cover classes.
Test results were obtained in three different areas, two in the Amazonian Forest and one in Siberia, and were validated with respect to ground truth points manually extracted by the project team.
The complete chain includes five steps:
1. SAR pre-processing, derived from the standard SNAP chain, to radiometrically and geometrically correct the SAR sequence and to co-register data sets that were not perfectly aligned.
2. Multitemporal despeckling, applied according to the approach described in [1], separately to four temporal segments of the yearly input sequence. The rationale for this choice is to reduce the overall data volume and make the procedure less computationally complex, while retaining the possibility to exploit the temporal trajectory of land cover samples, which is particularly important for vegetation-related classes. As output of this step, the original sequence is reduced to four super-images (temporal means), extracted as an intermediate product of the despeckling method.
3. SAR feature extraction, which aims at adding spatial features to the already extracted temporal features. In this case, as mentioned in [2], simple statistical features corresponding to the neighborhood of each pixel have been considered.
4. Classification, which is in turn subdivided into three parts:
• an unsupervised water extraction routine as presented in [3], applying a K-means procedure to discriminate areas with low minimum and average backscatter intensities, and high temporal variance along the year from other potential areas of interest for the water class;
• an unsupervised urban extent extraction approach based on the extraction of a single super-image for the whole year to which the algorithm described in [4] is applied;
• a supervised classification implemented by means of an RF classifier, trained with samples manually extracted by the team on the basis of previously existing, coarser land cover maps.
5. A merging module, aimed at composing the final land cover map by spatially combining the three maps extracted in the previous step.
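The unsupervised water extraction in step 4 can be sketched with a minimal two-cluster K-means on per-pixel temporal statistics; the synthetic backscatter values, the chosen features and the deterministic seeding are assumptions for illustration, not the operational routine of [3]:

```python
import numpy as np

def two_means(X, iters=20):
    """Minimal K-means (k=2), seeded deterministically with the pixels of
    lowest and highest mean backscatter (feature column 1)."""
    centres = X[[X[:, 1].argmin(), X[:, 1].argmax()]].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centres[k] = X[labels == k].mean(axis=0)
    return labels, centres

# Synthetic yearly Sentinel-1 stack (dB): 12 dates x 100 pixels.
# The first 50 pixels behave like open water (low backscatter all year).
rng = np.random.default_rng(1)
water = rng.normal(-22.0, 1.0, size=(12, 50))
land = rng.normal(-8.0, 1.5, size=(12, 50))
stack = np.concatenate([water, land], axis=1)

# Per-pixel features: temporal minimum, mean and variance of backscatter.
feats = np.stack([stack.min(axis=0), stack.mean(axis=0), stack.var(axis=0)], axis=1)
labels, centres = two_means(feats)
# Cluster 0 was seeded with the darkest pixel, so it is the water candidate.
print((labels[:50] == 0).mean(), (labels[50:] == 1).mean())
```

Clustering on temporal statistics rather than single-date backscatter is what allows the routine to separate permanently low-backscatter water from transiently dark land pixels.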
The experimental tests were performed on three areas, each the size of a Sentinel-2 tile, in different regions of the Earth, in order to check the performance of the approach in very different land cover environments. The overall accuracy values obtained for the static maps are 62%, 68% and 54% for Siberia and the two Amazonian tiles, respectively.
With respect to the historical maps, the classification step was performed with the RF classifier only, since the unsupervised water extraction and urban extent techniques do not perform well when the number of images is low, which is increasingly the case moving backwards from 2015. For instance, for the Siberia test site the historical maps were computed using SAR data only in the following years: 2015, 2010, 2005 and 1995. Moreover, in some of these years the coverage of the tile was not complete.
In conclusion, the described processing chain shows consistent performance for land cover classification on a global scale, although the classification accuracies remain modest. Still, the unsupervised urban and water detectors are instrumental in achieving better classification performance, since outliers and misclassification errors in these classes are strongly reduced with respect to the supervised classification chain alone.
These numbers confirm that SAR time series may contribute to classify specific classes that are not detected in an equally easy manner using multispectral data. For most of the other classes, instead, fusion of SAR and multispectral data is the key to achieve acceptable classification results.
The pipeline described in this work was chosen for the classification step due to its significant generalization ability, ease of implementation, and reduced need for training samples compared with more accurate but more complex and computationally demanding deep learning-based classifiers [5].
References
[1] W. Zhao, C.-A. Deledalle, L. Denis, H. Maître, J.-M. Nicolas, and F. Tupin, “Ratio-based multitemporal SAR images denoising: RABASAR”, IEEE Transactions on Geoscience and Remote Sensing, vol. 57, no. 6, pp. 3552–3565, Jun. 2019.
[2] A. Sorriso, D. Marzi, P. Gamba, A General Land Cover Classification Framework for Sentinel-1 SAR Data, Proc. of IEEE, the online Forum on Research and Technologies for Society and Industry Innovation for a smart world - IEEE RTSI 2021, September 6-9, 2021, Naples, unformatted CD-ROM.
[3] D. Marzi and P. Gamba, "Inland Water Body Mapping Using Multi-temporal Sentinel-1 SAR Data," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, doi: 10.1109/JSTARS.2021.3127748.
[4] G. Lisini, A. Salentinig, P. Du, P. Gamba, “SAR-based urban extents extraction: from ENVISAT to Sentinel-1”, IEEE J. of Selected Topics in Applied Earth Observation and Remote Sensing, vol. 11, no. 8, pp. 2683-2691, Aug. 2018.
[5] N. Yokoya, et al., “Open data for global multimodal land use classification: Outcome of the 2017 IEEE GRSS data fusion contest”, IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, pp. 1-15.
Savannas, characterized by the co-dominance of trees, shrubs, and grasses, cover approximately 20% of Earth's land surface. They are globally important ecosystems for biodiversity and the livelihoods of millions of people. The Greater Maasai Mara Ecosystem (GMME) in Kenya is an iconic savanna ecosystem of high importance as natural and cultural heritage, notably by including the largest remaining seasonal migration of African ungulates and the semi-nomadic pastoralist Maasai culture. Comprehensive mapping of vegetation distribution and dynamics in the GMME is important for understanding ecosystem changes across time and space, since recent reports suggest dramatic declines in wildlife populations alongside troubling reports of grassland conversion to cropland and habitat fragmentation due to increasing small-holder fencing. Here, we present the first comprehensive vegetation map of the GMME at high (10-m) spatial resolution. The map consists of nine key vegetation cover types (VCTs), which were derived in a two-step process integrating data from high-resolution WorldView-3 images (1.2-m) and Sentinel-2 images (10-m) using a deep-learning workflow. We evaluate the role of anthropogenic, topographic, and climatic factors in affecting the fractional cover of the identified VCTs in 2017 and their MODIS-derived browning/greening rates in the preceding 17 years at 250-m resolution. Results show that most VCTs exhibited a preceding greening trend in protected land. In contrast, semi- and unprotected land showed a general greening trend in the woody-dominated cover types, while grass-dominated cover types exhibited browning trends. These results suggest that woody vegetation densification may be happening across much of the GMME, alongside vegetation declines within the non-woody covers in the semi- and unprotected lands.
Greening and potential woody densification in GMME is positively correlated with mean annual precipitation and negatively correlated with anthropogenic pressure. Increasing woody densification across the entire GMME in the future would replace high-quality grass cover and pose a risk to the maintenance of the region's rich savanna megafauna, thus pointing to a need for further investigation using alternative data sources. The increasing availability of high-resolution remote sensing and efficient approaches for vegetation mapping will play a crucial role in monitoring conservation effectiveness as well as ecosystem dynamics due to pressures such as climate change.
Owing to continued interest in and need for land cover monitoring, global land cover (GLC) mapping efforts have seen accelerated progress over the last three decades, since the first satellite-based GLC map was produced in 1994. Recent advances in satellite data acquisition and processing capabilities have led to the release of GLC maps at higher resolution (10m) based on Sentinel data. These include the FROM-GLC10 map for 2017 based on Sentinel-2 imagery by Tsinghua University in China (Gong et al. 2019), the ESRI 2020 Land Cover map based on Sentinel-2 imagery (ESRI 2021), and the ESA WorldCover 2020 map based on Sentinel-1 and -2 imagery produced by the European Space Agency (Zanaga et al. 2021). The Dynamic World product based on Sentinel-2 data is also expected to be released by Google in the coming months.
However, the co-existence of multiple maps may confuse map users trying to choose a suitable GLC map for their application. A comparative analysis of contemporary 10m resolution GLC maps is therefore useful to inform users about the differences between existing products and their strengths and weaknesses. Map validation at 10m resolution has its own challenges owing to possible geolocation mismatch between the map product and the validation dataset. Validation datasets are often created by visual interpretation of very high-resolution imagery whose geolocation is itself not error-free. Since these errors can affect the accuracy assessment, geolocation errors should be taken into consideration when validating GLC maps, particularly at high resolution.
This study presents comparative accuracy assessments of existing 10m resolution GLC maps. After addressing the differences in the land cover class descriptions between the maps, the 10m resolution maps are assessed using the Copernicus Global Land Service Land Cover (CGLS-LC) validation data (Tsendbazar et al. 2021). This is a multi-purpose dataset suitable for validating maps with 10-100m resolution. It consists of about 21000 locations (primary sample units, PSUs), with each sample location containing 100 secondary sampling units (SSUs). The PSUs correspond to 100x100m areas, while the SSUs correspond to 10x10m areas; the SSUs can thus be used to validate GLC maps at 10m resolution. To assess the potential effect of geolocation errors, the 10m SSUs are investigated together with their neighbouring SSUs in the validation data. Depending on the heterogeneity of neighbouring SSUs in terms of land cover classes, different scenarios are used to calculate the accuracy of a 10m resolution GLC map. The same approach is applied to all existing 10m resolution GLC maps to allow comparison. The overall and class accuracies are calculated at both global and continental levels, and the validation methodology and the results obtained for the existing 10m resolution GLC maps are presented.
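As a toy illustration of the scenario idea, the sketch below contrasts a strict per-SSU accuracy with a geolocation-tolerant variant that credits a map pixel when it matches any of its neighbouring SSU labels. The sample records and the tolerance rule are hypothetical, not the CGLS-LC implementation:

```python
# Illustrative sketch (not the CGLS-LC implementation): strict accuracy
# requires the map pixel to match the centre SSU label; the tolerant
# scenario also credits a match with any SSU in the 3x3 neighbourhood,
# absorbing one-pixel geolocation shifts. Sample data are hypothetical.

def strict_accuracy(samples):
    """Map pixel must match the centre SSU label."""
    hits = sum(1 for s in samples if s["map"] == s["centre"])
    return hits / len(samples)

def tolerant_accuracy(samples):
    """Map pixel counts as correct if it matches any neighbouring SSU label."""
    hits = sum(1 for s in samples if s["map"] in s["neighbours"])
    return hits / len(samples)

samples = [
    {"map": "forest", "centre": "forest", "neighbours": {"forest", "shrub"}},
    {"map": "crop",   "centre": "grass",  "neighbours": {"grass", "crop"}},
    {"map": "water",  "centre": "bare",   "neighbours": {"bare"}},
]

print(strict_accuracy(samples))    # only the first sample matches strictly
print(tolerant_accuracy(samples))  # the shifted second sample is also credited
```

The spread between the two figures indicates how much of the apparent map error could be explained by geolocation mismatch rather than thematic misclassification.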
With current developments in generating GLC maps at 10m resolution, understanding the differences and strengths of existing maps is important for both map users and producers. Furthermore, challenges in validating high-resolution maps should also be addressed to support transparent and internationally accepted map quality assessments.
Index Terms—global land cover maps, map comparison, validation, and 10m resolution
References:
ESRI. (2021). Esri 10-Meter Land Cover. Retrieved July 5, 2021, from https://livingatlas.arcgis.com/landcover/
Gong, P., Liu, H., Zhang, M., Li, C., Wang, J., Huang, H., Clinton, N., Ji, L., Li, W., Bai, Y., Chen, B., Xu, B., Zhu, Z., Yuan, C., Ping Suen, H., Guo, J., Xu, N., Li, W., Zhao, Y., Yang, J., Yu, C., Wang, X., Fu, H., Yu, L., Dronova, I., Hui, F., Cheng, X., Shi, X., Xiao, F., Liu, Q., & Song, L. (2019). Stable classification with limited sample: transferring a 30-m resolution sample set collected in 2015 to mapping 10-m resolution global land cover in 2017. Science Bulletin, 64, 370-373
Tsendbazar, N., Herold, M., Li, L., Tarko, A., de Bruin, S., Masiliunas, D., Lesiv, M., Fritz, S., Buchhorn, M., Smets, B., Van De Kerchove, R., & Duerauer, M. (2021). Towards operational validation of annual global land cover maps. Remote Sensing of Environment, 266, 112686
Zanaga, D., Van De Kerchove, R., De Keersmaecker, W., Souverijns, N., Brockmann, C., Quast, R., Wevers, J., Grosu, A., Paccini, A., Vergnaud, S., Cartus, O., Santoro, M., Fritz, S., Georgieva, I., Lesiv, M., Carter, S., Herold, M., Li, L., Tsendbazar, N.-E., & Arino, O. (2021). ESA WorldCover 10 m 2020 v100. Zenodo
Monitoring changes in Earth's surface is important for understanding various processes in the Earth's ecosystems and for implementing appropriate measures targeting challenges such as climate change and sustainable development. Accordingly, with advancements in satellite-based land monitoring, much research has been done on land change monitoring from local to global scales, often targeting particular land cover types such as forest and water (Hansen et al. 2013; Pekel et al. 2016). Operational land cover monitoring efforts have also produced global land cover maps with regular updates, allowing changes in land cover to be monitored at a generic level. For example, the Copernicus Global Land Service (CGLS) Dynamic Land Cover project produced yearly global land cover maps from 2015 to 2019 at 100m resolution. Monitoring changes in land cover at a generic level can nevertheless be challenging: map uncertainty-related inconsistencies may be mistaken for change when comparing multitemporal land cover maps, and land cover transitions vary throughout the world.
This study aims to improve land cover change monitoring at a global scale in recent years. To do so, we targeted the following: (I) to develop an advanced time-series-based algorithm (BFAST-Lite) suitable for large-scale change monitoring; (II) to improve land cover change detection by combining BFAST-Lite and machine learning algorithms; and (III) to estimate the area of changes in global land cover in recent years.
We developed a new unsupervised time series change detection algorithm that is derived from the original BFAST (Breaks for Additive Season and Trend) algorithm (Masiliūnas et al. 2021). The focus of this new algorithm was on speed and flexibility to make it suitable for upscaling for global land cover change detection. The algorithm was tested on an eleven-year-long time series of MODIS imagery, using a global reference dataset with over 30,000 point locations of land cover change to validate the results. The global reference dataset was collected as part of the CGLS Dynamic Land Cover project.
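For intuition, the kind of break detection described above can be sketched with a minimal single-breakpoint detector: choose the split that most reduces the residual sum of squares of a piecewise-constant fit. This is an illustrative toy, explicitly not the BFAST-Lite implementation (which handles season and trend components and multiple breaks):

```python
# Minimal, illustrative single-break detector in the spirit of BFAST-style
# methods (NOT the BFAST-Lite implementation): pick the breakpoint that
# minimises the residual sum of squares of a piecewise-constant fit.

def rss(values):
    """Residual sum of squares around the segment mean."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

def detect_break(series, min_seg=3):
    """Return (index, gain) of the best single breakpoint.
    `gain` is the RSS reduction relative to the no-break fit."""
    base = rss(series)
    best = None
    for i in range(min_seg, len(series) - min_seg + 1):
        split = rss(series[:i]) + rss(series[i:])
        gain = base - split
        if best is None or gain > best[1]:
            best = (i, gain)
    return best

# A synthetic step change from level ~1 to level ~5 at index 6:
series = [1.0, 1.1, 0.9, 1.0, 1.2, 0.8, 5.0, 5.1, 4.9, 5.2, 5.0, 4.8]
idx, gain = detect_break(series)
print(idx)   # the detector locates the level shift
```

A real monitoring algorithm would additionally model seasonality, test break significance, and iterate to find multiple breaks, which is where the speed focus of BFAST-Lite matters at global scale.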
Next, we combined BFAST-Lite with the random forest algorithm to improve land cover change detection at a global scale by combining unsupervised and supervised approaches for change detection (Xu et al. 2021). We further compared the performance of three satellite sensors (PROBA-V, Landsat 8 OLI, and Sentinel-2 MSI) for global-scale change monitoring using the global reference dataset for land cover change.
In addition, we aimed to statistically estimate the area of land cover change in recent years at a global scale. To do so, we used the CGLS-LC100 yearly maps (2015-2019) and the CGLS global validation dataset (Figure 1) (Tsendbazar et al. 2021), accounting for the bias of the mapped products with the help of the reference data.
The approaches and results of these studies are presented to highlight the progress of global land cover change monitoring, as well as the challenges that need further attention for its accurate monitoring.
Index Terms— land cover change, change monitoring, generic land cover, change area estimation
Hansen, M.C., Potapov, P.V., Moore, R., Hancher, M., Turubanova, S.A., Tyukavina, A., Thau, D., Stehman, S.V., Goetz, S.J., Loveland, T.R., Kommareddy, A., Egorov, A., Chini, L., Justice, C.O., & Townshend, J.R.G. (2013). High-Resolution Global Maps of 21st-Century Forest Cover Change. Science, 342, 850-853
Masiliūnas, D., Tsendbazar, N.-E., Herold, M., & Verbesselt, J. (2021). BFAST Lite: A Lightweight Break Detection Method for Time Series Analysis. Remote Sensing, 13
Pekel, J.-F., Cottam, A., Gorelick, N., & Belward, A.S. (2016). High-resolution mapping of global surface water and its long-term changes. Nature, 540, 418-422
Tsendbazar, N., Herold, M., Li, L., Tarko, A., de Bruin, S., Masiliunas, D., Lesiv, M., Fritz, S., Buchhorn, M., Smets, B., Van De Kerchove, R., & Duerauer, M. (2021). Towards operational validation of annual global land cover maps. Remote Sensing of Environment, 266, 112686
Xu, L., Herold, M., Tsendbazar, N.-E., Masiliūnas, D., Li, L., Lesiv, M., Fritz, S., & Verbesselt, J. (2021). Time series analysis for global land cover change monitoring: a comparison across sensors. Remote Sensing of Environment, (under review)
The ESA-CCI High Resolution (HR) Land Cover (LC) project developed LC maps at HR (10 m) every five years between 1990 and 2019, together with yearly LC change (LCC), over three regions for which climate-LC interactions are known to be significant: Amazonia, Siberia and the Sahel. These maps have been used in the ORCHIDEE land surface model to map the fifteen Plant Functional Types (PFTs) that describe the land cover variability within a model grid cell. For that purpose, the fifteen HRLC classes were interpreted in terms of the ORCHIDEE PFTs, using auxiliary information such as the climate ecozones of the Köppen-Geiger classification (Kottek et al., 2006), the C3/C4 grass and crop partitioning of Still et al., 2014 for grasslands, and the Land Use Harmonization database (LUH2v2h, Hurtt et al., 2020) for crops. Yearly PFT maps were then generated over the studied regions and compared to our previous PFT maps based on the Medium Resolution LC product (ESA, 2017; Lamarche et al., 2017), provided at 300 m resolution on a yearly basis between 1992 and 2020. The results show significant differences related to the partition of evergreen/deciduous tree and shrub species and to the fractions of grasses, crops and bare soil.
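The cross-walk step described above can be sketched as a weighted translation of land-cover class fractions into PFT fractions, with a climate-dependent split of the generic grass fraction. The class names, PFT names and split factor below are illustrative placeholders, not the actual ORCHIDEE tables:

```python
# Hypothetical cross-walk sketch: LC class fractions in one grid cell are
# converted to PFT fractions; the generic grass fraction is then split into
# C3/C4 using a climate-derived share (e.g. following Still et al.-type data).
# All names and weights are illustrative, not the real ORCHIDEE mapping.

CROSSWALK = {
    # LC class -> list of (PFT, weight) pairs; weights sum to 1 per class
    "evergreen_forest": [("tropical_broadleaf_evergreen", 1.0)],
    "deciduous_forest": [("temperate_broadleaf_deciduous", 1.0)],
    "grassland":        [("grass", 1.0)],   # split into C3/C4 afterwards
}

def lc_to_pft(lc_fractions, c4_share):
    """Convert LC class fractions to PFT fractions for one grid cell."""
    pft = {}
    for lc, frac in lc_fractions.items():
        for name, w in CROSSWALK[lc]:
            pft[name] = pft.get(name, 0.0) + frac * w
    # climate-dependent partition of the generic grass fraction
    grass = pft.pop("grass", 0.0)
    pft["C4_grass"] = grass * c4_share
    pft["C3_grass"] = grass * (1.0 - c4_share)
    return pft

cell = {"evergreen_forest": 0.5, "grassland": 0.4, "deciduous_forest": 0.1}
pft = lc_to_pft(cell, c4_share=0.75)
print(pft)
```

The key property preserved by such a cross-walk is that the PFT fractions still sum to the vegetated fraction of the cell.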
In addition, the HR information allowed us to revise the albedo parameterization in ORCHIDEE. Some deficiencies had been identified related to the soil background values, which were not correctly optimized for pixels densely vegetated all year long: in such cases, the satellite observations used for the calibration are not influenced by the underlying soil and the optimization of the soil albedo fails. Moreover, the model is not able to reproduce the albedo changes linked to land cover changes such as deforestation events or grass/crop transitions. Therefore, a new calibration methodology has been developed to improve the albedo parameter calibration, better constrain the parameter space and allow the regionalization of the parameters (Bastrikov et al., in preparation). The improvements brought by these changes will be presented.
Thanks to this new parameterization, land cover changes and their impacts on climate, as well as climate change impacts on vegetation, have been studied through a set of forced and coupled model simulations. Using the zooming and wind-nudging capabilities of the IPSL atmospheric model (LMDZ), high-resolution global simulations over our three studied regions have been performed. Three configurations of the LMDZ model were developed: a zoom factor of 5 was chosen to increase the model grid resolution by a factor of 5 in the center of the studied regions, and wind fields from the ERA5 atmospheric reanalysis were used to nudge the atmospheric dynamics towards the observations (Cheruy et al., 2013). In this configuration, the grid spacing inside the zoom is reduced to a few tens of kilometers and short-term simulations are sufficient to study the surface-atmosphere feedback.
Various simulations were performed over each region to study the impacts of LCC on the atmosphere over the period 1990-2015. Different scenarios were studied: static LC maps for the years 1990 and 2019, and yearly updated ones. Different configurations of ORCHIDEE, coupled with LMDZ and in standalone mode (forced by atmospheric reanalysis), were also run to assess the atmospheric feedback. Comparing them highlighted the impacts of LCC on atmospheric temperatures and precipitation, and the role of the atmosphere in the magnitude of these impacts. For example, preliminary results show that land cover changes may have different impacts in coupled compared to forced mode. An albedo decrease linked to afforestation, for example, results in larger sensible and latent heat fluxes and lower soil moisture in forced mode, whereas in coupled mode the increased latent heat fluxes may translate into more precipitation, larger soil moisture and LAI, leading to even lower albedo values and larger surface and air temperature changes compared to the forced simulations. Other interesting features are under analysis and will be presented at the symposium. The benefits and drawbacks of the HRLC product compared to the medium resolution one will finally be discussed.
References:
Bastrikov, V., San Martin, R., Ottlé, C., & Peylin, P. Calibration of albedo parametrisations in ORCHIDEE based on various satellite products, in preparation for Geosci. Model Dev.
Cheruy, F., Dupont, J. C., Campoy, A., Ducharne, A., Hourdin, F., Haeffelin, M., & Chiriaco, M. (2013). Combined influence of atmospheric physics and soil hydrology on the realism of the LMDz model compared to SIRTA measurements. Clim. Dynam, 40, 2251-2269.
Hurtt, G. C., Chini, L., Sahajpal, R., Frolking, S., Bodirsky, B. L., Calvin, K., ... & Zhang, X. (2020). Harmonization of global land use change and management for the period 850–2100 (LUH2) for CMIP6. Geoscientific Model Development, 13(11), 5425-5464.
Kottek, M., Grieser, J., Beck, C., Rudolf, B., & Rubel, F. (2006). World map of the Köppen-Geiger climate classification updated. Meteorologische Zeitschrift, 15(3), 259-263.
ESA (2017). Land Cover CCI Product User Guide Version 2 (Medium Resolution Land Cover product). Technical Report.
Lamarche, C., Santoro, M., Bontemps, S., d’Andrimont, R., Radoux, J., Giustarini, L., Brockmann, C., Wevers, J., Defourny, P. and Arino, O., 2017. Compilation and validation of SAR and optical data products for a complete and global map of inland/ocean water tailored to the climate modeling community. Remote Sensing, 9(1), p.36.
Still, C. J., Pau, S., & Edwards, E. J. (2014). Land surface skin temperature captures thermal environments of C3 and C4 grasses. Global ecology and biogeography, 23(3), 286-296.
List of the HRLC working group members: L. Bruzzone (UniTN), F. Bovolo (FBK), M. Zanetti (FBK), C. Domingo (CREAF), L. Pesquer (CREAF), K. Meshkini (FBK), C. Lamarche (UCLouvain), P. Defourny (UCLouvain), P. Gamba (UniPV), L. Agrimano (Planetek), A. Amodio (Planetek), M. A. Brovelli (PoliMI), G. Bratic (PoliMI), M. Corsi (eGeos), G. Moser (UniGE), C. Ottlé (LSCE), P. Peylin (LSCE), R. San Martin (LSCE), V. Bastrikov (LSCE), P. Pistillo (EGeos), I. Podsiadlo (UniTN), G. Perantoni (UniTN), M. Riffler (GeoVille), F. Ronci (eGeos), D. Kolitzus (GeoVille), Th. Castin (UCLouvain), L. Maggiolo (UniGE), David Solarna (UniGE).
Namibia is a semi-arid country with highly variable and unpredictable rainfall. Extreme weather patterns such as floods or extensive droughts have increased in the past years, with strong impacts on surface and ground water availability, rangeland and agricultural productivity, food security, and further land degradation such as bush encroachment or soil erosion. These conditions especially impact livelihoods in the northern communal areas, as most people live in a subsistence economy that is closely connected to hydrological conditions. The past 10 years were characterized by a multi-year drought lasting from 2013 to 2016 and an extreme drought event during the rainy season of 2018/2019, which was the driest in 90 years. In contrast, January 2021 saw rainfall totals double to triple the norm. The poster presents a comparative analysis of five selected agricultural drought indices (VCI - Vegetation Condition Index; VHI - Vegetation Health Index; TCI - Temperature Condition Index; TVDI - Temperature Vegetation Dryness Index; and DSI - Drought Severity Index) to identify, visualize, monitor and better understand the nature, characteristics and spatio-temporal patterns of drought in northern Namibia. The indices are based on freely available MODIS (Moderate Resolution Imaging Spectroradiometer) satellite imagery and value-added data products, allowing calculation, time series analysis and cross-comparison based on their sensitivity towards vegetation greenness, land surface temperature and evapotranspiration. The indices are complemented by and compared to climate reanalysis data (Copernicus ERA5) for visualisation and analysis of rainfall patterns. The presented time series analysis covers a span of 20 years (2001 to 2021), visualizing drought indices for the past 10 years following a seasonal approach.
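Three of the indices named above have standard per-pixel formulations that can be sketched directly. The input values below are synthetic placeholders, not the study's MODIS data, and the equal VCI/TCI weighting in the VHI is the common default, not necessarily the weighting used in the poster:

```python
# Standard (Kogan-style) formulations of VCI, TCI and VHI for one pixel.
# NDVI/LST min and max are the historical extremes for that pixel and period;
# all numbers here are illustrative.

def vci(ndvi, ndvi_min, ndvi_max):
    """Vegetation Condition Index: 0 (driest) .. 100 (wettest)."""
    return 100.0 * (ndvi - ndvi_min) / (ndvi_max - ndvi_min)

def tci(lst, lst_min, lst_max):
    """Temperature Condition Index: high land surface temperature lowers it."""
    return 100.0 * (lst_max - lst) / (lst_max - lst_min)

def vhi(vci_val, tci_val, alpha=0.5):
    """Vegetation Health Index: weighted blend of VCI and TCI."""
    return alpha * vci_val + (1.0 - alpha) * tci_val

# Example: below-average greenness and above-average LST (LST in kelvin)
v = vci(0.35, ndvi_min=0.2, ndvi_max=0.6)     # ~37.5
t = tci(312.0, lst_min=295.0, lst_max=320.0)  # 32.0
h = vhi(v, t)
print(h)
```

Because VCI and TCI are both normalised to 0-100 against the pixel's own history, the indices are comparable across the strong East-West rainfall gradient of the study area.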
The study provides a better understanding of spatial drought patterns through the identification of drought-prone areas in northern and central Namibia and bordering countries. Results show that droughts happen every year, with vegetation-based indices showing similar spatial patterns but different levels of classified drought intensity. A longitudinal increase of index values from East to West and a latitudinal increase from North to South, following the rainfall gradient, can be observed; these patterns correlate strongly with the precipitation reanalysis data. Combined indices based on evapotranspiration and land surface temperature show higher temporal and spatial fluctuations of drought intensity. It is concluded that a comparative analysis of multiple indices yields a better interpretation of drought than systems focusing on single parameters, and that combined drought indices are probably more suitable for arid and semi-arid areas than indices purely based on vegetation health. Future research should additionally incorporate biophysical properties such as soil characteristics, soil moisture and hydrology, flanked by socio-economic investigations, to establish an integrated drought index for northern Namibia.
Keywords: Remote Sensing, MODIS, Drought Indices, Time Series Analysis, Climate Reanalysis, Namibia
Machine Learning (ML) and Deep Learning (DL) are now widely used in Earth Observation, especially for land cover classification. Different studies have obtained slightly higher overall accuracies with DL than with ML. However, the full potential of machine learning, including the variety of algorithms and their calibration parameters, has not been fully explored. Therefore, this research provides an in-depth discussion of the accuracies of ML and DL algorithms in land cover classification. Radar data from Sentinel-1 images, with HH, HV, VV and VH polarizations, were used as input variables. The ML algorithms competed with each other through a Monte Carlo Cross-Validation (MCCV) calibration, and the best algorithm found in the calibration (i.e., the one with the highest overall accuracy) was then put in competition with the DL algorithm. The discussion of this comparison focuses on the overall accuracies found, as well as the execution times obtained for both approaches over the extent of the study area. The study area is located in northern Catalonia, Spain, and classes such as crops, wetlands, urban areas, dense forest and scrubland were mapped in order to cover a variety of classes and spectral responses. In addition, ground truth data from COPERNICUS and high-resolution images were used for validation of the obtained maps. This article is accompanied by a Python package (not yet publicly available) that implements several tools, such as machine learning algorithm calibration through MCCV, Leave-One-Out Cross-Validation and Cross-Validation, DL classifications, time series change detection, atmospheric correction, deforestation detection, and land degradation mapping, among other algorithms embedded in the package.
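Monte Carlo Cross-Validation simply means repeated random train/test splits, averaging the test accuracy across repetitions. The sketch below scores a toy nearest-centroid classifier this way; the one-dimensional data and the classifier are stand-ins for the Sentinel-1 features and ML algorithms of the study, not the package's implementation:

```python
# Minimal Monte Carlo Cross-Validation: average test accuracy over many
# random train/test splits. Data and classifier are illustrative stand-ins.
import random

def nearest_centroid_fit(X, y):
    """Return the per-class feature mean (a 1-D nearest-centroid model)."""
    cent = {}
    for xi, yi in zip(X, y):
        cent.setdefault(yi, []).append(xi)
    return {c: sum(v) / len(v) for c, v in cent.items()}

def nearest_centroid_predict(model, x):
    return min(model, key=lambda c: abs(x - model[c]))

def mccv(X, y, n_splits=20, test_frac=0.3, seed=0):
    """Average test accuracy over repeated random splits."""
    rng = random.Random(seed)
    n_test = max(1, int(len(X) * test_frac))
    accs = []
    for _ in range(n_splits):
        idx = list(range(len(X)))
        rng.shuffle(idx)
        test, train = idx[:n_test], idx[n_test:]
        model = nearest_centroid_fit([X[i] for i in train], [y[i] for i in train])
        hits = sum(nearest_centroid_predict(model, X[i]) == y[i] for i in test)
        accs.append(hits / n_test)
    return sum(accs) / len(accs)

# Two well-separated classes -> near-perfect MCCV accuracy
X = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95, 0.05, 0.85]
y = ["water", "water", "water", "urban", "urban", "urban", "water", "urban"]
acc = mccv(X, y)
print(acc)
```

Calibration then amounts to running this loop for each candidate algorithm/parameter combination and keeping the one with the highest average accuracy.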
Finally, some remarks are given on the pros and cons of using ML and DL in Earth Observation for land cover classification, as well as the benefits of using radar imagery instead of optical imagery to map land cover.
Operational yield forecasting services are often based on regression models between official yields and agro-environmental variables computed at the time of the forecast (Fritz et al., 2019). The relationship usually relies on historical series of statistical yields and one or more regressors, selected among meteorological data, crop simulation model outputs or satellite-derived indicators.
The fit between estimators and crop yields is highly variable across agronomical seasons, and the reliability with which these variables predict yields depends, among many other factors, on their aggregation in the space and time domains, for example on the quality and representativeness of the agricultural land cover masks used.
In particular, the contribution of satellite-based indicators lies in their sensitivity to the combined agro-climatic, genetic, environmental and management effects on crop biomass activity. Nevertheless, remote sensing indicators show limits in their application due to land cover map availability (pixel selection - Liu et al., 2019) or to the bias introduced when mixed pixels are considered (low-resolution bias - Boschetti et al., 2004).
Recent studies (Weissteiner et al., 2019) proposed a semi-automatic approach to identify crop group-specific pure pixels (i.e., winter and spring crops, and summer crops) at the European scale, based on a regional Gaussian Mixture Model (GMM) applied to MODIS-NDVI time series. Such input could improve the predictability of crop monitoring and yield forecasting applications, as it introduces a new and more detailed information layer on agricultural land cover.
This work focuses on the contribution of MODIS time series to regional yield estimation in Europe. We compared the linkage between yield and remote sensing indicators when either generic arable land masks or crop group-specific information is applied to aggregate satellite data at the regional level. We regressed regional crop yields against smoothed daily NDVI temporal profiles, with the aim of addressing the following research questions:
(1) Is there any added value in the exploitation of yearly crop group-specific land cover for the estimation of crop yields?
(2) Does the benefit hold all over Europe?
(3) Is the benefit equal for both identified crop groups?
Our study area includes all the European Union (EU) member states except Finland and Malta (due to their low share of arable land). For each EU country we selected the five most representative regions (Nomenclature des Unités territoriales statistiques - NUTS) in terms of arable land area according to the Corine Land Cover (CLC) agricultural classes, leading to a total of 97 NUTS2 regions representing 72% of the EU arable land. For each considered region, crop yield statistics at NUTS2 level were collected from official databases for the 2003-2019 period and used as reference data for computing regressions. Yield time series refer only to the prevalent crops inside each region: the main agricultural crops were first divided into two crop groups, namely Winter and Spring Crops (WSpCs) and Summer Crops (SCs), and then the most representative crop of each group was chosen according to the average extent of the cultivated area in each selected region. Yield statistics were checked for the presence of trends by means of a Mann-Kendall test (Mann, 1945; Kendall, 1975).
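The Mann-Kendall screening step can be written down compactly. The sketch below implements the basic test statistic and its normal approximation (without the tie correction, which continuous yield data rarely needs); the example series is illustrative, not an actual NUTS2 record:

```python
# Plain Mann-Kendall trend test: S counts concordant minus discordant pairs,
# Z is its normal approximation (no tie correction). |Z| > 1.96 indicates a
# significant monotonic trend at the 5% level.
import math

def mann_kendall(x):
    """Return (S, Z) for series x."""
    n = len(x)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = x[j] - x[i]
            s += (diff > 0) - (diff < 0)
    var = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var)
    elif s < 0:
        z = (s + 1) / math.sqrt(var)
    else:
        z = 0.0
    return s, z

# Steadily increasing yields (t/ha) -> strongly positive S and Z
yields = [3.1, 3.3, 3.2, 3.6, 3.8, 3.7, 4.0, 4.2]
s, z = mann_kendall(yields)
print(s, z)
```

Series flagged as significantly trending would typically be detrended (or the trend accounted for) before regressing yields against NDVI, so the regression captures inter-annual weather signals rather than technology-driven yield growth.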
A collection of representative NDVI temporal profiles was derived for every selected region and year using the MODIS MOD09GQ.006 daily product at 250-m spatial resolution. Single-pixel NDVI time series were modeled by interpolating cloud-free observations with a 5th-degree polynomial fit, while regional reference profiles were retrieved by averaging single-pixel time series according to the information of five different land cover masks (Weissteiner et al., 2019):
i. ArLand: generic arable land mask, derived from CLC. It provides a static and generic land cover information without considering for annual variations or crop groups.
ii. Hist_Sc: historical crop group-specific mask for the SC group, indicating a high historical probability of SC cultivation for a given pixel. It provides static, crop group-specific land cover information.
iii. Hist_WSpC: the equivalent of the Hist_Sc mask, but for the WSpC group. It also provides static, crop group-specific land cover information.
iv. SC_year: yearly crop group-specific mask for the SC group, representing the pure pixels of the SC group detected in a specific year. It provides dynamic, crop group-specific land cover information.
v. WSpC_year: the equivalent of the SC_year mask, but for the WSpC group. It also provides dynamic, crop group-specific land cover information.
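The profile-building step (5th-degree polynomial fit per pixel, then mask-based averaging) can be sketched as follows. The NDVI observations below are synthetic, not actual MOD09GQ data, and the fit is done on centred/scaled day-of-year values to keep it numerically stable:

```python
# Sketch of the profile-building step: fit a 5th-degree polynomial to a
# pixel's cloud-free NDVI samples, evaluate it daily over DOY 60-270, then
# average the fitted pixel profiles selected by a land cover mask to obtain
# the regional reference profile. Observations are synthetic.
import numpy as np

def fit_pixel_profile(doys, ndvi, out_doys, degree=5):
    """Interpolate cloud-free NDVI samples with a polynomial fit."""
    # centre/scale DOY to keep the Vandermonde matrix well conditioned
    mu, sd = doys.mean(), doys.std()
    coeffs = np.polyfit((doys - mu) / sd, ndvi, degree)
    return np.polyval(coeffs, (out_doys - mu) / sd)

def regional_profile(pixel_profiles):
    """Average the fitted per-pixel profiles inside the mask."""
    return np.mean(pixel_profiles, axis=0)

out = np.arange(60, 271)   # DOY 60..270, daily
doys = np.array([65, 90, 120, 150, 180, 210, 240, 265])
p1 = fit_pixel_profile(doys, np.array([0.20, 0.30, 0.50, 0.70, 0.60, 0.50, 0.30, 0.20]), out)
p2 = fit_pixel_profile(doys, np.array([0.25, 0.35, 0.55, 0.65, 0.60, 0.45, 0.30, 0.25]), out)
ref = regional_profile([p1, p2])
print(ref.shape)
```

Swapping the set of pixels fed into `regional_profile` is exactly where the five masks above differ: ArLand averages over all arable pixels, while the crop group-specific masks restrict the average to pure pixels of one group.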
A comparative correlation analysis between yield data and the NDVI regional temporal profiles extracted with the different land cover masks was performed, assuming a linear regression model. Regressions were computed at 10-day time steps, from Day Of the Year (DOY) 60 to DOY 270. The R2 coefficient of determination was calculated to assess the strength of each relationship, together with the respective p-value to estimate its significance. The Root Mean Squared Error (RMSE) and the Mean Absolute Scaled Error (MASE) were computed to assess the model prediction errors.
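In miniature, each of those regressions pairs the regional NDVI value at one time stamp with the yield of each year and scores the linear fit. The data below are synthetic, and only R2 and RMSE are shown:

```python
# The correlation step in miniature: regress regional yields on the NDVI
# value at one 10-day time stamp and report R2 and RMSE. Data are synthetic.
import math

def linreg(x, y):
    """Ordinary least squares slope/intercept for one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

def r2_rmse(x, y):
    """Coefficient of determination and root mean squared error of the fit."""
    slope, intercept = linreg(x, y)
    my = sum(y) / len(y)
    pred = [slope * a + intercept for a in x]
    ss_res = sum((p - b) ** 2 for p, b in zip(pred, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1 - ss_res / ss_tot, math.sqrt(ss_res / len(y))

ndvi   = [0.45, 0.52, 0.48, 0.60, 0.55, 0.41]  # regional NDVI at one DOY
yields = [3.2, 3.9, 3.5, 4.6, 4.1, 3.0]        # t/ha, one year per element
r2, rmse = r2_rmse(ndvi, yields)
print(round(r2, 3), round(rmse, 3))
```

Repeating this for every 10-day time stamp and every mask yields the R2 trajectories from which both prediction accuracy (peak R2) and timeliness (how early R2 becomes high) are read.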
Results were discussed in view of their applicability to regional-scale monitoring systems, with particular attention to the effects of crop group-specific land cover on the accuracy of yield estimation. A general improvement in correlation was found when using yearly crop group-specific masks with respect to generic and static ones. Improvements concerned both the accuracy and the timeliness of predictions and were more evident at the regional than at the EU scale. The added value of the crop group-specific land cover layers extended to the two most cultivated European crops (i.e., grain maize and soft wheat). In particular, the SC group showed a better performance in terms of prediction accuracy (higher R2 values), while for the WSpC group the advantages were more pronounced in terms of prediction timeliness (high R2 values earlier in time).
Bibliography:
Fritz, S., See, L., Bayas, J. C. L., Waldner, F., Jacques, D., Becker-Reshef, I., ... & Rembold, F. (2019). A comparison of global agricultural monitoring systems and current gaps. Agricultural systems, 168, 258-272.
Boschetti, L., Flasse, S. P., & Brivio, P. A. (2004). Analysis of the conflict between omission and commission in low spatial resolution dichotomic thematic products: The Pareto Boundary. Remote sensing of environment, 91(3-4), 280-292.
Kendall, M. G. (1975) - Rank Correlation Methods, 4th ed. Charles Griffin, London.
Liu, J., Shang, J., Qian, B., Huffman, T., Zhang, Y., Dong, T., ... & Martin, T. (2019). Crop Yield Estimation Using Time-Series MODIS Data and the Effects of Cropland Masks in Ontario, Canada. Remote Sensing, 11(20), 2419.
Mann, H. B. (1945) - Nonparametric tests against trend. Econometrica: Journal of the Econometric Society, 245-259.
Weissteiner, C. J., López-Lozano, R., Manfron, G., Duveiller, G., Hooker, J., van der Velde, M., & Baruth, B. (2019). A Crop Group-Specific Pure Pixel Time Series for Europe. Remote Sensing, 11(22), 2668.
Human-induced land degradation has become a chief driver of poor ecological functioning and reduced productivity. The process of land degradation needs to be understood at various scales in order to protect ecosystem services and the communities directly dependent on them. This is especially true for sub-Saharan Africa, where socio-economic and political factors exacerbate ecological degradation. This study aims to identify land change dynamics in the Copperbelt province of Zambia and unveil their proximate causes and underlying drivers. Copperbelt is a densely forested province (falling in the central miombo ecoregion) with diverse ongoing and imminent land change processes such as shifting cultivation, charcoal production, logging, industrialization, mining, and the extension of both urban and rural settlement. Specific to sub-Saharan Africa, many of these processes are superimposed by human-driven fire dynamics such as end-of-dry-season fires. In our study, monthly time series of MODIS (MODerate resolution Imaging Spectroradiometer) derived enhanced vegetation index (EVI) values were used to derive three relevant parameters: the harmonic series, the annual peaking magnitude and the annual mean growing season. We used a semi-automated scheme to map land changes, in which the trend calculation was a statistical output obtained through additive decomposition and linear regression. A spatial filter was used to select only those pixels with at least two significant trend patterns, to enhance the robustness of the approach and only consider regions with tangible change dynamics for further analysis. Trend maps were further integrated in a knowledge-driven approach in which additional data sources (socio-economic, land cover, tree cover, bi-temporal Landsat vegetation indices, high-resolution Bing and Google imagery) were incorporated to provide spatial context and map the prevalent land change dynamics.
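The core of the trend step (additive decomposition followed by linear regression) can be sketched on a monthly series: subtract the mean seasonal cycle, then fit an OLS slope to the residual. The EVI series below is synthetic (a sinusoidal season plus a slow increase), not the study's MODIS data:

```python
# Sketch of the trend calculation: additive deseasonalisation (subtract each
# month-of-year's mean) followed by an OLS linear trend on the residual.
# The monthly EVI series is synthetic: seasonal cycle + 0.001/month increase.
import math

def deseasonalise(series, period=12):
    """Subtract the mean value of each month-of-year (additive model)."""
    clim = [0.0] * period
    counts = [0] * period
    for i, v in enumerate(series):
        clim[i % period] += v
        counts[i % period] += 1
    clim = [c / n for c, n in zip(clim, counts)]
    return [v - clim[i % period] for i, v in enumerate(series)]

def linear_slope(series):
    """OLS slope of value against time index."""
    n = len(series)
    mt, mv = (n - 1) / 2.0, sum(series) / n
    num = sum((t - mt) * (v - mv) for t, v in enumerate(series))
    den = sum((t - mt) ** 2 for t in range(n))
    return num / den

# 5 years of monthly EVI: sinusoidal season plus an upward trend
series = [0.3 + 0.2 * math.sin(2 * math.pi * m / 12) + 0.001 * m
          for m in range(60)]
slope = linear_slope(deseasonalise(series))
print(slope)   # close to the injected 0.001/month trend
```

A per-pixel significance test on such slopes, combined across the three derived parameters, is what the spatial filter then thresholds ("at least two significant trend patterns").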
Our observations were as follows: (a) trends of annually aggregated series were statistically more significant than those of monthly series; (b) weak trends were more dominant than strong trends, with weakly positive being the most prominent; (c) there was a clear spatial differentiation: 15% of the study area, dominant in the East, showed positive trends, while 3%, dominant in the West, showed negative trends; (d) natural regeneration in mosaic landscapes was chiefly responsible for positive trends; (e) restorative plantations contributed to the recovery of degraded cultivated areas; (f) mixed trends over forest reserves reflected timber and fuelwood harvest; and (g) degradation over intact woodland and cultivated areas contributed to negative trends. In addition, lowered productivity within semi-permanent agriculture and a shift of new encroachment into woodlands from the East to the West of Copperbelt were observed. Although prominent in isolated spots, pivot agriculture was not a main driver of land changes at large. Concluding, greening trends prevailed over the study site; however, the risk of intact woodlands being affected by various disturbances remains high.
Land cover maps are being produced at an increasing rate. In particular, high-resolution land cover is taking over from medium- and low-resolution products. The increase in production is driven on one side by the need for land cover information, and on the other by the favorable state of the associated technologies (i.e., multiple high-resolution satellite missions, short revisit times, increased computational capabilities, etc.). Even though land cover production has increased, some open issues still need to be addressed in order to facilitate it. One of these issues is the collection of reference data for training and validation, which is especially challenging in the case of global high-resolution land cover. In most situations, such data are collected by photo-interpretation of Very High-Resolution (VHR) satellite imagery; rarely, the source of the reference data is in-situ collection.
The objective of our work is to demonstrate how information from existing land cover datasets can be used as training data to produce new land cover datasets, and how accurate the outcomes are. The idea behind the work is that every land cover map aims at representing the material on the Earth's surface as accurately as possible. When existing datasets are compared among themselves, the area in which they all show consistent information is the area with the highest probability of being correct. Correctly classified pixels have a large probability of appearing in the same location in different datasets, given that correct classification is a target of the classification procedure and rarely the result of a random guess. By contrast, the errors in a land cover map may be a function of imagery type, preprocessing, training data, classification algorithm, etc. Since different land cover maps are produced with different procedures and input data, it is reasonable to assume that errors in the different datasets are random, i.e., not correlated. Therefore, if we intersect multiple land cover maps, the areas in which they share information can be used to extract training samples for deriving a new land cover map.
Our workflow comprises data preparation, dataset intersection, stratified random extraction of training samples, classification, and validation (Figure 1 in Illustrations file). The area of interest covers 38292 km2, distributed among 50 squares of about 766 km2 each, located in Central and Eastern Africa. It was selected based on the availability of validation samples, which are needed for the final phase, the accuracy assessment.
In this area, we collected the following existing high-resolution land cover (HRLC) datasets:
• S2 prototype land cover 20m map of Africa 2016 (CCI Africa Prototype) at 20m resolution for the year 2016 with classes: Tree cover areas, Shrubs cover areas, Grassland, Cropland, Vegetation aquatic or regularly flooded, Lichens Mosses / Sparse vegetation, Bare areas, Built-up areas, Snow and/or Ice, and Open Water
• Forest / Non-Forest (FNF) at 30m resolution for the year 2017 with classes: Forest, Water, and Non-forest
• Finer Resolution Observation and Monitoring of Global Land Cover (FROM-GLC) at 10m resolution for the year 2017 with classes: Cropland, Forest, Grassland, Shrubland, Wetland, Water, Tundra, Built-up, Bareland, and Permanent ice and snow
• Global Human Settlement Built-Up Grid – Sentinel-1 (GHS BU S1NODSM) at 20m resolution for the year 2016 with classes: Built-up and Non-built-up
• GlobeLand30 (GL30) at 30m resolution for the year 2017 with classes: Cropland, Forest, Grassland, Shrubland, Wetland, Water, Tundra, Built-up, Bareland, and Permanent ice and snow
• Global Surface Water (GSW) at 30m resolution for the year 2019 with classes: Permanent water, Seasonal water, and Not-water
All the datasets have high resolution (30m or better), similar to the resolution of the satellite imagery used in the classification. The datasets are made by different producers and refer to different years, resolutions, classes, etc. To extract information that is coherent across all maps, it was necessary to harmonize them in terms of coordinate reference system, legend, and resolution. The coordinate reference system selected was WGS84 (EPSG:4326), and all datasets were reprojected and resampled to 10m resolution. The legends (pixel values and labels) of the existing classes were adapted to correspond to 5 - Shrubland, 7 - Grassland, 8 - Cropland, 9 - Wetland, 12 - Bareland, 13 - Built-up, 15 - Water, 17 - Permanent ice and snow, 20 - Forest. Then, all data were intersected and only the areas in which they show coherent information were extracted. We named the map obtained in this way the map of agreement; it accounts for 20% of the area of interest. Most of the classes available in the region were also available in the map of agreement, i.e., Forest, Grassland, Cropland, Water, Built-up, and Shrubland. However, only a few samples of the Bareland and Wetland classes were available (22 and 6 pixels, respectively), and samples of Permanent ice and snow were not present at all, since the only dataset containing this class in the area of interest is GlobeLand30.
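The intersection step can be sketched as follows; a minimal NumPy illustration with toy rasters that are already harmonized to a common grid and legend (the array values are hypothetical, the class codes follow the harmonized legend above):

```python
import numpy as np

def agreement_map(maps, nodata=0):
    """Keep the class code only where all input maps (already
    harmonized to one legend and grid) show the same value."""
    stacked = np.stack(maps)                       # (n_maps, rows, cols)
    consistent = np.all(stacked == stacked[0], axis=0)
    return np.where(consistent, stacked[0], nodata)

# Toy 2x3 rasters using the harmonized codes (20 = Forest, 15 = Water, ...)
a = np.array([[20, 15, 8], [7, 20, 15]])
b = np.array([[20, 15, 8], [5, 20, 15]])
c = np.array([[20, 15, 7], [7, 20, 15]])
agree = agreement_map([a, b, c])
# Pixels where the three maps disagree are set to the nodata value 0
```

In the real workflow the same logic runs over reprojected 10m rasters (e.g. in GRASS GIS), but the per-pixel rule is the same.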
The training set was created by extracting around 8000 samples per class from the map of agreement, except for classes Bareland and Wetland for which the number of pixels available was lower than this threshold, so for these classes all the available samples were taken into account.
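The capped per-class extraction can be illustrated as follows; a hedged NumPy sketch in which the toy map, class codes and cap stand in for the real map of agreement and the 8000-sample threshold:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_class(agreement, class_code, n=8000):
    """Randomly pick up to n pixel coordinates of one class from the
    agreement map; classes with fewer pixels keep all their pixels."""
    rows, cols = np.nonzero(agreement == class_code)
    idx = np.arange(rows.size)
    if rows.size > n:
        idx = rng.choice(idx, size=n, replace=False)
    return list(zip(rows[idx], cols[idx]))

# Toy map: class 20 (Forest) has 6 pixels, capped at 4; class 9 (Wetland)
# has only 2 pixels, so both are kept, mirroring the Bareland/Wetland case
m = np.array([[20, 20, 20, 9], [20, 20, 20, 9]])
forest = sample_class(m, 20, n=4)
wetland = sample_class(m, 9, n=4)
```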
Data preparation and extraction of the training samples were done using GRASS GIS and Python on CINECA High-Performance Computing (HPC).
The extracted samples were used for the classification of Sentinel-2 and Planet's NICFI (Norway's International Climate and Forest Initiative) Basemap imagery for 2017 with a Random Forest classification algorithm. For this purpose, Google Earth Engine (GEE) was used because both image collections are already available in the Earth Engine Data Catalog; however, to access Planet's NICFI data it was necessary to sign up and accept the terms of use. In the case of Sentinel-2, two tests were made: one using only the Red, Green, Blue and NIR bands (hereafter called test Sentinel-2 4B), and one using all bands at 10 m and 20 m resolution (hereafter called test Sentinel-2 allB). In the case of Planet's NICFI data, only the Red, Green, Blue and NIR bands were available and used (hereafter called test Planet 4B).
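The classification itself was run in GEE; purely as an illustration, the same Random Forest idea in the "4B" setup can be sketched locally with scikit-learn on synthetic 4-band samples (all reflectance values and the two-class setup here are made up, not the study's data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the 4B setup: one row per training pixel with
# Red, Green, Blue, NIR reflectances; labels use the harmonized codes
n = 200
X_water  = rng.normal([0.03, 0.05, 0.06, 0.02], 0.01, size=(n, 4))
X_forest = rng.normal([0.04, 0.08, 0.04, 0.35], 0.02, size=(n, 4))
X = np.vstack([X_water, X_forest])
y = np.array([15] * n + [20] * n)       # 15 = Water, 20 = Forest

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
pred = clf.predict([[0.03, 0.05, 0.06, 0.02],   # water-like spectrum
                    [0.04, 0.08, 0.04, 0.35]])  # forest-like spectrum
```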
The validation dataset was created by photo-interpreting VHR imagery at 2400 sample locations, using the Open Foris Collect Earth and Google Earth tools. 1300 samples were extracted in the area where the map of agreement has valid values, and 1100 in other areas within the area of interest. For some of the points, the photo-interpreter was not completely confident about the assigned label, and such samples were discarded during validation. In the end, the validation was carried out on 1683 high-confidence samples, distributed as 1050 in the map of agreement and 633 in other areas within the area of interest.
The three classification tests yielded an Overall Accuracy (OA) of 70% for the Planet 4B test, 67% for the Sentinel-2 4B test, and 74% for the Sentinel-2 allB test. The error matrix and associated accuracy indexes, User's Accuracy (UA) and Producer's Accuracy (PA), are included in the Illustrations file as Table 1. The classes with the highest accuracy are Water, Built-up, and Forest, while for Grassland, Shrubland, and Cropland the accuracy is moderate. For Bareland and Permanent ice and snow the accuracy is 0, but it is based on a very small number of samples and is therefore not sufficiently representative.
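The reported OA, UA and PA follow directly from an error matrix; a small sketch of the standard formulas on a toy 2-class matrix (the numbers are illustrative, not those of Table 1):

```python
import numpy as np

def accuracies(cm):
    """Overall, user's and producer's accuracy from an error matrix
    with rows = map (predicted) classes, columns = reference classes."""
    cm = np.asarray(cm, dtype=float)
    oa = np.trace(cm) / cm.sum()        # correctly classified / total
    ua = np.diag(cm) / cm.sum(axis=1)   # per map class (commission side)
    pa = np.diag(cm) / cm.sum(axis=0)   # per reference class (omission side)
    return oa, ua, pa

# Toy 2-class error matrix
cm = [[40, 10],
      [10, 40]]
oa, ua, pa = accuracies(cm)
```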
The use case presented here demonstrates how existing data can be reused to obtain new land cover data. The accuracy of 74% is satisfactory given that the time invested in training data extraction is significantly reduced with respect to typical procedures (i.e., photo-interpretation) for the same number of samples. Furthermore, global land cover changes amount to only a few percent per year, so it is safe to use existing land cover several years older than the baseline year of the land cover to be produced. One limitation of the approach is that some classes that are effectively present on the ground are absent from the map of agreement, and therefore also from the training dataset. This is typical for small classes such as Permanent ice and snow, or others depending on the area. However, even if not all classes can be taken into account, the approach significantly reduces the effort needed for collecting training data.
In our next steps, we are going to investigate different sources of information for the classes that are currently missing in the training dataset of the area of interest (i.e., Bareland, Wetland, and Permanent ice and snow).
Accessibility to raw materials, cheap labour and lenient labour laws make rural areas attractive to many industries in West Africa. The set-up of small-scale solid mineral industries is popular in rural West Africa; these industries are labour intensive and require small to large areas of land. This is just one example of the industrialization taking place in rural areas. Nigeria is well known for its vast oil reserves, which create many employment opportunities, especially for low-skilled workers, since many of the reserves are in rural areas. The south-western region of Ghana has a wealth of gold, which has caused small-scale industries to spring up and, in combination with proximity to mineral resources, has led to rural industrialization and an influx of people. This influx is visible as an increase in population, and it brings pressure on land and water resources from agricultural activities, which affects the livelihood of migrants. This study seeks to identify migration to rural industrial areas in Ghana and Nigeria using remote sensing proxies. The method will use several remote sensing products, such as Landsat, Copernicus datasets, the Hansen Global Forest dataset, WorldPop and the JRC Global Human Settlement Layer. A Random Forest classifier will be used to generate land cover maps of the selected areas from the Copernicus and Landsat datasets. The expected results have the potential to demonstrate that Copernicus, WorldPop and Hansen forest cover data can be useful proxies for population and migration studies. Moreover, significant changes in land use and land cover in the industrial areas, monitored over the past 20 years, are expected to reveal trends of the industrialization era in West Africa.
This research can produce effective and accurate methods for identifying the pull effects of industries in rural areas, which is essential for the implementation of policies for improved infrastructure, improved labour laws, good health and decent wages.
Global land cover mapping has aided the monitoring of the Earth's complex surface and provided vital information for understanding the interactions between human activities and the natural environment. Most global land cover products are provided with discrete classes, indicating the most dominant land cover class in each pixel. Fraction mapping, which expresses the proportion of each land cover class in each pixel, is able to characterize heterogeneous areas covered by multiple land cover classes. However, land cover fraction maps have shown unrealistic year-to-year changes, which makes it difficult to detect robust trends. To obtain more accurate and reliable fraction maps, temporal information can be used to correct false changes and improve the consistency of the time series.
In this study, such an approach is implemented by using a Markov chain model as a postprocessing step. Based on Landsat 8 imagery and Random Forest (RF) regression, initial fraction maps are created on a global scale for the years 2015 until 2018, following the approach of Masiliūnas et al. (2021). The RF-regression model is trained on over 150,000 reference points, provided by the Copernicus Global Land Service Land Cover project (CGLS-LC100) (Buchhorn et al., 2020). A Markov chain model is then applied on the fraction maps to smooth the time series. The transition probabilities of the Markov chain model are trained on over 30,000 reference points that contain multitemporal fraction data. Moreover, a recurrent RF-regression model, which also incorporates temporal information, has been implemented as a stronger baseline method.
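The exact formulation of the Markov chain postprocessing is not given in the abstract; one simple way such temporal smoothing can work is a forward pass that reweights each year's estimated fractions by the fractions propagated from the previous year through a transition matrix. The sketch below is an assumption-laden illustration (transition probabilities and fraction values are invented), not the study's implementation:

```python
import numpy as np

def smooth_fractions(obs, P):
    """Forward pass blending each year's estimated class fractions with
    the fractions propagated from the previous year by the transition
    matrix P, where P[i, j] = probability of class i becoming class j."""
    obs = np.asarray(obs, dtype=float)
    P = np.asarray(P, dtype=float)
    out = [obs[0]]
    for t in range(1, len(obs)):
        prior = out[-1] @ P              # propagate last year's fractions
        blended = obs[t] * prior         # downweight unlikely transitions
        out.append(blended / blended.sum())
    return np.array(out)

# Two classes (e.g. forest, non-forest); land cover is mostly stable
P = np.array([[0.95, 0.05],
              [0.05, 0.95]])
# Year 2 shows a suspicious one-year dip that the smoothing damps
obs = [[0.8, 0.2], [0.4, 0.6], [0.8, 0.2], [0.8, 0.2]]
smoothed = smooth_fractions(obs, P)
```

The key behavior is that an isolated, transition-improbable change is pulled back toward the stable trajectory, which is the consistency effect the study targets.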
An accuracy assessment has been executed with a subpixel confusion-uncertainty matrix to check the performance of the models’ output. All fraction maps are validated on over 28,000 reference points that contain multitemporal fraction data, and also account for possible change areas (Tsendbazar et al., 2021). All the land cover fraction reference data are provided by the CGLS-LC100 project. The fraction maps obtained by the Markov chain model are compared to the fraction maps produced by the recurrent RF-regression model.
Based on promising results in other studies, it is expected that a Markov chain model will significantly improve the accuracy and consistency of the land cover fraction maps, with less year-to-year unrealistic changes. These anticipated results could stimulate the use of fraction maps for detecting gradual land cover changes over time, which would be relevant for monitoring forests, biodiversity and land degradation.
Buchhorn, M., Lesiv, M., Tsendbazar, N.-E., Herold, M., Bertels, L., & Smets, B. (2020). Copernicus Global Land Cover Layers—Collection 2. Remote Sensing, 12(6), 1044. https://doi.org/10.3390/rs12061044
Masiliūnas, D., Tsendbazar, N.-E., Herold, M., Lesiv, M., Buchhorn, M., & Verbesselt, J. (2021). Global land characterisation using land cover fractions at 100 m resolution. Remote Sensing of Environment, 259, 112409. https://doi.org/10.1016/j.rse.2021.112409
Tsendbazar, N.-E., Tarko, A., Li, L., Herold, M., Lesiv, M., Fritz, S., & Maus, V. (2021). Copernicus Global Land Service: Land Cover 100m: version 3 Globe 2015-2019: Validation Report. Zenodo. https://doi.org/10.5281/zenodo.4723975
An Earth Observation approach for monitoring and mapping the spatial distribution of bird habitats around the Irish Sea
Walther C.A. Camaro Garcia1,2, Fiona Cawkwell1
1. Geography Department; University College Cork (UCC), Cork, Ireland
2. MaREI, the SFI Research Centre for Energy, Climate and Marine; Environmental Research Institute (ERI), University College Cork (UCC), Ringaskiddy, Co. Cork, Ireland
The Irish Sea climate is changing, in line with regional and global trends, presenting a threat to resident and migratory marine species whose conservation depends on the preservation and maintenance of coastal habitats.
As a response to those challenges, the ECHOES (Effect of climate change on bird habitats around the Irish Sea) project, funded by the European Regional Development Fund (Ireland Wales INTERREG Programme), seeks to address how climate change will impact coastal bird habitats of the Irish Sea, and what effect this could have on the society, economy, and shared ecosystems in both Ireland and Wales.
Satellite imagery is a key source of data for monitoring and mapping the spatial distribution of key habitats for the Greenland White-fronted Goose and the Eurasian Curlew.
Initially, a time series of cloud-free Sentinel-2 images covering the four seasons from Autumn 2019 to Summer 2020 was identified for the study sites on the west coast of Wales and the south-east coast of Ireland. Three radiometric indices were calculated to capture the habitat distribution and dynamics. The Normalized Difference Vegetation Index (NDVI; Sentinel-2 bands 4 and 8) highlights the condition and seasonal variation of the vegetation. The Structure Insensitive Pigment Index (SIPI; Sentinel-2 bands 1, 4 and 8) is used to identify the high variability in vegetation structure of some particular habitats. Finally, the Normalized Difference Water Index (NDWI; Sentinel-2 bands 3 and 8) is used to identify the presence of water in the estuary areas linked to tidal variation. Using field points of the key habitats, a random forest (RF) classification was run on the image stack for each site and independently validated with additional field information.
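The three indices are simple band arithmetic; a minimal sketch of the standard formulas, using the band assignments named above (the single-pixel reflectance values are hypothetical):

```python
def indices(b1, b3, b4, b8):
    """Sentinel-2 index formulas used in the study: NDVI (bands 4, 8),
    SIPI (bands 1, 4, 8) and NDWI (bands 3, 8). Inputs are reflectances."""
    ndvi = (b8 - b4) / (b8 + b4)     # high over dense, healthy vegetation
    sipi = (b8 - b1) / (b8 - b4)     # sensitive to pigment ratios
    ndwi = (b3 - b8) / (b3 + b8)     # positive over open water
    return ndvi, sipi, ndwi

# One hypothetical vegetated pixel: strong NIR, low visible reflectance
ndvi, sipi, ndwi = indices(b1=0.03, b3=0.05, b4=0.04, b8=0.40)
# Expect high NDVI and clearly negative NDWI for such a pixel
```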
In order to capture change in the habitats over the last 20 years, time series of Landsat and Sentinel-2 imagery were identified using the same criteria as previously. The radiometric indices used in the first phase of this study were calculated for the time series and the per-pixel trajectories plotted. Land cover changes were classified into temporary change (for example due to tidal state) and directional change, with the latter further divided into abrupt change (e.g. conversion of wetlands into agriculture) and trend change (e.g. tree growth over years) for each of the different indices. A variety of statistical approaches were explored to determine the dynamics of the different study areas over the past two decades.
Monitoring land-cover change dynamics is a crucial but challenging task for understanding and minimizing the anthropogenic impact on climate change and ecosystem biodiversity. Existing long time series of remote sensing data provide relevant information for observing land-cover change worldwide. Most current state-of-the-art methods for land-cover change detection rely on comparing pre- and post-change land cover maps. Due to noise in both the pre and post maps, those approaches are only suitable for providing change statistics over vast areas. Even though such statistics are essential to understand change tendencies, they cannot be used to obtain change locations at a precise spatial level, which is vital for local ecosystem management. Therefore, producing spatially accurate land-cover change maps requires considering temporal constraints between acquisitions to avoid false alarms or missed detections. In particular, differences in atmospheric conditions or acquisition configurations between acquisitions substantially impact optical images and, consequently, change detections, introducing specific border effects.
This paper addresses the use of different methodologies to reduce land-cover change detection errors. First, to account for the internal spatial variability of some land-cover classes, such as logged or degraded forest, we use an autocontextual approach (Derksen et al., 2020) based on multi-scale SLIC segmentation coupled with random-forest supervised classification. In addition, instead of standard wall-to-wall land-cover change map production, which introduces border artefacts in the change map, we use the posterior confidence of the random-forest classes. The confidence change map is obtained by crossing the random-forest confidence of one class in one period with that of other classes in other periods. Such a confidence change map allows relevant thresholds to be applied, reducing change artefacts.
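The crossing of posterior confidences can be sketched as follows; a toy NumPy illustration in which the class indices, posterior values and threshold are assumptions for the example, not values from the paper:

```python
import numpy as np

def change_confidence(post_t1, post_t2, cls_from, cls_to):
    """Per-pixel confidence of a cls_from -> cls_to transition, taken as
    the product of the RF posterior for cls_from at the first date and
    the posterior for cls_to at the second date."""
    return post_t1[..., cls_from] * post_t2[..., cls_to]

# Toy posteriors for 2 pixels and 2 classes (0 = forest, 1 = cleared)
post_t1 = np.array([[0.9, 0.1],    # pixel A: confident forest at t1
                    [0.6, 0.4]])   # pixel B: uncertain at t1
post_t2 = np.array([[0.2, 0.8],    # pixel A: confident cleared at t2
                    [0.5, 0.5]])   # pixel B: still uncertain at t2
conf = change_confidence(post_t1, post_t2, cls_from=0, cls_to=1)
changed = conf > 0.5   # threshold chosen to suppress low-confidence changes
```

Only the pixel that is confident in both periods passes the threshold, which is how thresholding the confidence product suppresses border artefacts compared with crisp map differencing.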
Experiments are conducted on a region near the Cotriguaçu municipality in Mato Grosso state, Brazil, near a deforestation front of the Amazonian forest. This area proves especially suitable for demonstrating the value of the approach, as it contains different states of forest degradation linked to a three-stage deforestation process for cattle farming that can last several months: the forest is first burnt, then part of the trees are removed, and finally the area is cleared.
The method is tested on cloud-free Sentinel-2 data from 2018 to 2020 to compare results between different intervals and capture the main changes in the area over the period concerned. Results demonstrate the value of multi-scale SLIC segmentation for capturing heterogeneous classes such as strongly disturbed forest and logged forest. In addition, the change analyses obtained by crossing the posterior confidences perform well in improving change detection accuracy and reducing border effects.
Ref: Derksen, D., et al., "Geometry Aware Evaluation of Handcrafted Superpixel-Based Features and Convolutional Neural Networks for Land Cover Mapping Using Satellite Imagery," Remote Sensing, 12 (2020): 513.
In the framework of the ESA-funded research project entitled SInCohMap “Sentinel-1 Interferometric Coherence for Vegetation and Mapping” (sincohmap.org), undertaken from 2017 to 2020, a large number of tests and options were analyzed regarding the exploitation of the interferometric coherence derived from Sentinel-1 data in multitemporal land cover and vegetation mapping.
It was demonstrated that time series of coherence provide information complementary to backscatter intensity, hence contributing to improved classification both alone and in combination with intensity. Moreover, the coherence measured in the VV channel contributes more than the coherence measured in the VH channel, contrary to what is observed for backscatter, so the use of both polarimetric channels is recommended for classification. As a third key conclusion of that project, it was found that the shortest temporal baseline (i.e., 6 days) outperforms all other temporal baselines and that there is no significant improvement in the results when additional temporal baselines are added to the 6-day one as input features.
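For reference, the interferometric coherence underlying these results is the normalized complex cross-correlation of two co-registered SAR acquisitions over an estimation window; a minimal NumPy sketch on synthetic data (the window here is simply the whole sample array):

```python
import numpy as np

def coherence(s1, s2):
    """Sample interferometric coherence magnitude of two co-registered
    complex SAR images over an estimation window (here: the full array)."""
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return num / den

rng = np.random.default_rng(1)
stable = rng.normal(size=100) + 1j * rng.normal(size=100)
noise = rng.normal(size=100) + 1j * rng.normal(size=100)

gamma_stable = coherence(stable, stable)   # unchanged scene: coherence 1
gamma_decorr = coherence(stable, noise)    # fully decorrelated: near 0
```

High coherence between 6-day acquisitions indicates temporally stable surfaces, which is why short temporal baselines carry the most class-discriminative information.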
All these conclusions were drawn from experiments carried out in three different test sites with diverse class sets and distributions: South Tyrol alpine environment (Italy), Doñana wetland and crops environment (Spain), and West Wielkopolska forest/agricultural/urban environment (Poland). Moreover, a specific study case on crop-type mapping was performed (Mestre-Quereda et al. 2020). Experiments included many different classification algorithms and strategies, as detailed in Jacob et al. (2020), which demonstrate the robustness of the project outcomes.
That project is currently being extended by exploring 3 new aspects related to the same topic:
A) Added value for classification of the combination of both ascending and descending acquisitions, since they offer different observation geometry of the same scene as well as different acquisition times (e.g., 6 am vs 6 pm in Europe).
B) Added value of the combination of Sentinel-1 coherence with Sentinel-2 optical imagery.
C) Potential usage of 6-day Sentinel-1 coherence for forest mapping and characterization in temperate and boreal regions.
Based on the results obtained with these experiments, recommendations will be presented for obtaining maximum performance in land cover and vegetation mapping by multi-track Sentinel-1 (ascending and descending) and by combination of Sentinel-1 and Sentinel-2 data.
References
A. Jacob, et al. “Sentinel-1 InSAR Coherence for Land Cover Mapping: A Comparison of Multiple Feature-Based Classifiers,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 13, pp. 535-552, January 2020.
A. Mestre-Quereda, et al. “Time Series of Sentinel-1 Interferometric Coherence and Backscatter for Crop-Type Mapping,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 13, pp. 4070-4084, July 2020.
The Callisto platform provides a highly interoperable Big Data bridge between DIAS infrastructures and Copernicus users. The outcomes of HPC-optimized machine learning solutions applied to satellite data are semantically indexed, linked to crowdsourced, geo-referenced and distributed data sources, and served to users in Mixed Reality environments, allowing virtual presence and situational awareness in any desired area of interest, augmented by Big Data analytics from state-of-the-art and scalable Deep Learning solutions. Callisto integrates Copernicus data, already indexed in a standard way on DIAS platforms such as ONDA-DIAS, and utilises HPC infrastructures for enhanced scalability. Complementary distributed data sources include Galileo signals from mobile applications and from video recordings on Unmanned Aerial Vehicles (UAVs); Web and social media data and in situ data are also available on the Callisto platform. On top of all these data sources, Artificial Intelligence (AI) technologies are applied to extract meaningful knowledge for the user community, such as concepts, change detections, activities, events, 3D models, videos and animations. AI methods are also executed at the edge, offering enhanced scalability and timely services.
In the frame of the Callisto project, there are four use cases. In one of them (PUC4), the European Union Satellite Centre (SatCen) is responsible for the development and implementation of a novel model framework for Land Border Change Detection. In this use case, an Area of Interest (AOI) will be defined with eight provisional segments as potential geographic zones for continuous monitoring (the AOI is not a limited 10x10 km image footprint, but rather the whole EU land border, where signals are continuously collected to explore relevant change). PUC4 will introduce a cueing approach in Imagery Intelligence (IMINT) Copernicus services, allowing Activity-Based Intelligence (ABI) to operate at scale, discovering patterns (i.e., events) in several satellite imagery datasets using Machine Learning algorithms able to recognise "relevant" and "non-relevant" land changes based on a user-centric definition. If the signals are considered critical, follow-up analysis can take place using VHR images provided by satellites (through the DIAS infrastructure) or UAVs, whose spatial resolution allows a more precise recognition of objects. Moreover, semantic technologies (semantic indexing, geolocalisation in text, semantic search, etc.) will be applied to those areas in order to extract meaningful information and provide added value.
In terms of temporal analysis, the land changes can be observed over a multi-temporal gap (more than one year) or in the short term (less than one month) in order to identify relevant change detection patterns. The main results will be three different outputs:
• Product 1: Rasterised relevant change detection probability layers at the EU external borders, based on Sentinel-2 data. The layers can be updated as new acquisitions are obtained at various locations, delivering a dynamic overview of the detected activity.
• Product 2: Relevant land change detection alerts delivered to the user through various apps (e.g., email, WhatsApp message, etc.) as new "events" are detected. The user can adjust the sensitivity levels for the alerts according to the area.
• Product 3: Based on validated alerts, the system will generate and propose a flight plan for a future UAV mission, seamlessly integrating satellite and drone surveillance for improved awareness.
Forest species maps have great potential in the scope of forest management: they can enhance forest inventory estimates and be used as auxiliary information to construct new thematic maps or to support the application of species-specific regression models. The use of remotely sensed data facilitates the construction and updating of these maps at different scales. Despite the widespread use of these maps, the uncertainty associated with them and the consequences of that uncertainty are often ignored.
The goal of this study was to estimate the effects of the uncertainty of forest species maps. A forest species map representing the six main forest species (Fagus sylvatica, Pinus halepensis, Pinus nigra, Pinus sylvestris, Quercus pyrenaica/faginea, and Quercus ilex) of La Rioja (Spain) was constructed using random forests models, spectral data from Landsat, and auxiliary information. To estimate the uncertainty of the map, bootstrapping techniques were implemented. Each new forest species map (one per bootstrap iteration) was compared with the original map to determine the population units for which the predicted forest species changed and those that retained the original classification.
The percentages of population units whose predicted forest species did not change over the 2000 bootstrap iterations were calculated and designated as the percentage of stable pixels, or pixel stability. The standard errors (SE) of the area estimates were generally less than 10%, with the exception of Pinus halepensis, which reached an SE of 20%. Greater SE estimates were at least partially attributed to species with less frequent occurrence among the six main forest species analyzed and with more open distributions. The percentages of stable pixels were strongly correlated with the SE estimates. For most species, more than 80% of the pixels were always classified as the same forest species, although for Pinus halepensis and Pinus nigra only 67% and 79% of the pixels, respectively, remained stable.
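The pixel stability measure can be sketched as follows; a toy NumPy illustration (the maps and the number of iterations are invented, whereas the study used 2000 bootstrap iterations):

```python
import numpy as np

def pixel_stability(original, bootstrap_maps):
    """Fraction of bootstrap classifications matching the original map,
    per pixel; 1.0 means the predicted species never changed."""
    stacked = np.stack(bootstrap_maps)        # (n_iter, rows, cols)
    return (stacked == original).mean(axis=0)

# Toy original map with species codes, plus 4 bootstrap replicates
original = np.array([[1, 2], [2, 3]])
boot = [np.array([[1, 2], [2, 3]]),
        np.array([[1, 2], [1, 3]]),
        np.array([[1, 3], [2, 3]]),
        np.array([[1, 2], [2, 3]])]
stab = pixel_stability(original, boot)
stable_share = (stab == 1.0).mean()   # share of pixels never reclassified
```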
The results of this study demonstrated that the effects of uncertainty in the forest species map are not negligible, and ignoring the effects could jeopardize the reliability of the products derived.
Identifying recent surface dynamics in the Namib Desert (Namibia) using Sentinel-1, Sentinel-2 and Landsat time series
Tobias Ullmann(1), Eric Möller(1), Felix Henselowsky(2), Bruno Boemke(3), Janek Walk(3), Georg Stauch(3)
(1) Institute of Geography and Geology, University of Würzburg, Am Hubland, D-97074 Wuerzburg, Germany
(2) Institute of Geography, Johannes Gutenberg-University Mainz, Johann-Joachim-Becher-Weg 21, D-55099 Mainz, Germany
(3) Department of Geography, RWTH Aachen University, Templergraben 55, D-52056 Aachen, Germany
_________________
With about 40% of the world's land area and more than 30% of the world's population, drylands are one of the most important environments on our planet. At the same time, deserts and desert margins react particularly strongly to climatic changes, while their sensitivity and response times are largely unknown. Despite the general scarcity of water in arid regions, rare but strong rainfall events act as important drivers of geomorphological activity, expressed in sediment erosion, transport, and deposition. However, such events and the induced surface dynamics are difficult to capture in space and time due to their inherent heterogeneity and the complexity of the process-response systems involved. In this context, arid environments are suitable targets for Earth-observation-based research, as they are usually characterized by low anthropogenic disturbance and low cloud coverage. Satellite imagery therefore provides information on the Earth's surface and its landforms, offering the unique opportunity to visualize, recognize and, potentially, quantify surface changes over time and for vast areas. The Sentinel missions mark a new epoch in this respect, as they allow processes in arid regions to be studied at high spatial and uniquely high temporal resolution. At the same time, these missions allow passive multispectral and Synthetic Aperture Radar (SAR) imagery to be employed synergistically, which opens new and promising perspectives for research, especially in the field of arid geomorphology.
This study presents first results on the characterization of recent surface dynamics in the Namib Desert via joint analyses of Sentinel-1/2 time series, the Landsat archive and in situ records. Investigations focus on the Kaukausib Catchment (southern Namib), which is located at the transition between tropical and extratropical climate influences. In this region, extraordinary rainfall events can lead to morphodynamics of high magnitude, coupled with tremendous short-term changes in vegetation cover. The occurrence and spatial dimension of such events were revealed by a Google-Earth-Engine-based analysis of the entire Landsat archive. Preliminary results indicate an event recurrence of around 6 to 11 years over the last 35 years and further point to changes in the periodicity over time, with a shift towards lower frequencies in the last decade. Focusing on the most recent event in 2018, time series of Sentinel-2 spectral indices and of Sentinel-1 SAR intensity and interferometric (InSAR) coherence were analyzed to locate and map morphodynamic activity within the catchment at high spatial and temporal resolution. Preliminary results reveal a clear response of several remote sensing features to morphodynamic activity; for example, a significant drop in InSAR coherence is found over active channels, which makes it possible to identify active drainages and, eventually, to trace the connectivity of morphological provinces/units within the catchment.
These initial results highlight the added value of remote sensing products for identifying short- and medium-term surface processes. The latest generation of Earth observation products holds high potential for improving the understanding of geomorphological/geomorphic frequency-magnitude relationships in arid regions under global climate change.
The use of remote sensing data from different observation domains is an undeniable asset for producing high-quality land cover products.
Indeed, satellites cover large areas of interest regularly and with consistent quality. As a consequence, research laboratories now exploit these data massively, as they offer new possibilities, particularly through long time series.
Satellite data can be of different but often complementary natures, which broadens the possible fields of application (water management, snow cover, crop yield, urbanization, etc.).
These new data are complemented by recent technological developments (or older ones now usable thanks to increased computing capacity, such as neural networks) and by new means of service provision and dissemination, which allow applications to cover longer periods (long time series computed more rapidly) and larger areas at different scales, sometimes simultaneously (station, local, national, continental and global scales).
iota2, developed by CESBIO and CNES with the support of CS GROUP, responds to the growing demand for an open-source tool for producing land cover maps at national scale that is generic enough to be adapted to the different objectives of users.
In addition, this project ensures the production of an annual land cover map of metropolitan France (https://doi.org/10.3390/rs9010095) with a satisfactory level of quality, thus proving its operational capacity.
iota2 integrates several families of supervised algorithms for the production of land cover maps. Pixel-based supervised algorithms (e.g., Random Forests or Support Vector Machines) can be parametrised by users through a simple configuration file. iota2 also offers the option of using a deep learning model.
In addition to pixel-based approaches, contextual approaches are also proposed: Autocontext [1] and OBIA (Object-Based Image Analysis). Autocontext, based on RF, takes into account the context of a pixel in a window around its position, while the OBIA approach exploits an input segmentation to classify objects directly.
In addition to supervised classification, iota2 can also produce indicator maps (biophysical variables), either by supervised regression or by using user-provided processors, diversifying its possible uses.
One major interest of iota2 is its ability to deal with huge amounts of data: for instance, the OSO product (https://theia.cnes.fr/atdistrib/rocket/#/collections/OSO/2327b748-a82c-5933-afb0-087bbfeff4cd) is generated from a stack of all available Sentinel-2 data over France without any landscape discontinuity due to the Sentinel-2 grid. This ability relies on OTB, a high-performance library dedicated to remote sensing algorithms developed by CNES (the French national space agency) and CS GROUP. Another point of interest is its capability to produce a land cover map wherever Sentinel-2 data and a ground truth are available (e.g., https://agritrop.cirad.fr/597991/1/Rapport_Intercomparaison_iota2Moringa.pdf).
1. Derksen, D., Inglada, J., & Michel, J. (2020). Geometry aware evaluation of handcrafted superpixel-based features and convolutional neural networks for land cover mapping using satellite imagery. Remote Sensing, 12(3), 513. http://dx.doi.org/10.3390/rs12030513
The advent of openly available Landsat and Sentinel data has democratized the field of land cover classification, as evidenced by the rapidly growing corpus of accurate high-resolution land cover products. We present our contribution to this field: a complete framework that explores the boundaries of what is possible with open data and open source software, aspiring to generate land cover predictions that are as useful as possible to as many users as possible. We do this by classifying land use/land cover with a large legend (43 classes) over a long time series (20 consecutive years). The framework consists of 1) an analysis-ready spatiotemporal data cube of Europe, spanning 20 years at 30 m resolution; 2) over 8 million harmonized land cover training points derived from LUCAS and CORINE land cover data; and 3) a spatiotemporal ensemble machine learning workflow mapping 43 land cover/land use classes for every year between 2000 and 2020 at 30 m resolution. The workflow includes hyperparameter optimization, spatial 5-fold cross-validation, and validation on an independent stratified sample-derived dataset. Our model outputs probabilities and uncertainty per class, all of which are openly available for customized use cases. We showcase how these probabilities can be translated into trends that show gradual long-term land cover/land use dynamics instead of relying only on hard class maps. The entire workflow is implemented in the new open source eumap python package available at gitlab.com/geoharmonizer_inea/eumap, while all land cover probabilities, classifications, and auxiliary data are available through the Open Data Science Europe Viewer at maps.opendatascience.eu.
Our strict accuracy assessment indicates that classifying 43 mixed land use/land cover classes remains a difficult task, which is illustrated by the much higher performance when the predicted classes are aggregated to a legend that is more optimized for remote sensing-based classification tasks. In the talk, we will describe how we preprocessed the 200+ covariates, how we created our training dataset, and the design of our machine learning workflow. Lastly, we will discuss the shortcomings, possible solutions, and future ambitions for this evolving project.
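As a hedged illustration of translating per-class probabilities into trends (toy arrays, not the eumap API), a per-pixel least-squares slope over a stack of annual probability maps can be computed in one vectorized numpy call:

```python
import numpy as np

# toy stack: 21 annual probability maps (2000-2020) for one class, shape (year, row, col)
years = np.arange(2000, 2021)
rng = np.random.default_rng(1)
prob = np.clip(0.3 + 0.02 * (years - 2000)[:, None, None]
               + 0.05 * rng.normal(size=(21, 8, 8)), 0, 1)

# least-squares slope per pixel: flatten space, fit every pixel at once
flat = prob.reshape(21, -1)
slope = np.polyfit(years, flat, 1)[0].reshape(8, 8)  # probability change per year

print(slope.mean())  # ~0.02 for this synthetic gradual increase
```

A positive slope marks a pixel gradually gaining class probability, information that a sequence of hard class maps would only reveal as an abrupt label switch.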
Validation is an integral part of any Earth Observation (EO) based map production chain and is the primary information available to map users about the quality and applicability of a map to their specific application. Today, the Copernicus Sentinels, as well as other global monitoring platforms, are used to measure different variables and develop a wide range of indicators. From Essential Climate Variables (ECV) to Sustainable Development Goals (SDG) to Key Landscapes for Conservation (KLC), everyone now has access to free and open EO imagery as well as tools to create maps at different scales, focused on their own areas and domains of interest. This map democracy comes at a price because, as EO practitioners know, not all maps are created equally well. Furthermore, novel techniques, including the Machine Learning (ML) and Artificial Intelligence (AI) paradigms, make it easy not only to produce high-quality maps but also to produce fakes. Such black-box processing techniques may not always allow map users to understand how the products they are using were produced, which is not necessary per se. It is imperative, however, that the quality of the information made available through the map product is well documented and properly quantified.
This has led to the development of a validation framework (https://doi.org/10.1080/22797254.2021.1978001) based on the Copernicus High-Resolution Hot Spot Monitoring activity (C-HSM), which delivers global datasets of Key Landscapes for Conservation (KLC) for specific sites characterized by pronounced anthropogenic pressures that require high mapping accuracy. Furthermore, evaluating, assessing and quantifying changes in land cover is one of the most important functions of EO-based map making and one of the main drivers behind sustainable development policies. Validation and map quality are fundamental for understanding and quantifying the level of change across different types of landscapes around the world due to anthropogenic pressures. Measuring the degradation/restoration of landscapes is based on variations in both time and space and therefore adds another level of complexity to the issue of trust in the maps we use.
The quality assurance and assessment framework was developed to support EO-based maps intended for policy and decision making. In reality, not all maps can undergo such rigorous validation and accuracy assessment. This talk will explore and discuss the current state of the art in quantitative accuracy assessment in the context of digital map production, with insights into the needs of both human and machine map producers and users, taking into account the goals for which the map will be applied.
In the context of climate change, land cover maps are important for many scientific and societal applications.
Nowadays, an increasing number of satellite missions generate huge amounts of free and open data. For instance,
the Copernicus Earth Observation program with the Sentinel-2 mission provides satellite image time series
(SITS) at high resolution (up to 10m) with a high revisit frequency (every 5 days). Sentinel-2 sensors acquire
13 spectral bands ranging from the visible to the shortwave infrared (SWIR) wavelengths. At the scale of a
country like Metropolitan France, one year of acquisitions corresponds to around 15 TB of data. These SITS
which combine high temporal, spectral and spatial resolutions provide relevant information about vegetation
dynamics. By using machine learning algorithms, SITS allow the automatic production of land cover maps
over large areas [1]. Although the state-of-the-art Random Forests (RF) classifier provides good classification
accuracy, it does not take into account the spatio-spectro-temporal structure of the data, e.g., modifying the
order of the temporal acquisitions would not change the prediction of the RF.
Gaussian processes (GP) are Bayesian kernel methods which encode prior knowledge of the data structure through a kernel function [2]. While GP are widely used in geospatial data analysis, they have seldom been applied to SITS analysis. Recent studies demonstrated the effectiveness of GP regression for vegetation parameter retrieval over limited areas [3], [4]. Indeed, their original formulation scales poorly with data size (GP inference involves operations that scale cubically with the number of training samples), which hinders their use over larger areas [5].
In recent decades, sparse and variational techniques have been successfully proposed in computer vision to alleviate these computational issues [6], [7]. By introducing a small set of inducing variables, sparse methods approximate the model and thus reduce the complexity. Furthermore, variational methods use a variational lower bound to optimize the locations of the inducing points. Combined with stochastic gradient descent, these methods allow GP to scale to very large data sets, both for regression and classification [8].
In this work, we evaluate the performance of variational sparse GP for SITS classification at country scale, as compared to RF. The investigated GP model, proposed by [8], is based on a multi-output regression in which GP are linearly combined and transformed by a softmax observation model into class membership probabilities. Stochastic variational methods are used to learn all the model parameters. We use a sum of kernel functions to account for spatial features in addition to the spectro-temporal features; the optimal weights between the two are found automatically during learning.
To compare the performance of GP and RF, experiments were conducted on 27 Sentinel-2 tiles in the south of France. All available acquisitions for 2018 were linearly resampled onto a common set of virtual dates with a 10-day interval [1]. For each pixel, 10 spectral bands with 10 m ground sampling distance and 3 spectral indices (NDVI, NDWI, brightness) were used; the data volume corresponds to around 5 TB. The reference data comprise 23 land cover classes divided into 8 ecoclimatic regions as described in [1]. For each ecoclimatic region, we split the data into 3 datasets: training, validation and testing. The number of samples per class was balanced (4,000 samples per class for training, 1,000 for validation and 10,000 for testing). We repeated the procedure 11 times with different datasets to ensure that performance results were correctly evaluated. Classical classification metrics such as overall accuracy, kappa and F-score were used.
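The metrics named above can all be derived from the confusion matrix; the following self-contained numpy sketch (toy labels, not the study's data) computes overall accuracy, Cohen's kappa and the macro-averaged F-score:

```python
import numpy as np

def metrics(y_true, y_pred, k):
    # confusion matrix for k classes: rows = reference, columns = prediction
    cm = np.zeros((k, k), int)
    np.add.at(cm, (y_true, y_pred), 1)
    n = cm.sum()
    oa = np.trace(cm) / n
    # Cohen's kappa: agreement corrected for chance agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2
    kappa = (oa - pe) / (1 - pe)
    # per-class F-score from precision and recall, macro-averaged
    prec = np.diag(cm) / np.maximum(cm.sum(0), 1)
    rec = np.diag(cm) / np.maximum(cm.sum(1), 1)
    f1 = np.where(prec + rec > 0, 2 * prec * rec / np.maximum(prec + rec, 1e-12), 0.0)
    return oa, kappa, f1.mean()

y_true = np.array([0, 0, 1, 1, 2, 2, 2, 1])
y_pred = np.array([0, 1, 1, 1, 2, 2, 1, 1])
print(metrics(y_true, y_pred, 3))
```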
First, we compared training an independent model for each ecoclimatic region (stratification) against training a unique model on the full area. Then, we assessed the effect of taking spatial information into account in the classification model: for RF, longitude and latitude were used as additional features, while GP used a sum of kernel functions as described above.
In all configurations, the overall accuracy of the GP model was 2 points above that of the RF model (0.78 vs 0.76). Stratification (training independent models) gave better performance than training a unique model (overall accuracy 1 point higher for both GP and RF). Finally, adding spatial information increased the overall accuracy by 1 point for RF and around 2 points for GP, showing that Gaussian processes can better account for spatial correlation. Results with larger datasets will be presented at the conference.
REFERENCES
[1] J. Inglada, A. Vincent, M. Arias, B. Tardy, D. Morin, and I. Rodes, “Operational High Resolution Land Cover Map Production at the Country Scale Using Satellite Image Time Series,” Remote Sensing, vol. 9, p. 95, Jan. 2017.
[2] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning. Adaptive Computation and Machine Learning, Cambridge, Mass: MIT Press, 2006.
[3] J. Estévez, J. Vicent, J. P. Rivera-Caicedo, P. Morcillo-Pallarés, F. Vuolo, N. Sabater, G. Camps-Valls, J. Moreno, and J. Verrelst, “Gaussian processes retrieval of LAI from Sentinel-2 top-of-atmosphere radiance data,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 167, pp. 289–304, Sept. 2020.
[4] J. Verrelst, “Machine learning regression algorithms for biophysical parameter retrieval: Opportunities for Sentinel-2 and -3.”
[5] G. Camps-Valls, J. Verrelst, J. Munoz-Mari, V. Laparra, F. Mateo-Jimenez, and J. Gomez-Dans, “A Survey on Gaussian Processes for Earth Observation Data Analysis: A Comprehensive Investigation,” IEEE Geoscience and Remote Sensing Magazine, vol. 4, no. 2, pp. 58–78, 2016.
[6] M. Titsias, “Variational learning of inducing variables in sparse gaussian processes,” in Proceedings of the Twelth International Conference on Artificial Intelligence and Statistics (D. van Dyk and M. Welling, eds.), vol. 5 of Proceedings of Machine Learning Research, (Hilton Clearwater Beach Resort, Clearwater Beach, Florida USA), pp. 567–574, PMLR, 16–18 Apr 2009.
[7] J. Hensman, N. Fusi, and N. D. Lawrence, “Gaussian processes for big data,” in UAI, AUAI Press, 2013.
[8] A. G. Wilson, Z. Hu, R. Salakhutdinov, and E. P. Xing, “Stochastic Variational Deep Kernel Learning,” arXiv:1611.00336 [cs, stat], Nov. 2016. arXiv: 1611.00336.
Rigorous exploitation of the ESA Climate Change Initiative / Copernicus Climate Change Service annual map time series provides the first spatially explicit record of land use and land cover change for the whole planet, based on daily observations over the last three decades, and delivers an estimate of the major anthropogenic land use changes at continental and global scales. The gross land cover change rate, observed annually in a wall-to-wall manner at 300 m resolution, questions current land change assessments compiled from FAO national statistics.
The ESA CCI LC annual map time series spans 29 years, from 1992 to 2020, at a spatial resolution of 300 m for the entire planet. This unique long-term land cover time series was made possible only by combining the global daily surface reflectance of five different observation systems while maintaining good consistency over time. The latter was identified as the top requirement by the climate modelling community, and innovative methods were developed accordingly for surface reflectance pre-processing and time series exploitation.
Global consistent 7-day multispectral composites at 300 m and 1 km from 1992 to 2020 were generated from the L1 radiances of the complete archives of five types of sensors: NOAA - AVHRR instrument series providing 1-km Long-Term Data Record (LTDR) v4 HRPT and LAC (1992-1999), Vegetation instruments 1 and 2 aboard SPOT 4 and 5 (1999 - 2013), ENVISAT MERIS Level 1B 300 m full (FR) and 1-km reduced (RR) resolutions, the Project for On-Board Autonomy Vegetation (PROBA-V) (2014 - 2019), and SENTINEL-3 A and B Ocean and Land Colour Instrument (OLCI) (2020).
The reprocessing of the five full mission archives makes it possible to calibrate and correct the multispectral radiance to surface reflectance according to the same standards, to geometrically align the entire time stack at pixel level, and to upgrade or replace the existing cloud screening algorithms of the respective missions to meet the strict land cover change detection requirements with regard to residual atmospheric or cloud shadow artefacts. The spatio-temporal consistency of the annual land cover maps is, in turn, built into the time series exploitation method by decoupling the precise land cover mapping driven by spectro-temporal signatures from the detection of land use and land cover change from one year to another. This approach is supported by a comprehensive typology definition based on ISO standards: the ESA CCI land cover typology was defined using the ISO 19144 Land Cover Classification System (LCCS) developed by the United Nations (UN) Food and Agriculture Organization (FAO) to describe the 22 different land categories unambiguously and to be compatible with the IPCC land classes. These standards also allow converting the land use and land cover classes into the Plant Functional Type distributions required by climate models.
Mapping and monitoring of land cover plays an essential role in effective environmental management and protection, estimation of natural resources, urban planning and sustainable development. Increasing demand for accurate and repeatable information on land cover and land cover changes drives rapid development of advanced machine learning algorithms dedicated to land cover mapping from Earth Observation data. Free and open access to Sentinel-2 data, characterized by high temporal and spatial resolution, increases the potential of remotely sensed data to monitor and map land surface dynamics with high frequency and accuracy. The most common classification approach is to classify all classes simultaneously (the so-called flat approach), which does not always give highly accurate results. Despite a considerable number of published approaches to land cover classification, clearly separating some land cover classes, for example grasslands, arable land or wetlands, remains a challenge. To address these challenges, we examined a hierarchical approach to land cover mapping. The aims of this study are: a) to compare the results of flat and hierarchical classification, b) to examine whether a hierarchical classification of Sentinel-2 data can improve the accuracy of land cover mapping and the delineation of complex classes, c) to identify the advantages and disadvantages of both approaches, and d) to assess the stability of the classification models. The study is conducted in the Lodz Province in central Poland. The land cover classification is performed on a time series of Sentinel-2 imagery acquired in 2020, using the pixel-based machine learning Random Forest (RF) algorithm. National reference databases, such as the topographic database and the Forest Data Bank (BDL), were used to prepare the training and verification sampling plots.
The following nine land cover classes are mapped: sealed surfaces, woodland broadleaved, woodland coniferous, shrubland, permanent herbaceous (grassland and pastures), periodically herbaceous (arable land), mosses and wetland, non-vegetated surfaces and water bodies. The classification is carried out following two approaches: 1) all land cover classes classified together (flat classification), and 2) a hierarchical approach dividing classes into groups and classifying them separately. The hierarchical approach first classifies the most stable land cover classes and then the most problematic ones. To assess the stability of the classification models, both classifications are performed iteratively. The obtained results confirmed that the hierarchical approach gave more accurate results than the standard flat method. The median overall accuracy (OA) for hierarchical classification was higher by 3-9 percentage points compared to the flat approach: the OA for hierarchical classification reaches 93-99%, versus 90% for the flat approach. Furthermore, visual comparison of the land cover maps derived from the two approaches confirmed that the hierarchical map is closer to reality. To assess the accuracy of the final land cover maps, independent verification was conducted using random sampling methods; the data were compared against Sentinel-2 mosaics and the national aerial orthophoto. The independent verification confirmed the higher accuracy of the hierarchical approach compared to the flat approach. For example, the mosses class achieved 100% user’s accuracy (UA) in the hierarchical classification and 82% in the flat classification, i.e., 18 percentage points higher. The largest difference in producer’s accuracy (PA) was observed for the sealed surfaces class: 92% in the hierarchical approach versus 74% in the flat approach.
These land cover classification results confirm the potential of hierarchical classification of Sentinel-2 data to improve the accuracy of land cover mapping.
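The two-stage logic of the hierarchical approach can be sketched as follows; a nearest-centroid classifier serves here as a lightweight stand-in for the Random Forest, and all pixel data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_fit(X, y):
    # nearest-centroid classifier: one mean spectrum per class
    labels = np.unique(y)
    return labels, np.stack([X[y == c].mean(0) for c in labels])

def centroid_predict(model, X):
    labels, cent = model
    d = ((X[:, None, :] - cent[None, :, :])**2).sum(-1)
    return labels[np.argmin(d, axis=1)]

# toy pixels: class 0 = water (well separated), classes 1/2 = two spectrally
# similar herbaceous classes that a flat classifier tends to confuse
X = np.vstack([rng.normal(-5.0, 1, (200, 4)),
               rng.normal(0.0, 1, (200, 4)),
               rng.normal(0.8, 1, (200, 4))])
y = np.repeat([0, 1, 2], 200)

# stage 1: separate the stable class (water) from the problematic group
m1 = centroid_fit(X, (y > 0).astype(int))
# stage 2: a dedicated model trained only on the hard-to-separate classes
m2 = centroid_fit(X[y > 0], y[y > 0])

pred = centroid_predict(m1, X)            # 0 = water, 1 = herbaceous group
pred[pred == 1] = centroid_predict(m2, X[pred == 1])
print((pred == y).mean())                 # overall accuracy of the cascade
```

The stage-2 model sees only the confusable classes, which is the mechanism by which the hierarchical cascade can outperform a flat classifier on those classes.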
The research leading to these results has received funding from the Norway Grants 2014-2021 via the National Center for Research and Development - project InCoNaDa “Enhancing the user uptake of Land Cover / Land Use information derived from the integration of Copernicus services and national databases”.
Urban managers need information for urban territorial planning and monitoring. Traditional methods are based on visual interpretation of aerial photographs or field surveys, tasks which are time-consuming. Urban changes have been studied for several decades with remote sensing images (Herold et al., 2002; Hussain et al., 2013). With the democratization of access to very high spatial resolution images, urban managers need to detect and monitor the construction state of buildings in order to update their databases. For instance, the municipality of Strasbourg (the Eurométropole, EMS) needs to monitor the state of buildings currently being upgraded or created (ca. 250 to 350 building permits per year). This information is summarized in a database called the ‘Inventory of Located Buildings’ (ILB), updated by experts twice per year, often through ground surveys. To provide information to urban managers, the image dataset should have a very high spatial resolution and a high temporal resolution (every six months or each year), and should be associated with elevation data to detect the beginning and end of urban changes.
The objective of this work is to analyze these changes over the period 2017-2020 based on tri-stereoscopic Pléiades images acquired each year during the summer period. The ILB database is enriched by adding information on changes observed between two dates, categorized into three classes: (1) "destruction", (2) "construction" and (3) "ongoing construction".
A supervised classification algorithm (ImCLASS; Déprez et al., 2020) is used to classify the evolution of urban buildings. ImCLASS is based on a Random Forest classification of selected features computed from multispectral images and from indices derived from tri-stereo Pléiades digital surface models (DSMs). The DSM-OPT web service of the ForM@Ter and THEIA data centres is parameterized to optimize results for urban environments. Digital Height Models (DHMs) are calculated using NASA's Ames Stereo Pipeline software. The DHMs were validated by comparison with heights derived from an airborne LiDAR survey acquired by the EMS; results show a median relative difference of less than 2 m (1.70 m) whatever the building height. ImCLASS makes it possible to analyse the impact of the DHM on classification results. The results show that the number of falsely detected construction sites increases in much larger proportions than the number of correct ones; nevertheless, the addition of the height attribute in the classification process increases the number of correctly detected construction sites.
We examined the current status and dynamics of vegetation in the heavily polluted Norilsk industrial region since 1985. Change detection was performed in Google Earth Engine with maximum summer NDVI from cloud- and snow-masked imagery of the Landsat 5, 7 and 8 satellites. Statistical tests were carried out on this data series, including simple linear regression and Mann-Kendall trend analysis, so the change analysis is based on NDVI trends. To better account for changes in tree and shrub cover, a similar analysis of NDMI was carried out. Analysis of the spatial structure of the trends showed that the maximum stable growth of both indices is observed southeast of Norilsk, in the Rybnaya River valley, the area most affected by pollution in the past. Validation with modern very-high-resolution images confirms the appearance of grass and shrubs in areas of strong positive trend. A similar study based on MODIS Terra/Aqua data for 2000-2020 confirmed a significant NDVI trend in the Rybnaya River valley. Analysis of vegetation changes based on very-high-resolution images showed that the greatest increase in NDVI occurs for vegetation in ravines and gullies. These vegetation classes were confirmed in the field in 2021, and the changes are attributed to climate warming in recent decades.
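The Mann-Kendall trend test used above is straightforward to implement; the sketch below (a toy annual maximum-NDVI series with hypothetical numbers, not the Norilsk data) computes the S statistic and its continuity-corrected Z score:

```python
import numpy as np

def mann_kendall(x):
    """Mann-Kendall trend test: S statistic and normal-approximation Z score (no-ties case)."""
    n = len(x)
    # S = number of increasing pairs minus number of decreasing pairs
    s = np.sign(x[None, :] - x[:, None])[np.triu_indices(n, 1)].sum()
    var = n * (n - 1) * (2 * n + 5) / 18.0  # variance of S under the null hypothesis
    z = (s - np.sign(s)) / np.sqrt(var)     # continuity-corrected Z score
    return s, z

# hypothetical annual series with a steady greening trend plus noise
rng = np.random.default_rng(0)
years = np.arange(1985, 2021)
ndvi = 0.35 + 0.004 * (years - 1985) + 0.02 * rng.normal(size=len(years))

s, z = mann_kendall(ndvi)
print(s, z)  # |z| > 1.96 indicates a significant monotonic trend at the 5% level
```

Being rank-based, the test is robust to the non-Gaussian noise typical of per-pixel NDVI series.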
A map of contemporary vegetation cover with 20 classes was compiled on the basis of 2021 field data and Sentinel-2 MSI imagery from 2015-2020. Accuracy assessment confirmed moderate to good map quality depending on the class, with better recognition in less polluted areas. Comparison of the 2021 vegetation map with 1997 field descriptions confirmed the trends of grass and shrub vegetation growing in low terrain positions. The compiled map and field data can serve as a baseline for further vegetation monitoring in this rapidly changing region.
The THEIA Land Data and Services Centre (www.theia-land.fr) is a consortium of 12 French public institutions involved in Earth observation and environmental sciences (CEA, CEREMA, CIRAD, CNES, IGN, INRAE, CNRS, IRD, Irstea, Météo France, AgroParisTech, and ONERA). THEIA was initiated with the objective of increasing the use of space data by the scientific community and public actors. Its Scientific Expertise Centers (SEC) cluster research groups around thematic domains. The "Urban" SEC gathers experts in multi-sensor urban remote sensing. Researchers in this group have structured their work around the development of algorithms for urban remote sensing using optical and SAR sensors, proposing "urban products" at three spatial scales: (1) the urban footprint, (2) urban fabrics and (3) urban objects. The objective of this poster is to present recent (>2019) advances of the Urban SEC at these three scales. For the first two, the proposed methods are adapted to the geographic context of the cities (Western cities, Southern cities first, then Northern cities). For each spatial scale, the objective is to propose validated scientific products, available now or in the near term through the THEIA Land Service and Data Infrastructure.
At the macro scale (urban footprint), an unsupervised automated approach is under development at Espace-DEV (Montpellier), funded by a CNES project (TOSCA DELICIOSA). The method is derived from the FOTO algorithm originally developed to differentiate vegetation textures in HR and VHR satellite images (Couteron et al. 2006; Lang et al. 2019). It has been optimized and packaged into the open-source FOTOTEX Python library. The method is well suited to areas with no or few urban settlement data or with rapidly growing informal settlements: no training dataset is required, and the urban footprint can be identified from a single satellite image as long as it is not covered by clouds. For Western cities where training datasets are available, the Urba-Opt processing chain, based on an automatic, object-oriented approach, has been deployed on an HPC infrastructure and has produced annually (since 2018) an urban settlement product available through the A2S dissemination infrastructure and through the Urban SEC of the THEIA land data and services infrastructure. Ongoing research between the LIVE and Espace-DEV labs focuses on using the FOTOTEX result as training data in the Urba-Opt processing chain, with the aim of proposing an updated urban settlement product for Southern cities.
At the scale of urban fabrics, products are under development at the LIVE lab. In the context of an ongoing PhD thesis (ANR TIMES) and a TOSCA project (CNES 2019-2022), Sentinel-2 single-date images are used to assess two semantic segmentation networks (U-Net), combined by feature fusion between a network trained from scratch and a network pre-trained on ImageNet. Three spectral or textural indices have been added to both networks in order to improve the classification results. The results showed a performance gain for the fusion methods. Research is ongoing to test Sentinel-1 imagery and temporal series for training in a deep architecture.
The IGN-LaSTIG (Univ. Paris-Est) has focused on the use of Sentinel-2 and VHR mono-temporal SPOT products to retrieve land cover information related to urban density. First, images undergo U-Net-based semantic segmentation at the urban object level to retrieve ‘topographic’ classes (buildings, roads, vegetation, …). Generalized information about urban fabrics is then derived from these land cover maps using another CNN architecture. Both a building density measure and a simplified Urban Atlas-like land cover map are calculated. The UMR ESPACE has focused on machine learning modelling of the evolution of the urban territories of Arctic (Yakutsk) and North-Eastern European (Baltic States and Kaliningrad) cities since the post-Soviet period, at two scales: that of the built-up area, using high spatial resolution SPOT-6/7 images, and that of urban structures, using Landsat 5 TM, Landsat 8 OLI, and Sentinel-2 MSI images. Environmental (urban vegetation), economic (agricultural transformation), and morphometric indexes have been developed to characterize the processes of urban restructuring (densification, renovation) and expansion of post-Soviet cities. A comparative analysis of the machine learning algorithms used was carried out for the South-East Baltic cities to evaluate their performance.
At the scale of urban objects (3), a map of buildings with their functions is proposed by the TETIS laboratory. The study targets the retrieval of building footprints over the entire French mainland using deep convolutional neural networks for semantic segmentation of SPOT-6/7 images (1.5 m spacing). A single model has been trained and validated on 1.2k SPOT-6/7 scenes and 20M image patches. The LIVE lab has focused on the detection of urban changes from tri-stereoscopic Pléiades imagery from 2017 to 2020. A processing chain based on a Random Forest classifier (ImCLASS) has been tested, and the impact of the height attribute on change detection has been evaluated, characterizing changes into three thematic classes.
Reaching land degradation neutrality (LDN) requires maintaining or enhancing land-based natural capital through a pro-active focus on monitoring and planning. A key indicator for change in land-based natural capital (defined as a reasonable proxy by the UNCCD) is land cover (LC). Accurate global LC time-series are thus vital to monitor natural capital change. Although the number and quality of open-access, remotely sensed LC products is increasing, all products have uncertainties due to widespread classification errors. However, the relative magnitude of uncertainties among existing LC products is largely unknown, which hampers their confident selection and robust use in integrated land-use planning. To close this gap, we quantified region-, time-period- and coarse-LC-class-specific data uncertainties for the 10 most widely used global LC time-series. To this end, we developed a novel multi-scale validation framework that accounts for differences in mapping resolutions and scale mismatches between the spatial extent of map grid cells and validation samples. We aimed for a fair validation assessment by carefully evaluating the quality of our validation samples with respect to landscape heterogeneity, which LC products often fail to classify accurately. To address this issue, we supported the validation assessment with Landsat-based measures of cross-scale spectral similarity, computed by taking advantage of the full Landsat archive in Google Earth Engine. We base our assessment on more than 1.8 million globally integrated LC validation sites, for which we mobilized around 2.8 million samples over the period 1980-2020, drawn from hundreds of sampling efforts of varied nature, from field surveys to crowdsourcing campaigns. Here, we will present the results of the assessment, providing insights into global and regional patterns of LC uncertainties. We found that no single product is more accurate than the others in mapping all LC classes, regions and time-periods.
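As an illustration of the scale-mismatch problem such a framework must address, the following minimal Python sketch (with hypothetical class codes and grid sizes; the actual framework is far more elaborate) aggregates fine-resolution reference labels to a coarser map grid by majority vote before they are compared with the map label:

```python
from collections import Counter

def majority_label(fine_labels):
    """Aggregate a block of fine-resolution labels to one coarse label
    by majority vote (ties resolved by the smallest class code)."""
    counts = Counter(fine_labels)
    top = max(counts.values())
    return min(c for c, n in counts.items() if n == top)

def aggregate_to_coarse(fine_grid, factor):
    """Collapse a 2-D list of fine labels into a coarse grid whose
    cells each cover factor x factor fine pixels."""
    rows = len(fine_grid) // factor
    cols = len(fine_grid[0]) // factor
    coarse = []
    for i in range(rows):
        row = []
        for j in range(cols):
            block = [fine_grid[i * factor + di][j * factor + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(majority_label(block))
        coarse.append(row)
    return coarse

# Hypothetical 4x4 fine reference labels aggregated to a 2x2 map grid.
fine = [[1, 1, 2, 2],
        [1, 3, 2, 2],
        [3, 3, 1, 1],
        [3, 3, 1, 2]]
print(aggregate_to_coarse(fine, 2))  # [[1, 2], [3, 1]]
```

Each coarse cell can then be scored against the map's label at the same location, which is the essence of a resolution-aware comparison.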
We will provide recommendations on the selection of fit-for-purpose LC time-series, and discuss future strategies for addressing their uncertainties in land-use planning.
Land use and land cover maps are a very important source of information in many natural resource applications: to describe the spatial patterns and distribution of land cover, to delineate the extent of various cover classes, and to perform temporal land cover change analysis and risk analysis. Information on land use and land cover and its change over time and space is of key importance, for example, in policy decision-making concerning environmentally or ecologically protected areas or native habitat mapping and restoration. Thematic maps of land use/cover are also linked to the monitoring of desertification and land degradation, key environmental processes pronounced in areas such as the Mediterranean basin.
Earth Observation (EO) data is an attractive solution for obtaining thematic maps of land use/land cover (LULC), due to its ability to provide synoptic views of the land surface at a wide range of spatiotemporal scales inexpensively, repetitively, rapidly, and even over inaccessible locations. Nowadays, a vast number of relevant operational products is available, characterized by a wide variety of spatial and temporal resolutions, which manifests the high level of maturity of this technology in this domain. Yet, before such products are used in any kind of application or research investigation, it is of major importance to evaluate their accuracy.
One of these products is the European Space Agency's (ESA) WorldCover 2020, distributed just recently. This operational product provides information on land cover on a global scale at a high spatial resolution of 10 m, classifying land cover into 11 classes based on the analysis of Sentinel-1 and Sentinel-2 EO datasets. According to ESA's official WorldCover site, which includes the validation report of the product, its estimated overall accuracy is almost 75%; however, the exact locations of the areas included in the validation dataset are not available to the public.
The present study aims at assessing the accuracy of ESA's WorldCover 2020 operational product for selected regions in Greece that represent a typical Mediterranean setting. Experimental sites were selected for their cultural, economic and environmental significance in Greece, while also including as many of the product's classes as possible. Assessment of the product's accuracy was carried out by computing a series of statistical metrics, using as reference selected locations of known land cover obtained from field visits in the areas, drone imagery, and very high resolution imagery from PlanetScope and Google Earth. The validation was developed and implemented in the R programming language, allowing a robust and reproducible implementation in open-access software.
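The statistical metrics used in such assessments typically include overall, user's and producer's accuracy derived from a confusion matrix. A minimal sketch of these standard formulas, shown in Python for illustration (the study's own implementation is in R), with a hypothetical two-class matrix:

```python
def accuracy_metrics(confusion):
    """Overall, user's and producer's accuracy from a square confusion
    matrix (rows = map/predicted class, columns = reference class)."""
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(n))
    overall = correct / total
    users = [confusion[i][i] / sum(confusion[i]) for i in range(n)]
    producers = [confusion[i][i] / sum(confusion[r][i] for r in range(n))
                 for i in range(n)]
    return overall, users, producers

# Hypothetical 2-class example (e.g. tree cover vs. other):
cm = [[40, 10],
      [ 5, 45]]
oa, ua, pa = accuracy_metrics(cm)
print(round(oa, 3))                    # 0.85
print([round(x, 3) for x in ua])       # [0.8, 0.9]
print([round(x, 3) for x in pa])       # [0.889, 0.818]
```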
The results of this validation study highlight the continuing need for assessing satellite-derived global land cover operational products, since they are powerful, low-cost and continuously upgraded tools used for observation, change detection, policy decision-making and overall land management. For this purpose, their comparison against high-resolution operational products and other means of monitoring, detecting and mapping land cover underlines the importance of accurate, up-to-date products and motivates continuous upgrades in future research. All in all, from an operational perspective, the results of our study can be of particular importance in the Mediterranean basin, since land cover products can be associated with the mapping and monitoring of land degradation and desertification phenomena, which are frequently pronounced in such areas.
The Norwegian area type map of land surface resources (AR5) is a detailed, high-precision map that classifies the land surface based on a set of criteria related to the land cover type and its current and/or potential uses. The AR5 map is used as a basis for various purposes, including legal issues. It is therefore critical that the map is kept up-to-date and precise. The updating process is costly and time-consuming, as it relies on manual interpretation of high-resolution aerial photographs. Manual interpretation can also overlook changes and may introduce subjective errors. Further, the revisit time of aerial images in Norway is at best five years, which is not optimal for a map that requires continuous updating. There is a need to improve the updating process by increasing the frequency of the updates and identifying a method that makes the entire process more effective without reducing the precision of the dataset. The pan-European very high resolution (VHR) satellite imagery, delivered at 2 m spatial resolution with highly accurate orthorectification, is a promising dataset: its spatial resolution is close to that of the aerial images, and its temporal frequency is potentially much higher (an annual product from 2022). At the same time, deep learning algorithms have shown superb performance in analysing high-resolution remotely sensed data. This study therefore explores the potential of using deep learning algorithms on the pan-European VHR image mosaics to deal with the limitations of the AR5 updating process. The proposed application uses the VHR 2018 Level-3 product with a spatial resolution of 2 m over the entire land area of Norway. The high spatial resolution of the AR5 requires that such images be analysed at least at pixel resolution to keep up with the spatial detail of the map. Semantic segmentation, which classifies images at the pixel level, is therefore the optimal approach in this context.
Using the AR5 map as training dataset and as the reference against which changes are detected, a deep learning algorithm based on the U-Net model is implemented to achieve semantic segmentation of the images into the different classes of the AR5 map. Different approaches to training the U-Net model, including partial transfer learning, are explored. The regional diversity of Norway is considered, and the country is divided into regions of varying topography, latitude and climate (land cover/use); one model is then trained for each region. Test data are kept separate from the training data for robust evaluation of the models. The trained and evaluated models are finally used for predicting the area types. The segmentation results are then compared with the existing AR5 map to detect anomalies, flagging areas for potential update.
Global land cover and land cover change maps derived from Earth Observation techniques are regularly released at multiple scales. Their endorsement by users depends in part on their quality. The Committee on Earth Observation Satellites (CEOS) of the Group on Earth Observations (GEO) plays a crucial role in coordinating the validation process [1], [2] and in ensuring that the suite of LC products is ultimately validated operationally and systematically by independent bodies.
These standards have been applied commonly to validate global land cover products [3]. Stratified random sampling [4]–[7], often used to validate global LC products at moderate spatial resolution (250 - 1,000 m) [8]–[11], has been recognized as the most efficient sampling strategy [1], while more diverse samplings are used for validations of global LC maps at high resolution (10 - 30 m) [12]–[16]. LCC validation is still in its early stages. The exercise remains challenging because the rarity of a change event complicates the estimation of the omission rate among large unchanged areas [1]. The availability of reference data decreases with time, and the poor correspondence between observation dates, i.e. validation versus detection, is a source of uncertainty. Therefore, stratified sampling is used in space, to meaningfully represent areas with high rates of LCC, and in time, to account for the availability of reference data. In benchmarking, as in round-robin activities, the additional objective is to highlight the performance of one product relative to others in order to select the best results or target improvements. Formal standards in the field of benchmarking have yet to be defined.
Building on the GlobCover experience, the CCI Medium Resolution Land Cover (MRLC) project has developed a nested sampling scheme, adaptable to multiple scales, to validate land cover and land cover change. Based on the two-stage systematic stratified cluster sampling of the Joint Research Centre (JRC) TREES dataset, designed on a lat/long geographic grid, 2,600 pre-segmented primary sampling units (PSUs) were visually interpreted on near-2010 very high resolution (VHR) images by an international network of LC experts with regional expertise and a deep understanding of the land cover legend. Land cover changes were assessed between 2010, 2005 and 2000, and updated annually from 2015 to 2020 [17]. Spatio-temporal stratifications were added to the original sampling to address omissions of land cover change. The LC and LCC assessments applied here can be called validations sensu stricto, as complete independence between the calibration and validation data is ensured, avoiding bias [1]. Finally, the CCI MRLC project contributed to benchmarking by testing a sampling strategy that increases the discrimination between binary maps within the recognized LC assessment guidelines [18].
Here, we reflect on the lessons learned from the different types of GlobCover and CCI MRLC validation experiments: the development of a scalable validation framework designed to annually assess the quality of LC and LCC maps at 10, 30 and 300 m scales, and the benchmarking of LC prototypes including multiple LC classes. We discuss potential directions for the development of land cover and land cover change validation at various scales.
[1] A. H. Strahler et al., “Global land cover validation: Recommendations for evaluation and accuracy assessment of global land cover maps,” Office for Official Publications of the European Communities, Luxembourg, Tech. Rep. EUR 22156 EN, 2006.
[2] P. Olofsson, G. M. Foody, M. Herold, S. V. Stehman, C. E. Woodcock, and M. A. Wulder, “Good practices for estimating area and assessing accuracy of land change,” Remote Sens. Environ., vol. 148, pp. 42–57, 2014, doi: 10.1016/j.rse.2014.02.015.
[3] S. V. Stehman and R. L. Czaplewski, “Design and analysis for thematic map accuracy assessment: fundamental principles,” Remote Sens. Environ., vol. 64, no. 3, pp. 331–344, 1998, doi: 10.1016/S0034-4257(98)00010-8.
[4] J. Scepan, G. Menz, and M. C. Hansen, “The DISCover validation image interpretation process,” Photogramm. Eng. Remote Sens., vol. 65, no. 9, pp. 1075–1081, 1999.
[5] P. Mayaux et al., “Validation of the global land cover 2000 map,” IEEE Trans. Geosci. Remote Sens., vol. 44, no. 7, pp. 1728–1739, 2006.
[6] P. Defourny, P. Mayaux, M. Herold, and S. Bontemps, “Global land-cover map validation experiences: toward the characterization of quantitative uncertainty,” in Remote Sensing of Land Use and Land Cover: Principles and Applications, 2012.
[7] N. E. Tsendbazar et al., “Developing and applying a multi-purpose land cover validation dataset for Africa,” Remote Sens. Environ., vol. 219, pp. 298–309, 2018, doi: 10.1016/j.rse.2018.10.025.
[8] T. R. Loveland et al., “Development of a global land cover characteristics database and IGBP DISCover from 1 km AVHRR data,” Int. J. Remote Sens., vol. 21, no. 6–7, pp. 1303–1330, 2000.
[9] E. Bartholomé and A. S. Belward, “GLC2000: A new approach to global land cover mapping from earth observation data,” Int. J. Remote Sens., vol. 26, no. 9, pp. 1959–1977, 2005, doi: 10.1080/01431160412331291297.
[10] P. Defourny et al., “GlobCover: A 300M Global Land Cover Product for 2005 Using ENVISAT MERIS Time Series,” Proc. ISPRS Comm. VII Mid-Term Symp., pp. 8–11, 2007.
[11] O. Arino, P. Bicheron, F. Achard, J. Latham, R. Witt, and J. L. Weber, “GlobCover: The most detailed portrait of Earth,” Eur. Sp. Agency Bull., vol. 2008, no. 136, pp. 24–31, 2008.
[12] Y. Zhao et al., “Towards a common validation sample set for global land-cover mapping,” Int. J. Remote Sens., vol. 35, no. 13, pp. 4795–4814, 2014.
[13] C. Li et al., “The first all-season sample set for mapping global land cover with Landsat-8 data,” Sci. Bull., vol. 62, no. 7, pp. 508–515, 2017, doi: 10.1016/j.scib.2017.03.011.
[14] P. Gong et al., “Finer resolution observation and monitoring of global land cover: First mapping results with Landsat TM and ETM+ data,” Int. J. Remote Sens., vol. 34, no. 7, pp. 2607–2654, 2013.
[15] P. Gong et al., “Stable classification with limited sample: transferring a 30-m resolution sample set collected in 2015 to mapping 10-m resolution global land cover in 2017,” Sci. Bull., vol. 64, no. 6, pp. 370–373, 2019, doi: 10.1016/j.scib.2019.03.002.
[16] J. J. Chen et al., “Global land cover mapping at 30m resolution: A POK-based operational approach,” ISPRS J. Photogramm. Remote Sens., vol. 103, pp. 7–27, 2015, doi: 10.1016/j.isprsjprs.2014.09.002.
[17] UCLouvain and ECMWF, “Copernicus Climate Change Service. ICDR Land Cover 2016 - 2019. Product Quality Assessment Report,” 2020.
[18] C. Lamarche et al., “Compilation and validation of sar and optical data products for a complete and global map of inland/ocean water tailored to the climate modeling community,” Remote Sens., vol. 9, no. 1, 2017, doi: 10.3390/rs9010036.
Land use, land-use change, and forestry (LULUCF) is a greenhouse gas inventory sector that evaluates changes in atmospheric greenhouse gases resulting from land use and land use change. It is key information for major reports of the Intergovernmental Panel on Climate Change (IPCC). LULUCF information is reported annually to the IPCC by each reporting state, and each state uses the sources of land use information available to it; hence, different methodologies with different data are used. LULUCF data from Czechia are reported from cadastral data, whose ability to detect land use changes is limited (Štych et al. 2020, Pazúr et al. 2017).
This study focuses on reporting LULUCF information from Earth observation data. The main goal is to classify Sentinel-2 multispectral data for LULUCF purposes using Google Earth Engine. The categories used in LULUCF are: Settlements, Cropland, Forestland, Grassland, Wetlands and Other land. We classified two NUTS2 regions of Czechia, Southeast (CZ06) and Central Moravia (CZ07), for the year 2018.
The first step was preparing a mosaic for classification. The mosaic was made from images with clouds masked. For each pixel, the median of the cloud-free values was taken as the final value. This procedure was applied to all S-2 bands with a resolution of at least 20 m. Then two more bands were added to the classified raster. The first was the variance of NDVI values over the period from May to October; this band helps to distinguish surfaces such as buildings (small variance) from surfaces with dynamically changing NDVI (arable land). The second band was the SRTM elevation dataset.
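The per-pixel median compositing and the NDVI-variance band can be sketched in plain Python (illustrative only; the actual processing was done in Google Earth Engine), with a hypothetical three-image stack, cloud masks, and a hypothetical NDVI time series for one pixel:

```python
from statistics import median, pvariance

def median_composite(stack, cloud_masks):
    """Per-pixel median of cloud-free observations for one band.
    stack: list of images (2-D lists); cloud_masks: matching 2-D
    boolean lists where True marks a cloudy pixel."""
    rows, cols = len(stack[0]), len(stack[0][0])
    out = [[None] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            clear = [img[i][j] for img, mask in zip(stack, cloud_masks)
                     if not mask[i][j]]
            out[i][j] = median(clear) if clear else None
    return out

# Hypothetical 1x2-pixel stack of three acquisitions; the first
# acquisition is cloudy over the second pixel.
stack = [[[1, 2]], [[3, 4]], [[5, 6]]]
masks = [[[False, True]], [[False, False]], [[False, False]]]
print(median_composite(stack, masks))  # [[3, 5.0]]

# The extra NDVI-variance band: temporal variance of a hypothetical
# May-October NDVI series for one pixel (high for arable land).
print(round(pvariance([0.21, 0.65, 0.71, 0.68, 0.55, 0.30]), 4))
```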
The Random Forest method was used for classification. Training polygons were created by two methods. The first was the semi-automatic creation of training polygons from the CORINE Land Cover vector layer for the year 2018: from the CLC2018 polygons, core areas were first created using an inner buffer of 100 m, and inside these areas circular training polygons with a diameter of 80 m were randomly generated. This method did not cover all classes, e.g. Other land (photovoltaic power plants); training polygons for such surfaces were added manually. To assess classification accuracy, combinations of the following parameters were tested: the Number of Trees (NT) ranged from 50 to 400, the Variables per Split (VPS) from 1 to 6, and the Bag Fraction (BF) from 0.1 to 0.5. In total, 450 parameter combinations were tested, and for each combination Cohen's kappa was calculated against control data. The most accurate classification, with an overall accuracy of 89.1% (Cohen's kappa 0.84), used the combination NT = 150, VPS = 3, BF = 0.1. The most dominant LULUCF class in the study area in 2018 was Cropland, with 42.78% of the overall area. Forestland covered more than a third (35.4%), Grassland 15.39%, Settlements 4.66%, and Other land and Wetlands less than 1% each (0.96% and 0.80%, respectively).
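Cohen's kappa, the score used to rank the 450 parameter combinations, can be computed from a confusion matrix as in the following plain-Python sketch (hypothetical two-class matrix for illustration):

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows = predicted class, columns = reference class)."""
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    po = sum(confusion[i][i] for i in range(n)) / total        # observed agreement
    pe = sum(sum(confusion[i]) * sum(confusion[r][i] for r in range(n))
             for i in range(n)) / total ** 2                   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical 2-class confusion matrix:
cm = [[45, 5],
      [10, 40]]
print(round(cohens_kappa(cm), 3))  # 0.7
```

In a parameter search, this value would simply be computed for each (NT, VPS, BF) combination and the maximum kept.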
From a LULUCF point of view, the combination of Sentinel-2 data with cloud-based computing (Google Earth Engine) appears very promising and acceptable for stakeholders.
Title: Towards operational surface water extent estimation from C-band Sentinel-1 SAR imagery
Authors: Jungkyo Jung, Heresh Fattahi, Gustavo H. X. Shiroma
Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA
The extent and location of surface water are essential information related to human activity and climate. Global maps delineating surface water extent have mainly been produced from optical imagery due to its high accuracy and robustness. With the rich archive of optical images accumulated over the last 30 years, the long-term changes of permanent water can be well characterized. However, the ability of optical sensors to monitor the temporal fluctuations of water extent is limited by cloud coverage and sunlight. On the other hand, active sensors utilizing microwave signals, such as the Sentinel-1 C-band Synthetic Aperture Radar (SAR), can potentially monitor the dynamics of surface water extent regardless of weather conditions.
In general, at the radar frequency and over the range of incidence angles of Sentinel-1A/B, the specular reflection of the microwave signal over open water makes water appear darker than land in SAR images.
This contrast between water and land has motivated surface water extent estimation by thresholding SAR backscatter images (Chini et al. [2019]). In practice, however, the simple assumption of water being darker than land may be violated, for example when wind-driven backscatter results in bright water, or when flat land surfaces (e.g., arid desert regions) reflect most of the microwave signal away from the radar line of sight, leading to dark land. Bright water and dark land result in underestimation and overestimation of surface water extent, respectively.
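As a toy illustration of backscatter thresholding, the sketch below applies Otsu's method, a standard histogram-based threshold chooser, to a synthetic bimodal sample of backscatter values in dB. This is an illustrative stand-in, not the scene-wide or tile-based threshold estimation used by the algorithms discussed here:

```python
def otsu_threshold(values, bins=64):
    """Otsu's method: pick the threshold that maximises the
    between-class variance of a 1-D sample (e.g. backscatter in dB)."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    centers = [lo + (i + 0.5) * width for i in range(bins)]
    total = len(values)
    best_t, best_var = lo, -1.0
    for k in range(1, bins):
        w0 = sum(hist[:k])
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = sum(h * c for h, c in zip(hist[:k], centers[:k])) / w0
        m1 = sum(h * c for h, c in zip(hist[k:], centers[k:])) / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, lo + k * width
    return best_t

# Synthetic bimodal sample: dark water near -20 dB, land near -8 dB.
water = [-20 + 0.1 * (i % 10) for i in range(200)]
land = [-8 + 0.1 * (i % 10) for i in range(300)]
t = otsu_threshold(water + land)
print(round(t, 1))  # a cut between the two modes
```

Pixels below `t` would be labelled water; the failure modes described above (bright water, dark land) are exactly where such a global cut breaks down.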
To overcome these limitations, several studies have used ancillary data or image processing algorithms to improve water estimation from SAR data. Twele et al. [2016] proposed a thresholding-based algorithm for flood mapping from Sentinel-1 data. Their algorithm starts with radiometric terrain corrected (RTC) backscatter images, estimates a global threshold for the entire scene, and improves the estimation using fuzzy-logic-based classification refinement, the height above nearest drainage (HAND) index and region growing. Despite significant improvements compared to simple thresholding algorithms, the results of the Twele algorithm still suffer from bright water and dark land in many regions of the world.
We build on the Twele algorithm and present a new algorithm that further improves surface water extent estimation by modifying the tile detection approach for threshold estimation; by extending the fuzzy logic classification with additional rules that introduce ancillary layers such as land cover maps and existing permanent water masks; and by developing a new refinement step that revisits the estimates in a second iteration based on a bimodality test.
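One common formulation of such a bimodality test is Sarle's bimodality coefficient, sketched below in plain Python on synthetic samples. This is only an illustration of the idea; the actual test used in the processor may differ:

```python
from statistics import mean

def bimodality_coefficient(x):
    """Sarle's bimodality coefficient: (skewness^2 + 1) over a
    kurtosis term; values above ~0.555 (the uniform-distribution
    benchmark) suggest a bimodal sample."""
    n = len(x)
    m = mean(x)
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    g = m3 / m2 ** 1.5                  # skewness
    k = m4 / m2 ** 2 - 3                # excess kurtosis
    return (g ** 2 + 1) / (k + 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))

# A clearly bimodal sample (two backscatter modes) scores higher
# than a unimodal (near-uniform) one.
bimodal = [-20.0] * 50 + [-8.0] * 50
unimodal = [-14.0 + 0.05 * i for i in range(100)]
print(bimodality_coefficient(bimodal) > bimodality_coefficient(unimodal))  # True
```

Tiles whose histograms fail such a test would be revisited in the second iteration rather than thresholded blindly.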
We evaluate the performance and thematic accuracy of the automatic processing chain for various sites covering surface water worldwide. We define the reference water from the water occurrence maps produced by Pekel et al. [2016], which quantify changes in global surface water over the past 32 years at 30 m resolution from Landsat imagery. Preliminary verification suggests that the surface water detection processor achieves satisfying classification results, with user's accuracies between 82.0% and 99.1% and producer's accuracies from 93.9% to 99.7%, over areas with stable water extent close to permanent water. To further evaluate the estimation accuracy over regions with more dynamic water extent, we use independent estimates from Harmonized Landsat-8 and Sentinel-2 data as well as high-resolution optical imagery.
References
Chini, M., Pelich, R., Pulvirenti, L., Pierdicca, N., Hostache, R., & Matgen, P. (2019). Sentinel-1 InSAR coherence to detect floodwater in urban areas: Houston and Hurricane Harvey as a test case. Remote Sensing, 11(2), 107.
Twele, A., Cao, W., Plank, S., & Martinis, S. (2016). Sentinel-1-based flood mapping: a fully automated processing chain. International Journal of Remote Sensing, 37(13), 2990-3004.
Pekel, J. F., Cottam, A., Gorelick, N., & Belward, A. S. (2016). High-resolution mapping of global surface water and its long-term changes. Nature, 540(7633), 418-422.
In our research, we carried out a complete land cover classification process based on Sentinel-2 images. The project area of interest was located in a cross-border territory of Hungary, Slovakia, Romania, and Ukraine. The project was completed within the framework of a cross-border cooperation program, the HUSKROUA project.
Our aim was to identify the main land cover classes in the area, which was challenging due to the following factors: 1) the large extent of the cross-border area of interest, which covered several Sentinel-2 tiles; 2) local differences in phenological phases and reflectance of specific land cover types; and 3) frequent cloud cover over the mountainous regions of the area of interest. Due to point 1), the area of interest was divided into four parts, and the land cover classification and accuracy assessment were performed for these four parts separately. The whole area of interest covered 50,110 square kilometers. We selected cloud-free or mostly cloud-free images acquired between March and October 2021 and created image mosaics of the selected tiles; the best images were from May and September 2021.
During the analysis, we segmented the images mainly based on the 10-m bands and an edge detection layer generated from the bands. The classification was performed mainly based on visible bands, NIR, SWIR, and two spectral indices generated from the bands: Normalized Difference Vegetation Index (NDVI) and Modified Normalized Difference Water Index (MNDWI). The following classes were used and successfully identified in the area: Built-up, Agriculture, Grass, Forest, Single group of trees and Other.
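For reference, the two spectral indices are simple normalized band ratios. The sketch below assumes Sentinel-2 surface reflectances, with NDVI computed from the NIR (B8) and red (B4) bands and MNDWI from the green (B3) and SWIR (B11) bands; the reflectance values are hypothetical:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index (Sentinel-2: B8, B4)."""
    return (nir - red) / (nir + red)

def mndwi(green, swir):
    """Modified Normalized Difference Water Index (Sentinel-2: B3, B11)."""
    return (green - swir) / (green + swir)

# Hypothetical surface reflectances:
print(round(ndvi(0.45, 0.05), 2))   # dense vegetation -> 0.8
print(round(mndwi(0.10, 0.02), 2))  # open water -> 0.67
```

High NDVI separates vegetation classes (Grass, Forest, Agriculture) from Built-up, while high MNDWI flags water surfaces.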
The differences in phenology and reflectance turned out to be a limitation regarding local variation but were useful regarding different dates. The accuracy assessment of the classified images was performed with a QGIS plugin developed for this purpose. The overall accuracy of the classification in the four parts of the area was between 90% and 92%. In some test areas we also used LiDAR data and RGB orthoimages, in which cases we achieved an overall accuracy of up to 96%. The classification was capable of successfully identifying the main land cover types in the area.
This research was supported by the project titled “Complex flood-control strategy on the Upper-Tisza catchment area - DIKEINSPECT”, project number HUSKROUA/1901/8.1/0088.
This abstract aims to highlight how a private company developed and implemented a lightweight, robust, and flexible process to automate the generation of land cover maps by fusing multiple data sources, enabling a public administration to reliably and frequently update its representation of the urban-rural landscape.
To deal with regional environmental, climatic, and territorial management challenges, authorities need a precise and regularly updated representation of the fast-changing urban-rural landscape. In 2018, the WALOUS project was launched by the Public Service of Wallonia (SPW), Belgium, to develop reproducible methodologies for mapping Land Cover (LC) and Land Use (LU) in the Walloon region (Beaumont et al. 2021). The first edition of this project was led by a consortium of universities and a research center and lasted three years. In 2020, the resulting LC and LU maps for 2018 (Bassine et al. 2020) replaced the outdated 2007 map (Baltus et al. 2007) and allowed the regional authorities to meet the requirements of the European INSPIRE Directive. However, although end-users suggested that regional authorities should be able to update these maps on a yearly basis, in line with the aerial imagery acquisition strategy (Beaumont et al. 2019), the Walloon administration quickly realized that it did not have the resources to understand and reproduce the method because of its complexity and the relatively concise handover. A new edition of the WALOUS project started in 2021 to bridge those gaps.
AEROSPACELAB, a private Belgian company, was selected for WALOUS’s 2nd edition thanks to its promise to simplify and automate the LC map generation process while ensuring a deep appropriation of the solution by the local authorities. This approach would allow the SPW to reliably and frequently update the LC map of Wallonia. This approach entails two crucial parts: a robust and automated model, and a deep involvement of the regional administration.
For the solution, an approach revolving around a Deep Learning (DL) segmentation model was chosen. Compared to traditional approaches, DL models do not require as much feature engineering, which played favorably in the adoption of the solution by the local authorities. The segmentation model is based on the DEEPLAB V3+ architecture (Chen et al. 2017) (Chen et al. 2018) and was implemented with the open-source DETECTRON2 framework (Wu et al. 2019), which allows for rapid prototyping. DEEPLAB V3+'s main distinguishing features are its atrous convolutions and atrous spatial pyramid pooling, which address the problem of segmenting objects at multiple scales without being too costly at inference time: the atrous convolutions widen the field of view without increasing the kernel's dimensions. Slight technical adjustments were made to tailor the architecture to the task: on the one hand, the segmentation head was adjusted to comply with the 11 classes representing the different ground covers; on the other hand, the input layer was altered to cope with the 5 data sources.
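The effect of an atrous convolution can be illustrated in one dimension: spacing the kernel taps `dilation` samples apart widens the receptive field without adding weights. A minimal sketch (illustrative only; DEEPLAB V3+ applies this in 2-D inside a deep network):

```python
def atrous_conv1d(signal, kernel, dilation):
    """1-D atrous (dilated) correlation with valid padding: kernel taps
    are spaced `dilation` samples apart, so a 3-tap kernel with
    dilation 2 spans 5 input samples while keeping only 3 weights."""
    span = (len(kernel) - 1) * dilation + 1
    out = []
    for i in range(len(signal) - span + 1):
        out.append(sum(kernel[j] * signal[i + j * dilation]
                       for j in range(len(kernel))))
    return out

x = [1, 2, 3, 4, 5, 6, 7]
k = [1, 0, -1]                   # simple difference kernel
print(atrous_conv1d(x, k, 1))    # [-2, -2, -2, -2, -2]
print(atrous_conv1d(x, k, 2))    # [-4, -4, -4]: receptive field of 5
```

Stacking several dilation rates in parallel, as atrous spatial pyramid pooling does, lets the network see the same feature map at several effective scales at once.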
Data fusion was a key aspect of this solution as the model was trained on various sources with different spatial resolutions:
• high-resolution aerial imagery with 4 spectral bands (Red, Blue, Green, and Near-Infrared) and a ground sample distance of 0.25m;
• digital terrain model obtained via LiDAR technology; and
• digital surface model derived from the aforementioned high-resolution aerial imagery by photogrammetry.
The pre-trained model was initially trained using WALOUS’s previous edition LC map (artificially augmented), and then a fine-tuning phase was performed on a set of highly detailed and accurate LC tiles that were manually labelled.
Several additional data sources and model architectures were considered and prototyped, such as the POINTREND extension (Kirillov et al. 2020) and a ConvLSTM to segment satellite imagery with high temporal resolution such as Sentinel-2 (Rußwurm et al. 2018; Belgiu et al. 2018). All were required to segment Wallonia into 11 classes ranging from natural covers – grass cover, agricultural parcel, softwood, hardwood, and water – to artificial ones – artificial cover, artificial construction, and railway. The final model achieves an overall accuracy of 92.29% on the test set consisting of 1,710 photo-interpreted points. Figure 1 shows the high-level overview of the solution's architecture, and Figure 2 gives an overview of the various predictions made by the model at a spatial resolution of 0.25 m/pixel. Besides updating the LC map, the solution also compares the new predictions with the previous LC map and derives a change map highlighting, for each pixel, the LC transitions that may have arisen between the two studied years.
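The per-pixel change-map derivation can be sketched as below; the transition encoding (old class × 100 + new class) is an illustrative choice, not necessarily the one used in WALOUS.

```python
import numpy as np

# Hypothetical class rasters: previous LC map and new prediction.
lc_prev = np.array([[3, 3, 7],
                    [3, 1, 7]])
lc_new = np.array([[3, 1, 7],
                   [1, 1, 7]])

# Encode each transition as old_class * 100 + new_class, with 0 = stable,
# so every pixel of the change map names its LC transition.
change = np.where(lc_prev == lc_new, 0, lc_prev * 100 + lc_new)
print(change)  # [[0, 301, 0], [301, 0, 0]]
```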
Regarding the appropriation of the solution by the SPW, AEROSPACELAB managed to involve the local authorities in the development of the solution through an agile approach and an iterative process. Each new iteration of the solution was presented to the end-users of the administration, and, through a feedback loop, their remarks and suggestions were taken into account to prototype the next iteration. While time consuming, this approach gives the local administration a better understanding of the challenges related to the task, as well as a better appropriation of the implemented solution. Furthermore, those monthly presentations served as a pulse check on the project's status for the local authorities, who would otherwise have had little visibility, as is often the case in such contracts.
A common hindrance to appropriation is the use of licensed software. Hence, the decision was made to use only open-source software. When multiple options were available, the one with the largest community of users was selected, as wide adoption often leads to better support.
Furthermore, as part of the handover, several interactive workshops were organized. These showed the regional authorities how to use the model and interpret its results so that they can independently update the next LC maps of Wallonia as soon as new data are made available. This ability to generate a new version of the LC map as soon as the data are available is crucial, as the delay between data acquisition and the diffusion date of the map often determines its popularity and use by the end-users.
In conclusion, the public-private partnership led to the publication of the new LC maps for 2019 and 2020 in early 2022. Moreover, the public administration will be trained to make use of the AI algorithm with each new annual aerial image acquisition.
References:
Baltus, C.; Lejeune, P.; and Feltz, C., Mise en œuvre du projet de cartographie numérique de l’Occupation du Sol en Wallonie (PCNOSW), Faculté Universitaire des Sciences Agronomiques de Gembloux, 2007, unpublished
Beaumont, B.; Stephenne, N.; Wyard, C.; and Hallot, E.; Users’ Consultation Process in Building a Land Cover and Land Use Database for the Official Walloon Georeferential. 2019 Joint Urban Remote Sensing Event (JURSE), Vannes, France, 1–4. doi:10.1109/JURSE.2019.8808943
Beaumont, B.; Grippa, T.; Lennert, M.; Radoux, J.; Bassine, C.; Defourny, P.; Wolff, E., An Open Source Mapping Scheme For Developing Wallonia's INSPIRE Compliant Land Cover And Land Use Datasets. 2021.
Bassine, C.; Radoux, J.; Beaumont, B.; Grippa, T.; Lennert, M.; Champagne, C.; De Vroey, M.; Martinet, A.; Bouchez, O.; Deffense, N.; Hallot, E.; Wolff, E.; Defourny, P. First 1-M Resolution Land Cover Map Labeling the Overlap in the 3rd Dimension: The 2018 Map for Wallonia. Data 2020, 5, 117. https://doi.org/10.3390/data5040117
Chen, L.-C.; Papandreou, G.; Schroff, F.; Adam, H., Rethinking Atrous Convolution for Semantic Image Segmentation. Cornell University / Computer Vision and Pattern Recognition. December 5, 2017.
Chen, L.-C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H., Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. ECCV. 2018.
Wu, Y.; Kirillov, A.; Massa, F.; Lo, W.-Y.; Girshick, R., Detectron2. https://github.com/facebookresearch/detectron2. 2019.
Kirillov, A.; Wu, Y.; He, K.; Girshick, R., PointRend: Image Segmentation as Rendering. February 16, 2020.
Rußwurm, M.; Korner, M., Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders. International Journal of Geo-Information. March 21, 2018.
Belgiu, M.; Csillik, O., Sentinel-2 cropland mapping using pixel-based and object-based time-weighted dynamic time warping analysis. Remote Sensing of Environment. 2018, pp. 509-523.
Substantial land cover and land use change (LCLUC) occurred in Central Europe after the Autumn of Nations in 1989 and the expansion of the European Union (EU) in 2004 and 2007. Currently, only a few studies report these changes at a regional scale (Griffiths et al., 2014; Munteanu et al., 2015). However, in order to fully understand the drivers and environmental implications of these land conversions, more spatially detailed information on land cover and land cover change trajectories is needed.
Remote sensing methods for mapping land cover and land cover change at regional scales can meet this need. Open access to increasing amounts of medium resolution satellite imagery from systems like Landsat and the emergence of high-performance cloud computing infrastructures like Google Earth Engine allow the mapping methodology to advance tremendously.
In this study, we aim to address this need for information by developing an approach for generating a multi-year record of land cover and land cover change at regional scales. We showcase our tool by generating a set of temporally consistent annual maps of land cover for Central Europe covering a 35-year period from 1985 to 2020. Moreover, we made an effort to identify the spatial patterns of land cover and its change.
Our study area spreads across four Central European countries – Czechia, Hungary, Poland and Slovakia – all of which joined the European Union within the time span of our study. We focused on eight major land cover categories: artificial land, croplands, forest, shrublands, grassland, barren land, wetland and water, and on four land cover changes: (1) croplands to semi-natural vegetation (mostly land abandonment), (2) shrublands to wooded semi-natural vegetation (mostly land abandonment), (3) grasslands to semi-natural vegetation (mostly land abandonment), and (4) croplands and vegetation classes to artificial land (mostly urban sprawl).
For mapping purposes, we used USGS Tier-1 Landsat surface reflectance products (product's code here) available on the Google Earth Engine platform, acquired between 1985 and 2020. We restricted our data set to acquisitions from the average vegetation season, starting on day 135 of the year (15 May) and ending on day 288 (15 October). We processed the imagery by screening out clouds, cloud shadows and snow with CFMASK. We also normalized the OLI reflectance to match the values from the TM and ETM+ sensors. In total, we used 20,310 images across 90 WRS-2 footprints. On average, this yielded six images per year per footprint (minimum one, maximum 11). For each year of our time frame, we used the Landsat data to calculate 84 classification metrics, including (1) the summary metrics for each spectral band: maximum, minimum, mean, median, standard deviation, and the 25th and 75th percentiles; and (2) six indices: Normalized Difference Vegetation Index (NDVI), Normalized Burn Ratio (NBR), Bare Soil Index (BSI), and Brightness, Greenness, and Wetness from a Tasseled Cap Transformation.
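A NumPy sketch of the per-band summary metrics and one index (shapes and values are toy data; in practice masked cloud/shadow pixels would be NaN, hence the nan-aware statistics):

```python
import numpy as np

def summary_metrics(band_stack):
    """Seven per-pixel summary metrics over one season of acquisitions.

    band_stack: (n_images, h, w) reflectance values for one band.
    """
    q25, q75 = np.nanpercentile(band_stack, [25, 75], axis=0)
    return np.stack([
        np.nanmax(band_stack, axis=0),
        np.nanmin(band_stack, axis=0),
        np.nanmean(band_stack, axis=0),
        np.nanmedian(band_stack, axis=0),
        np.nanstd(band_stack, axis=0),
        q25,
        q75,
    ])

def ndvi(nir, red):
    return (nir - red) / (nir + red)

# Six acquisitions of a 2x2 tile for two bands.
red = np.random.rand(6, 2, 2)
nir = np.random.rand(6, 2, 2)
features = np.concatenate([
    summary_metrics(red),
    summary_metrics(nir),
    summary_metrics(ndvi(nir, red)),
])
print(features.shape)  # (21, 2, 2): 7 metrics x 3 inputs
```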
We used all LUCAS datasets covering the years 2006, 2009, 2012, 2015 and 2018 as reference for both training and validation. We used all eight main categories of land cover differentiated in LUCAS – artificial land, cropland, woodland, shrubland, grassland, bare land, water, and wetland – plus seventy-six sub-classes. We selected plots where the land cover proportion was equal to 100% and for which the field-observed GPS location was less than 30 m away from the central point. In order to validate four specific types of land cover change (name them here), we randomly selected 80 pixels within these categories and 50 pixels within each stable land cover category. We visually interpreted these validation samples using Landsat imagery and a time series of spectral indices.
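The plot-selection rule above amounts to a simple filter; the field names here are illustrative, not actual LUCAS column names.

```python
# Minimal sketch of the reference-plot filter: keep only pure plots
# (100% of one land cover) located close enough to the central point.
plots = [
    {"id": 1, "lc_share": 100, "gps_dist_m": 12.0},
    {"id": 2, "lc_share": 100, "gps_dist_m": 55.0},  # too far from point
    {"id": 3, "lc_share": 80,  "gps_dist_m": 10.0},  # mixed land cover
    {"id": 4, "lc_share": 100, "gps_dist_m": 29.9},
]

selected = [p for p in plots
            if p["lc_share"] == 100 and p["gps_dist_m"] < 30.0]
print([p["id"] for p in selected])  # [1, 4]
```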
For mapping the land cover, we used a two-step approach to supervised classification implemented in the Google Earth Engine environment. First, we used the Random Forest (RF) classifier to generate variable importances and select the 20 best metrics as input variables. We used two methods of variable importance assessment: (1) Mean Decrease Accuracy (MDA) and (2) Mean Decrease Gini (MDG) statistics. Second, we used an ensemble approach with three non-parametric machine learning algorithms – Random Forest (RF), Support Vector Machines (SVM) and CART – to generate the annual maps. With each classifier, we generated a set of 35 annual maps. In total, we obtained 140 maps of land cover and 140 maps depicting the classification probability.
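The first step – ranking variables and keeping the 20 best – can be sketched as follows, with random scores standing in for the RF's MDA/MDG importances:

```python
import numpy as np

def select_top_k(metric_names, importances, k=20):
    """Keep the k metrics ranked most important.

    In the workflow above the importances come from Random Forest
    MDA/MDG statistics; here they are random stand-ins.
    """
    order = np.argsort(importances)[::-1][:k]
    return [metric_names[i] for i in sorted(order)]

rng = np.random.default_rng(0)
names = [f"metric_{i:02d}" for i in range(84)]  # the 84 metrics
importances = rng.random(84)
subset = select_top_k(names, importances, k=20)
print(len(subset))  # 20
```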
We used our maps to create a time series of land cover information spanning 1985-2020. We analysed these data to detect land cover change, delineate stable areas, and flag impossible trajectories of change (i.e. trajectories with implausibly high diversity, indicating misclassification). To do so, we first generated a map of the eight stable land cover categories and later focused on detecting changes in three classes: croplands, shrublands and grasslands. We specified four change processes: (1) croplands to seminatural vegetation (herbaceous and woody), (2) grassland to seminatural vegetation (herbaceous and woody), (3) shrublands to forests and (4) vegetation to artificial lands.
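One simple way to flag an "impossible" trajectory is to count class transitions in a pixel's annual label series; the threshold below is an illustrative assumption, not the rule used in the study.

```python
def flag_implausible(trajectory, max_transitions=3):
    """Flag a pixel's annual label series as implausible.

    A pixel whose labels change class more often than a threshold is
    treated as misclassification noise rather than real change.
    """
    transitions = sum(a != b for a, b in zip(trajectory, trajectory[1:]))
    return transitions > max_transitions

stable = ["forest"] * 35
abandoned = ["cropland"] * 20 + ["grassland"] * 15        # one real change
noisy = ["cropland", "forest"] * 17 + ["cropland"]        # flickering labels

print(flag_implausible(stable))     # False
print(flag_implausible(abandoned))  # False
print(flag_implausible(noisy))      # True
```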
We evaluated the accuracy of the annual land cover maps and then of the land cover change products. The average overall accuracy of the land cover maps was about 90%. We obtained the highest user's and producer's accuracies, above 95%, for forests and water, with artificial land and croplands slightly lower, about 80 to 86%. Classification uncertainty was lower in more heterogeneous landscapes, e.g., the northern Carpathians. We based the accuracy assessment on stratified random sampling, with strata based on the land cover categories. We did not implement proportional allocation and instead increased the sample size for rarer classes.
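Overall, user's and producer's accuracies can be derived from a confusion matrix as sketched below (toy numbers, not the study's validation data):

```python
import numpy as np

def accuracies(cm):
    """Overall, user's and producer's accuracy from a confusion matrix.

    cm[i, j]: validation points of reference class i mapped as class j,
    i.e. rows are the reference labels and columns the map labels.
    """
    overall = np.trace(cm) / cm.sum()
    users = np.diag(cm) / cm.sum(axis=0)      # commission side (map)
    producers = np.diag(cm) / cm.sum(axis=1)  # omission side (reference)
    return overall, users, producers

# Toy 3-class matrix (e.g. forest, cropland, water).
cm = np.array([[48, 2, 0],
               [3, 45, 2],
               [0, 1, 49]])
oa, ua, pa = accuracies(cm)
print(round(oa, 3))  # 0.947
```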
Over 35-years, the forest cover and proportion of artificial lands in Central Europe have increased. At the same time, the croplands and grasslands areas declined.
We conclude that our approach provides a useful template for large scale mapping and assessment of land cover dynamics. Our land cover dataset can be used for various potential applications and many areas of environmental impact assessment and management.
Acknowledgements
We gratefully acknowledge support by the National Science Centre, project TRACE [project no. 2018/29/B/ST10/02979]. This is a contributing project of the Global Land Programme.
High quality field reference data are particularly essential in modern machine learning agri-environmental and crop monitoring algorithms (Elmes et al 2020) – not only to train, but also to validate land cover maps and estimate crop areas (Olofsson et al 2014; Stehman & Foody 2019). Such field information is available through the Pan-European LUCAS survey campaign conducted on a three-yearly basis since 2006 (Eurostat 2021). Recent research has improved the uptake of these data for EO applications, through semantic product and nomenclature comparison (Buck et al 2015), harmonizing the LUCAS micro data sets from 2006-2018 (d'Andrimont et al 2020) and using the LUCAS Copernicus survey module to transfer the essential LUCAS point information to polygon reference labels (d'Andrimont et al 2021).
Extending these approaches, we applied a Convolutional Neural Network (CNN) approach, using the Python programming language and the Keras and TensorFlow libraries.
The CNN model is based on an operational crop photo assessment workflow that is applied to optimize field data campaigns as part of the Common Agricultural Policy (CAP) monitoring for the European Commission (Haub 2019). At that time, it was trained for 45 epochs on more than 70,000 geotagged and labeled crop photos from on-site inspections we conducted in Germany between 2012 and 2019, resulting in an overall model accuracy of 90%. Based on this CNN, a web-based prototype has been set up, which is available for demonstration here.
We have further developed and applied this approach to identify and label crops in the applicable LUCAS field photos (cardinal direction photos looking N-E-S-W) from 2006-2018. Using this information we filtered and qualified the LUCAS field information to enhance the “machine readability” and application potential for Earth Observation and Sentinel satellite data-based classifications. This includes the annotation of land cover in the LUCAS point vicinity and construction of reference polygons extending into similar land cover directions. The approach is currently extended to encompass further crop classes at the European scale using the LUCAS database. In our talk we will present the output of our current work to enhance the LUCAS crop information for national and continental EO application. Our work is embedded into a study to analyse and improve the transferability of satellite-based Artificial Intelligence models in space and time (Uebersat).
### References
Buck, Oliver, Carsten Haub, Sascha Woditsch, Dirk Lindemann, Luca Kleinewillinghöfer, Gerard Hazeu, Barbara Kosztra, Stefan Kleeschulte, Stephan Arnold and Martin Hölzl. 2015. Task 1.9 - Analysis of the LUCAS nomenclature and proposal for adaptation of the nomenclature in view of its use by the Copernicus land monitoring services. Service contract report No. 3436/B2015/R0-GIO/EEA.56166. Copenhagen: European Environment Agency (EEA). http://land.copernicus.eu/user-corner/technical-library/LUCAS_Copernicus_Report_v22.pdf.
d'Andrimont, Raphaël, Momchil Yordanov, Laura Martinez-Sanchez, Beatrice Eiselt, Alessandra Palmieri, Paolo Dominici, Javier Gallego, et al. 2020. Harmonised LUCAS in-situ data and photos on land cover and use from 5 tri-annual surveys in the European Union. arXiv:2005.05272 [stat] (11 May). http://arxiv.org/abs/2005.05272 (accessed: 16 September 2020).
d'Andrimont, Raphaël, Astrid Verhegghen, Michele Meroni, Guido Lemoine, Peter Strobl, Beatrice Eiselt, Momchil Yordanov, Laura Martinez-Sanchez and Marijn van der Velde. 2021. LUCAS Copernicus 2018: Earth-observation-relevant in situ data on land cover and use throughout the European Union. Earth System Science Data 13, No. 3 (19 March): 1119–1133. doi:10.5194/essd-13-1119-2021.
Eurostat. 2021. LUCAS - Land use and land cover survey. 15 November. https://ec.europa.eu/eurostat/statistics-explained/index.php?title=LUCAS_-_Land_use_and_land_cover_survey.
Elmes, Arthur, Hamed Alemohammad, Ryan Avery, Kelly Caylor, J. Ronald Eastman, Lewis Fishgold, Mark A. Friedl, et al. 2020. Accounting for Training Data Error in Machine Learning Applied to Earth Observations. Remote Sensing 12, No. 6 (23 March): 1034. doi:10.3390/rs12061034.
Haub, C., 2019. 2 years IACS monitoring pilots - selected German cases: approach, results and way ahead. https://marswiki.jrc.ec.europa.eu/wikicap/images/f/f9/05_new_v2.4_2019-11-26_EFTAS.pdf. Available at: https://marswiki.jrc.ec.europa.eu/wikicap/index.php/Prague_2019 [Accessed November 22, 2021].
Olofsson, Pontus, Giles M. Foody, Martin Herold, Stephen V. Stehman, Curtis E. Woodcock and Michael A. Wulder. 2014. Good practices for estimating area and assessing accuracy of land change. Remote Sensing of Environment 148 (May): 42–57. doi:10.1016/j.rse.2014.02.015.
Pflugmacher, Dirk, Andreas Rabe, Mathias Peters and Patrick Hostert. 2019. Mapping pan-European land cover using Landsat spectral-temporal metrics and the European LUCAS survey. Remote Sensing of Environment 221 (February): 583–595. doi:10.1016/j.rse.2018.12.001.
Stehman, Stephen V. and Giles M. Foody. 2019. Key issues in rigorous accuracy assessment of land cover products. Remote Sensing of Environment 231 (September): 111199. doi:10.1016/j.rse.2019.05.018.
Crop classification still has a large number of unsolved problems that require new methods and instruments to achieve maximum mapping reliability. Some of these problems can be partly solved with state-of-the-art computer vision techniques, which make it possible to build very accurate land cover and crop type maps. Such methods already show good performance on small experimental sites all around the world. However, using them at the regional or even country level is still a very challenging task – and the challenge lies not only in the large amount of computational resources required. Modern convolutional deep learning methods require training data in a special format: usually, manually and fully labeled squares of fixed size.
In terms of forming a data collection for the crop classification task, the biggest problem is the impossibility of accurately photointerpreting crop features for training data labelling. Real-life crop profiles show very high variability of features, and analysis of NDVI sequences, or simply visual analysis of true color or any other satellite band combination, will not be accurate or reliable. The only way to form a good training or validation dataset is therefore a ground survey along the roads of the territory of interest. This creates another problem: the real-life distribution of crop types on the land is not uniform, so the resulting datasets are very unbalanced in machine learning terms. It is a common situation that most ground truth samples represent majority crop classes, while minority classes may be represented by only a few samples. In pixel-based classification, this problem can be fixed in various ways. The most common method is to read a fixed number of pixels from the satellite data for each field, so that the distribution of pixels across classes is uniform and balanced. Another way is to extend the number of pixels for minority classes by simulating new values from the available ones, with random noise added for dispersion control and overfitting avoidance. However, such approaches do not work with convolutional neural networks. If a ground truth data collection contains thousands of fields, it is possible to extract millions of pixels from moderate or high resolution satellite data for pixel-based classification, but the same fields can cover only a few hundred fully labeled squares for the segmentation task. This is why, for the crop classification task, the development of robust ground truth data simulation methods is very promising.
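The pixel-based balancing strategies described above can be sketched as follows; the class names, sample counts and noise scale are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def balance_pixels(fields, n_per_class):
    """Equalize a pixel training set across crop classes.

    fields: dict class -> 1-D array of pixel values (toy 1-D 'pixels'
    stand in for real multi-band samples). Majority classes are
    subsampled; minority classes are oversampled, with small Gaussian
    noise added to the duplicates as described above.
    """
    balanced = {}
    for crop, pixels in fields.items():
        if len(pixels) >= n_per_class:
            balanced[crop] = rng.choice(pixels, n_per_class, replace=False)
        else:
            extra = rng.choice(pixels, n_per_class - len(pixels), replace=True)
            extra = extra + rng.normal(0, 0.01, extra.shape)  # dispersion control
            balanced[crop] = np.concatenate([pixels, extra])
    return balanced

data = {"maize": rng.random(5000), "buckwheat": rng.random(40)}
out = balance_pixels(data, n_per_class=1000)
print({k: len(v) for k, v in out.items()})  # both classes now have 1000 pixels
```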
In this work we present a new method of synthetic training data generation for the crop classification task, based on a deep Generative Adversarial Network (GAN). The method uses a computer vision approach: image-to-image translation. We trained the GAN on the available ground truth data to generate time series of Sentinel-1 VV and VH polarization bands from 256x256 segmentation masks. The resulting model allows us to simulate realistic images with different distributions of minority classes. Combining the data simulated by this method with real data improved the recognition of minority classes in crop classification maps built with U-Net, a deep convolutional neural network architecture.
Latest EO applications promote digitalization in agriculture, specifically the collection of agroecological data as Big Data through remote sensing, sensor networks, and other geospatial data. The uses of remote sensing in agriculture are manifold and cover a broad spectrum of topics, from crop identification and biomass estimation to assessments of soil properties like pH, moisture, and clay content. Delivered as digital solutions with near real-time processing, remote sensing-based information can be used as a tool for decision making at multiple scales, from subplots (e.g., management zones) to regional and global scales, for farmers, agribusiness, scientists, and policy makers.
However, the valorization of remote sensing data in agriculture currently reveals several challenges. The development of remote sensing products is closely intertwined with sufficient access to ground truth information to improve product quality, accuracy, and reliability of products and thus is relevant for the acceptance of such applications.
The large variety of remote sensing data with diverse properties is often countered by limited access to ground truth information. There is a great need to connect agricultural research networks and databases to facilitate information access and flow between different disciplines in the context of sustainable and future-oriented agriculture. The need for FAIR field data and closer linkage of remote sensing data can improve the predictive value of remote sensing products for sustainable natural resource use.
With the InsituDB we have launched a complete digital data framework to capture ground truth information. It covers offline data acquisition in the field, asynchronous transfer of the data into the data portal, processing of the data via standardized communication protocols, and the final dissemination of information via standardized open-access web service interfaces and visualizations.
Data acquisition in the field is divided into three distinct, independent, but mergeable surveys covering the methodological compartments – biophysical, soil and spectral parameters of agricultural production. The collection strategy is oriented towards international EO initiatives such as JECAM or ESA's FRM4Veg. The sampling design of this framework enables data collection by a broader community, such as farmers, students, researchers and interested citizens. In addition, the use of the cross-platform multilingual survey tool opens access for other interested partners from society and science in the spirit of the Citizen Science idea. InsituDB consists of four parts. Part one collects agroecological data directly in the field by entering measurements and estimations into the corresponding input masks of the three available surveys. Part two includes optional data entry of laboratory measurements, which are typically completed after the field collection by analysis of samples in the laboratory. Part three covers data transfer from the field instruments to the storage and processing server, which is fast, reliable, and redundant to minimize data loss. The fourth part of InsituDB is the visualization and dissemination component, where raw data is quality checked, processed, aggregated, visualized, and prepared for download. Consequently, this enables a wide range of applications in the context of precision agriculture and near-real-time validation of remote sensing data.
The digital data management of the InsituDB approach from the field to the data portal for standardized data collection, processing, and provision of agroecological information minimizes the steps and time required between data collection, information provision and knowledge transfer. The core component of our framework provides datasets according to open-access and FAIR principles, offering the advantage of making this information available and usable for various applications in a timely manner. Providing the data in state-of-the-art data exchange formats, such as CSV, JSON or Web-Mapping services increases interoperability and use in multiple applications. The freely available provision of aggregated datasets, as well as the low-threshold access to validated raw data further expands the utilization of the datasets. Visualization and access are accomplished through innovative geospatial technologies, including timely quality control.
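Serving the same aggregated records as both JSON (for web services) and CSV (for spreadsheet users) needs only the standard library; the record fields below are illustrative, not the actual InsituDB schema.

```python
import csv
import io
import json

# Toy aggregated records as they might leave the quality-checked store.
records = [
    {"plot": "A1", "crop": "winter wheat", "lai_mean": 3.4, "n_samples": 12},
    {"plot": "A2", "crop": "maize", "lai_mean": 1.9, "n_samples": 9},
]

# JSON payload for a web service endpoint ...
json_payload = json.dumps(records, indent=2)

# ... and a CSV export built from the very same records.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=records[0].keys())
writer.writeheader()
writer.writerows(records)
csv_payload = buf.getvalue()

print(csv_payload.splitlines()[0])  # plot,crop,lai_mean,n_samples
```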
The InsituDB platform demonstrates how highly specific scientific data collected through a complex sampling design can be used in a variety of ways and offered to different users and stakeholders through state-of-the-art data management and visualization techniques. The development and integration of the InsituDB platform in scientific research and teaching concepts at university level benefits the education of young scientists.
Sampling strategies for land vegetation should be developed to capture the spatial and temporal dynamics of vegetation. The spectral distortion of the signal received at sensor level, due to the optical path across vegetation structures and to temporal changes (both diurnal and seasonal trends), must be evaluated for a correct interpretation of the retrieved vegetation traits. In this work, the leaf and canopy reflectance variability in the PRI spectral region (i.e., 500 – 600 nm) is quantified using different laboratory protocols that consider instrumental and experimental set-up aspects as well as canopy structural effects and vegetation photoprotection dynamics. Current rapid technological improvement in optical spectroradiometric instrumentation provides an opportunity to develop innovative measurement protocols, where the remote quantification of the plant physiological status can be determined with higher accuracy in close-range remote sensing approaches. We studied how an incorrect characterization of the at-target incoming radiance translates into an erroneous vegetation reflectance spectrum and, consequently, into an incorrect quantification of PRI. Our results corroborate the hypothesis that the commonly used method of estimating the at-target incoming radiance with a horizontal white reference panel produces a bias between the real photosynthetic plant surface reflectance factor and the remotely estimated top-of-canopy reflectance factor. The biased characterization of the at-target incoming radiance translated into a 2% overestimation of chlorophyll content and a 31% underestimation of PRI-related vegetation indexes. We then investigated the dynamic xanthophyll pool and intrinsic Chl vs. Car long-term pool changes affecting the PRI spectral region. Consistent spectral behaviours were observed for the leaf and canopy experiments.
Sun-adapted plants showed a larger optical change in the 500 – 600 nm spectral range and a higher capability for photoprotection during the light transient time than shade-adapted plants. The results of this work highlight the importance of well-established spectroscopy sampling protocols for detecting subtle spectral features in remote sensing studies.
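For reference, the PRI is computed from reflectance factors at 531 and 570 nm; the sketch below shows the standard formulation with toy radiance values, where the panel reflectance is an assumed calibration constant:

```python
def reflectance_factor(target_radiance, reference_radiance,
                       panel_reflectance=0.99):
    """Reflectance factor from a white-reference-panel reading."""
    return panel_reflectance * target_radiance / reference_radiance

def pri(r531, r570):
    """Photochemical Reflectance Index: (R531 - R570) / (R531 + R570)."""
    return (r531 - r570) / (r531 + r570)

# Toy radiances at the two PRI wavelengths (arbitrary units), with the
# same white-panel reading characterizing the incoming radiance.
r531 = reflectance_factor(12.0, 100.0)
r570 = reflectance_factor(15.0, 100.0)
print(round(pri(r531, r570), 4))  # -0.1111
```

A biased panel reading scales both radiance ratios, which is why an incorrect incoming-radiance characterization propagates directly into the reflectance spectrum and the index.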
Up-to-date and accurate maps of crop types are important information needed in many operational scenarios to help monitor the environment and shape agricultural policies. They are also a necessary step in many further analyses, e.g. crop yield prediction, drought monitoring, detection of field abandonment due to conflicts or migrations, etc. To produce such maps over large areas, at country-wide or regional scales, satellite data are used, as they constitute a relatively cheap and efficient way to achieve highly accurate and time-consistent results. This is especially evident in regions where other sources of crop information (e.g., governmental statistics) are sparse. To achieve the best crop type mapping results from satellite imagery, Machine Learning (ML) methods are used, among which supervised approaches are reported to outperform unsupervised ones. Supervised methods require a representative dataset of reference samples, which are used to train models able to map full scenes. Such reference data are traditionally collected in in-situ campaigns, which usually involve the manual work of enumerators who need to visit different parts of an area of interest to geo-localise a significant number of fields for each considered crop type. This work is costly and time consuming, so we investigate here different approaches aiming at a reduction of manual data collection efforts. These include: (1) the utilisation of drone data together with a manual data collection campaign to automatically extract additional reference samples, and (2) the employment of training samples from the same area but acquired during another year/season, or collected for other areas within the same eco-climatic region. The extended training datasets obtained using the above methods are then used as input to an ML-based classification system, which uses Sentinel-2 time series data to map large areas with a Random Forest classifier.
The performance of the extended dataset is compared to that obtained using the original dataset collected by enumerators during field inspections.
Our experiments are conducted using four datasets consisting of both drone orthomosaics and reference shapefiles with crop type information. Two of them were collected for Kasungu district in Malawi: the first acquired in May 2018, the second in September 2018. The other two datasets were collected in Gaza province of Mozambique (in May 2019 and May 2021). Each dataset consists of about 30-40 RGB orthomosaics covering about 0.1 to 1 sq km, with ground resolution varying between 1 and 7 cm. The orthomosaics were acquired at different times of day, so different light and shadow characteristics are present. Other important issues include the presence of weeds, a significant share of harvested fields, early stages of plant development, and mixed crops. In all datasets Maize is the dominant crop type; other major crop types vary, but usually include Cassava, Cow peas, Groundnuts and Rice.
The first method that we investigate for reducing the manual work of the field campaign uses drone data covering areas where enumerators were sent to collect reference polygons with crop type information. Using the collected reference polygons, we train a classification model which can classify whole drone scenes into several non-crop and several crop classes. Non-crop classes have been added to properly detect the crop mask. By experimenting with different numbers of polygons forming the training dataset, we show how far we can reduce the number of manually collected crop polygons while preserving the assumed classification accuracy of the Sentinel-2-based large-scale classifier, thanks to the additional samples detected on drone images. For classification of drone images, we use convolutional neural networks (CNNs) pretrained on computer vision datasets like ImageNet. This allows us to take full advantage of the visual contextual information analysed by this kind of network, which is particularly suitable for detecting crop types on image data with centimetre resolution, where plant structures are visible together with the surrounding bare soil and/or weeds. As we have shown in our previous work, without the knowledge transferred from computer vision, the networks could not be trained from scratch to produce reliable results. On the other hand, applying simple shallow networks or Random Forest to detect crop types on such drone data performs much worse than pretrained CNNs.
The second approach that we investigate is based on the same pretrained CNN architectures and uses datasets which are distant in time and/or space. In this scenario, drone data classification models previously trained on other campaigns are fine-tuned using data from a current, reduced in-situ campaign. We investigate how far the collected ground truth information can be reduced while maintaining an assumed level of accuracy of the Sentinel-2-based classification.
The presented work compares different approaches for reducing the amount of ground truth data. The results could be further combined with cost models to plan the most economically efficient campaign strategies.
Information on forest growth is important for a wide range of environmental applications, from global estimates of the terrestrial carbon cycle to sustainable forest management. Moreover, assessing forest growth increases our understanding of the related ecosystem goods and services. The traditional way to measure forest growth is via field measurements, at the expense of considerable time and cost. When scaling up to large areas, forest growth is usually characterized using long-range remote sensing instruments, such as EO satellites or airborne LiDAR, which provide estimates of forest growth at large scale. Nevertheless, these techniques fail to capture the complete vertical structure of forest stands. Terrestrial laser scanning (TLS) provides a non-destructive characterization of the structure of forests.
Terrestrial LiDAR is a powerful tool for assessing forest structure, allowing us to capture the three-dimensionality of forest stands at a level of detail not achievable with established non-destructive (or even destructive) techniques. This potential has been widely presented in the literature, where the main focus has been estimating easy-to-measure parameters such as diameter at breast height (DBH) or height. However, thanks to repeated measurements, TLS has the potential to provide forest structural parameters together with their dynamics at a high level of detail, not only at plot level but for individual trees and even individual branches.
Our study area is located in Loobos, an evergreen coniferous forest in the Netherlands. Two TLS fieldwork campaigns were carried out, in 2011 and 2021, with a RIEGL VZ-400 terrestrial LiDAR. The two datasets were co-registered and objects were semi-automatically extracted. Further, leaves were digitally removed from the trees and Quantitative Structure Models (QSM) were used to model the main stem and main branches. Our results show that volume growth can be estimated directly from the point clouds and that branch growth can be detected from modelled branches. We have demonstrated that changes in tree structure and growth can be effectively detected and estimated from LiDAR scans.
Radiata pine (Pinus radiata D. Don) is the most widely planted exotic pine species in the world, with large areas established especially in the Southern Hemisphere. In New Zealand, radiata pine is the dominant plantation species and constitutes 90% of the current 1.7 Mha plantation area. Up to 2.8 Mha of new forest must be planted by 2050 to achieve the country's carbon-neutral target. Climate change brings great risk to future forest productivity in New Zealand, as trees have non-linear growth responses to changing carbon dioxide concentrations, air temperature and water stress. Research clearly shows that different radiata pine genotypes vary markedly in their environmental preferences. In order to match the genotype to each site, a phenotyping platform for the trees is being developed.
This platform includes measurements at several scales. Reference measurements of needles and the trees themselves include needle reflectance, needle pigment and nitrogen contents, photosynthesis parameters, and structural tree parameters. Hyperspectral reflectance measurements of potted trees are conducted using a camera installed on a 2 m high fixture with the tree pots on a conveyor belt. Plantation trees are regularly measured using unmanned aerial vehicles (UAVs) equipped either with a hyperspectral camera or a laser scanner.
The data are used to characterize plant health, especially needle water and pigment content and infections with Dothistroma needle blight and red needle cast. Hyperspectral measurements of nutritional deficits, of biochemical limitations on photosynthesis and of long-term effects of water stress have been successful (Watt et al., 2019, 2020, 2021).
Additionally, plant growth and structure are measured using spectral and laser scanning data. First results show strong correlations between the UAV hyperspectral data and structure parameters measured by laser scanning. Thus, close-range remote sensing is considered a powerful tool for rapid phenotyping of radiata pine plantations, and the methods will be transferred to large areas.
References:
Watt, M.S., Pearse, G.D., Dash, J.P., Melia, N. & Leonardo, E.M.C. (2019): Application of remote sensing technologies to identify impacts of nutritional deficiencies on forests. ISPRS Journal of Photogrammetry and Remote Sensing, 149, 226-241, https://doi.org/10.1016/j.isprsjprs.2019.01.009
Watt, M.S., et al. (2020): Monitoring biochemical limitations to photosynthesis in N and P-limited radiata pine using plant functional traits quantified from hyperspectral imagery. Remote Sensing of Environment, 248, 112003, https://doi.org/10.1016/j.rse.2020.112003
Watt, M.S., et al. (2021): Long-term effects of water stress on hyperspectral remote sensing indicators in young radiata pine. Forest Ecology and Management, 502, 119707, https://doi.org/10.1016/j.foreco.2021.119707
Street-level imagery holds immense potential to scale up in-situ data collection, enabled by increasing computing resources, cheap high-quality cameras and recent advances in deep learning. We present a framework to collect and extract crop type and phenological information from street-level imagery using computer vision. During the 2018 growing season, high-definition pictures were captured with side-looking action cameras in the Flevoland province of the Netherlands. Every month from March to October, a fixed 200-km route was surveyed, collecting one photo per second and resulting in a total of 400,000 geo-tagged photos. In 220 specific parcels, corresponding to 200,000 photos, detailed crop phenology observations were made for 17 crop types: carrots, green manure, grassland, maize, onion, potato, summer barley, sugar beet, spring cereals, spring wheat, tulips, vegetables, winter barley, winter cereals and winter wheat. Classification was done using TensorFlow with a number of well-known image recognition models, primarily based on transfer learning with convolutional neural network modules (MobileNet). A hypertuning methodology was developed to obtain the best performing model among 160 models. This best model was applied to an independent inference set, discriminating crop type with a Macro F1 score of 88.1% and main phenological stage with 86.9% at the parcel level. The potential and caveats of the approach, along with practical considerations for implementation and improvement, are discussed. The proposed framework speeds up high-quality in-situ data collection and opens avenues for massive data collection via automated classification using computer vision.
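The Macro F1 score used above averages per-class F1 with equal weight, so rare crop types count as much as dominant ones. A minimal pure-numpy implementation (with made-up labels, not the study's data) might look like this:

```python
import numpy as np

def macro_f1(y_true, y_pred, classes):
    """Macro-averaged F1: compute per-class precision/recall/F1,
    then average the F1 scores with equal weight per class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    f1s = []
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    return float(np.mean(f1s))

# hypothetical toy labels for three crop classes
y_true = ["potato", "potato", "maize", "onion"]
y_pred = ["potato", "maize", "maize", "onion"]
score = macro_f1(y_true, y_pred, classes=["potato", "maize", "onion"])
```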
Meeting the needs of nature protection in the management of forest stands and agricultural lands and their adaptation to climate change requires precise information about (micro)climatic conditions. This information can be derived very efficiently from Earth observation (EO) data, including satellites, unmanned aerial systems (UAS) and aircraft, or from ground measurements. However, there is still a lack of knowledge about bridging the gap between these approaches at different scales and fusing them for long-term monitoring of climate change mitigation.
Our testing site allows us to obtain ground truth information from dozens of sensors capturing local temperatures, evapotranspiration, ground wetness, groundwater levels, tree biometric parameters and more, over a study area of more than 500 ha. The locality also has an advanced irrigation system that allows us to control the amount of groundwater. Our project uses these ground data together with precise hydrological, pedological and botanical knowledge of the locality to evaluate the temporal dynamics of landscape and microclimate. We observe how the locality changes under ongoing climate change and under (micro)climate changes caused by various newly built landscape elements (e.g., avenues, ponds) or different agronomic practices, whose task is to increase landscape resilience.
The project aims to connect our ground information with the wealth of remote sensing data from satellite (mainly Sentinel-1/-2/-3), UAS-borne and airborne (multispectral, thermal, LiDAR, hyperspectral) sensors to estimate key (micro)climatic parameters. We also observe and predict the reaction of the landscape to different climatic changes. The goal is to develop a set of models estimating key climatic parameters that will be reliably applicable to broad landscape types. We will also extrapolate our findings to predict the impact of climate change on the central European landscape.
We present a new hand-held system, LITERAL, capable of accurately measuring several crop characteristics, which can be used for validation activities of satellite products. In contrast to IoT sensors, which allow temporal monitoring of crop status but represent only a very small part of the canopy due to their reduced footprints, LITERAL is hand-held and therefore allows a more exhaustive spatial coverage of the crop. It can be considered the new generation of traditional devices, such as digital hemispherical photography, LAI2000 or ACCUPAR, that are currently used to measure Green Area Index. It meets the need for economical, easy-to-use but precise measuring means for monitoring trials in small plots or networks of agricultural plots. LITERAL is also capable of deriving several important crop characteristics such as cover fractions, plant, green and senescent area indices, as well as plant or ear density, and crop height.
LITERAL is a probe equipped with three high-resolution RGB cameras. All the sensors are connected to an acquisition unit that triggers the cameras, stores the data and communicates with a tablet PC, which allows the measurement protocols to be designed via a user-friendly graphical interface. It minimizes user intervention to allow fast acquisition in the field. The data are automatically analyzed by advanced post-processing algorithms: semantic segmentation, object detection by deep learning, colorimetric analysis, and stereovision. These algorithms are configured per crop in order to obtain maximum traceability and precision. The quality of the images and the multiple possible configurations allow LITERAL to be used for many purposes: monitoring the growth of field crops, characterization of mixed crops, quantification of symptoms of leaf diseases, and measurement of plant or ear density. Ergonomic and scalable, LITERAL has been used by technical teams in France, Portugal and Australia for phenotyping applications over the last two years. Wider dissemination as well as other applications are planned from 2022.
Estimation of aboveground biomass in clover-grass mixtures using UAV-based vegetation indices and canopy height
Konstantin Nahrstedt (1), Tobias Reuter (2), Dieter Trautz (2), Thomas Jarmer (1)
1 Institute of Computer Science, Osnabrück University, 49090 Osnabrück, Germany, konstantin.nahrstedt@uni-osnabrueck.de, thomas.jarmer@uni-osnabrueck.de
2 Faculty of Agricultural Sciences and Landscape Architecture, University of Applied Sciences Osnabrück, 49090 Osnabrück, Germany, tobias.reuter@hs-osnabrueck.de, d.trautz@hs-osnabrueck.de
Clover-grass mixtures are used in agriculture primarily as a forage crop and a natural source of nitrogen for subsequent crops. However, clover-grass stands are characterized by high spatial heterogeneity in terms of species composition and biomass. Especially for developing appropriate management recommendations, biomass is an important parameter to quantify plant stand structure. Currently, the determination of biomass in clover-grass fields is still often performed with laborious manual measurement methods. UAV-based image data offer an efficient possibility for multi-temporal monitoring of field structure development, in which plant parameters can be estimated at high temporal and spatial resolution. Based on this, recommendations for field management can be issued at different phenological stages.
For this purpose, drone-based multispectral images were acquired at regular intervals over an organically managed clover-grass field in Osnabrück (Germany). For the quantitative evaluation of the phenological development of the clover-grass stands, images were acquired during three flights between the second and third cut. Destructive in-situ biomass measurements were collected at each acquisition date. To model the field-measured biomass, multispectral vegetation indices were calculated. The Normalized Difference Vegetation Index (NDVI) and the Ratio Vegetation Index (RVI) are suitable indices for capturing different structures in crop stands and were therefore used to estimate (fresh matter) biomass in this study. Furthermore, Structure-from-Motion (SfM)-based canopy height was included as a modeling parameter. The suitability of spectral and spatial information for biomass estimation was tested by contrasting the model performances of each parameter. In addition, it was tested whether a combination of different parameters provided better biomass predictions. The biomass was first modeled by a multitemporal approach with all recording dates. Subsequently, temporal effects were analyzed by calculating regression models for each individual date.
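Both indices have standard definitions: NDVI = (NIR - Red)/(NIR + Red) and RVI = NIR/Red. A minimal numpy sketch, with made-up reflectance values rather than the study's measurements:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def rvi(nir, red):
    """Ratio Vegetation Index (simple NIR/Red band ratio)."""
    return np.asarray(nir, float) / np.asarray(red, float)

# hypothetical per-pixel band reflectances
nir = np.array([0.45, 0.50])
red = np.array([0.05, 0.10])
ndvi_vals, rvi_vals = ndvi(nir, red), rvi(nir, red)
```

Per-pixel index maps computed this way, together with the SfM canopy height, would then serve as predictors in the regression models described above.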
The multispectral indices performed similarly, with an R² of 0.61 (NDVI) and 0.64 (RVI) for the combination of all acquisition dates, while the SfM-based biomass estimation exhibited higher modeling quality (R² = 0.73). A combination of these parameters provided further added value, since spatial and spectral information were merged; thus, both individual plant growth and reflectance behavior are taken into account in the evaluation of crop development. An R² of 0.76 showed a high accuracy in estimating clover-grass biomass using multispectral indices and SfM-based canopy height in combination. The investigation of individual time steps of biomass estimation showed that the choice of the recording date has a clear impact on prediction quality. At the first recording date, the experimental field was characterized by a high degree of spatial heterogeneity, since the vegetation cover had not yet closed by this time. In particular, soil segments influenced the reflectance signal and reduced biomass estimation accuracy. This contrasts with the higher estimation accuracy at the recording time just before harvest. Spatial analysis of predicted biomass based on UAV image data showed a lower amount of biomass at the first imaging date than at subsequent dates, as well as a heterogeneous distribution over the entire study site at all acquisition dates. Especially with regard to the multitemporal approach, the results of this study can be used as a basis for issuing site-specific management recommendations by transferring model predictions to UAV imagery.
In the context of the HYPERNETS project, which has developed the relatively low-cost hyperspectral radiometer HYPSTAR® (and associated pointing system) for automated measurement of water and land bidirectional reflectance, the tidal coastal marsh in the Mar Chiquita lagoon (Argentina) was characterized as a test site for validation of radiometric variables [Piegari 2020]. This site is a coastal habitat that provides ecosystem services essential to people and the environment [Assessment, Millennium Ecosystem 2005], and its vegetation is dominated by Sporobolus densiflorus [Trilla 2016]. There is evidence that growth and the photosynthetic apparatus of this species are negatively affected by the herbicide glyphosate [Mateos-Naranjo 2009], which is used extensively in Argentinean agricultural production [Aparicio 2013]. Previous studies have shown the potential of remote sensing to monitor plant injury from glyphosate using hyperspectral data [Huang 2012, Zhao 2014]. In particular, NDVI (Normalized Difference Vegetation Index) and PRI (Photochemical Reflectance Index, an indicator of photosynthetic activity) are spectral indices typically used to evaluate plant condition. In this context, the HYPSTAR® instrument will provide high-quality in situ reflectance, at fine spectral resolution (10 nm FWHM) in the 400-1700 nm range with automated measurements every 30 min, useful for the validation of surface reflectance data from all present and future Earth observation missions and for monitoring the health status of the vegetation. This will allow further exploration of whether herbicide effects can be detected using spectral indices (designed for green vegetation) in natural environments characterized by a large fraction of standing litter, such as the Buenos Aires Atlantic coastal marshes.
In this study we sought to determine whether it is possible to detect the effect of glyphosate on the spectral response of S. densiflorus using chlorophyll fluorescence and hyperspectral data. To achieve this, samples of S. densiflorus adult clumps were taken in Punta Piedras (35°34'40.1"S 57°15'11.9"W, Buenos Aires, Argentina). Clumps of about 18 cm diameter were planted in 21 individual plastic pots, with a diameter of 24 cm and a height of 28 cm, filled with marsh soil. Pots were randomly separated into three sets (seven pots per treatment): two doses of a glyphosate-based herbicide (GlacoXAN TOTAL; 43.8 g active ingredient/100 ml, Argentina), at 876 g a.i./ha and 7200 g a.i./ha, and an untreated control. The herbicide was administered homogeneously over the leaf surfaces, early in the morning and in the absence of wind, using a pulverizer (250 ml of spray volume). Photosynthetic parameters were acquired from randomly chosen fully developed leaves attached to the plants using a portable PAR-FluorPen FP 100-MAX-LM fluorometer from Photon Systems Instruments (Czech Republic). Leaves were dark-adapted for 20 min and measurements were then performed following the OJIP protocol [Stirbet 2011]. Radiometric measurements were obtained using a FieldSpec3® field spectrometer from Analytical Spectral Devices (ASD), Inc. (Boulder, Colorado), which covers the spectral range between 350 and 2500 nm. Reflectance spectra at leaf level were acquired with the Plant Probe and Leaf Clip accessories (ASD), and canopy-level measurements were carried out with 7 pots per scene, swapping the pots so that each of the 7 was placed in the center, generating 7 different scenes per treatment.
The photosynthetic parameters derived from the OJIP test and the reflectance measurements at leaf and canopy levels were obtained for the three treatments 1, 8 and 15 days after treatment (DAT). Analysis of variance (ANOVA) tests were performed together with an LSD Fisher test to evaluate significant differences. The results show that, by means of photosynthetic parameters and spectral indices, glyphosate injury in S. densiflorus could be detected early. The maximum quantum yield of photosystem II (Fv/Fm), which is considered a sensitive indicator of plant photosynthetic performance, shows differences between the control and the low and high dose treatments (p < 0.05). A significant decrease in Fv/Fm with respect to the control is observed for the low and high dose treatments at 8 DAT and 1 DAT, respectively. Among the several spectral indices that were tested as indicators of glyphosate injury at leaf and canopy levels, it is notable that changes in PRI at canopy level are detectable 15 DAT for both low and high doses (p < 0.05). In the frame of hyperspectral missions such as PRISMA and the more dedicated high-resolution Fluorescence Imaging Spectrometer (FLORIS) planned for the FLEX mission, these results are promising for the early detection of loss of marsh vegetation from remote sensing.
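The two key indicators above have standard formulas: PRI uses the reflectance bands at 531 and 570 nm, and Fv/Fm is computed from the dark-adapted minimum (F0) and maximum (Fm) fluorescence as (Fm - F0)/Fm. A small numpy sketch with synthetic values (not the study's measurements):

```python
import numpy as np

def band(wl, refl, target):
    """Reflectance at the measured wavelength (nm) nearest to `target`."""
    wl = np.asarray(wl, float)
    return float(np.asarray(refl, float)[np.argmin(np.abs(wl - target))])

def pri(wl, refl):
    """Photochemical Reflectance Index: (R531 - R570) / (R531 + R570)."""
    r531, r570 = band(wl, refl, 531), band(wl, refl, 570)
    return (r531 - r570) / (r531 + r570)

def fv_fm(f0, fm):
    """Maximum quantum yield of PSII from dark-adapted fluorescence."""
    return (fm - f0) / fm

# synthetic FieldSpec-like spectrum at 1 nm sampling, flat except at 531 nm
wl = np.arange(350, 2501)
refl = np.full(wl.shape, 0.1)
refl[wl == 531] = 0.12
pri_val = pri(wl, refl)
yield_val = fv_fm(500.0, 2500.0)
```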
References
Aparicio, V. C., De Gerónimo, E., Marino, D., Primost, J., Carriquiriborde, P., & Costa, J. L. 2013. Environmental fate of glyphosate and aminomethylphosphonic acid in surface waters and soil of agricultural basins. Chemosphere, 93(9), 1866-1873.
Assessment, Millennium Ecosystem. 2005. Ecosystems and Human Well-being: Wetlands and Water, 5. Washington, DC: World Resources Institute.
Huang, Y., Thomson, S. J., Molin, W. T., Reddy, K. N., & Yao, H. 2012. Early detection of soybean plant injury from glyphosate by measuring chlorophyll reflectance and fluorescence. Journal of Agricultural Science, 4(5), 117.
Mateos-Naranjo, E., Redondo-Gomez, S., Cox, L., Cornejo, J., & Figueroa, M. 2009. Effectiveness of glyphosate and imazamox on the control of the invasive cordgrass Spartina densiflora. Ecotoxicology and Environmental Safety, 72(6), 1694-1700.
Piegari, E., Gossn, J. I., Juárez, Á., Barraza, V., Trilla, G. G., & Grings, F. 2020. Radiometric Characterization of a Marsh Site at the Argentinian Pampas in the Context of Hypernets Project (A New Autonomous Hyperspectral Radiometer). IEEE Latin American GRSS & ISPRS Remote Sensing Conference (LAGIRS) (pp. 591-596). IEEE.
Stirbet, A., & Govindjee. 2011. On the relation between the Kautsky effect (chlorophyll a fluorescence induction) and Photosystem II: basics and applications of the OJIP fluorescence transient. Journal of Photochemistry and Photobiology B, 104(1-2), 236-257.
Trilla, G. G., Pratolongo, P., Kandus, P., Beget, M. E., Di Bella, C., & Marcovecchio, J. 2016. Relationship between biophysical parameters and synthetic indices derived from hyperspectral field data in a salt marsh from Buenos Aires Province, Argentina. Wetlands, 36(1), 185-194.
Zhao, F., Huang, Y., Guo, Y., Reddy, K.N., Lee, M.A., Fletcher, R.S., & Thomson, S.J. 2014. Early Detection of Crop Injury from Glyphosate on Soybean and Cotton Using Plant Leaf Hyperspectral Data. Remote Sensing, 6, 1538-1563.
Due to technological development and the advantages of UAV-based remote sensing solutions, new possibilities arise for monitoring agricultural crops while efficiently sensing crop biophysical parameters (CBP) in a close-range scenario. The approach presented here highlights the capabilities of a data fusion approach that combines LiDAR (RIEGL miniVUX-1UAV) and multispectral data (MicaSense Altum) to assess CBPs such as dry above-ground biomass (AGB) for maize, one of the most widely cultivated crops worldwide. Due to its canopy structure, maize plant parameters are relatively hard to assess, especially at later phenological stages where usable parts such as cobs are hidden. The combination of LiDAR and multispectral data not only allows the estimation of AGB, but also helps to evaluate phenological stage-specific growth and vitality parameters. This could help farmers either directly via close-range monitoring or indirectly via Earth Observation (EO) missions, powered by close-range UAV-based ground truth models to improve plant management at the macro scale.
With a relatively low flight height of 20 m above ground for the LiDAR system (resulting in an average point spacing of approx. 0.05 m) and 25 m for the multispectral system (GSD approx. 0.01 m), plant physiology and interrelated CBPs can be assessed with high precision, potentially taking different growth stages into account thanks to narrow flight date intervals. LiDAR-derived information (mean range-corrected single-return intensity, mean return ratio (first returns/all returns), and mean height) is then combined with multispectral vegetation indices computed from the six available spectral bands. In addition to the usual corrections, an additional correction processing chain was developed and tested for the LiDAR data.
A support vector regression is applied to ground truth data from two acquisition dates, summing up to a total of 96 samples, with further validation of the created model. The resulting R² of more than 0.7 for dry AGB is a promising result for promoting non-destructive UAV-based ground truth solutions to support spatial upscaling for EO missions.
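A minimal sketch of such a support vector regression, using scikit-learn and entirely synthetic stand-ins for the fused LiDAR/multispectral predictors and the dry-AGB response (the feature construction and hyperparameters here are assumptions, not the study's configuration):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# synthetic stand-ins for 4 fused predictors: mean height, return ratio,
# corrected intensity, and one vegetation index; 96 samples as in the study
X = rng.uniform(0, 1, size=(96, 4))
# hypothetical dry-AGB response with noise; NOT the real field data
y = 2.0 * X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.05, 96)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)  # coefficient of determination on held-out samples
```

On real data, a held-out (or cross-validated) R² like this is the quantity reported above.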
Earth observation data acquired by the Copernicus satellites can be deployed for frequent large-scale monitoring of vegetation parameters on agricultural fields. From the perspective of agricultural advisors and farmers, added value arises in particular if those data are used to derive products addressing parameters that are relevant to management decisions. Such products should come with a sound uncertainty assessment and management recommendations. For this, in-situ data, knowledge of processes and local knowledge are required. In-situ data are mainly deployed to calibrate, update and validate satellite-aided retrieval models. Knowledge of processes and local knowledge provide a basis for developing management recommendations. In-situ data and local knowledge are collected in different projects by a wide variety of people, including scientists as well as co-researchers (e.g., citizens). Thus, the measurement and sampling designs vary across measured and observed parameters, which causes variable data quality. This makes it difficult to merge data of the same kind from different projects and therefore hampers the reuse and automatic analysis of data. Modularly designed and customizable applications for mobile devices (apps) represent a framework that can help to foster the standardisation of data sampling methods and strategies. At the same time, they provide enough flexibility to be adjusted for use in various scenarios.
The FieldMApp represents such a modularly designed application. Its open structure permits the reuse of existing software components, customizable adjustments, and the addition of new modules that are necessary to fulfil the specific requirements of the research project at hand. The FieldMApp is built exclusively on open-source libraries. It can be compiled to run on Android or iOS, allows for the integration of internal and external sensors, and is designed to work either in online or offline mode. Acquired data are stored in a machine-readable format. The FieldMApp concept includes tools that support data validation and uncertainty assessment. The overall uncertainty of acquired data is estimated by considering sources of systematic and random errors, which depend on the modular set-up. Accordingly, the FieldMApp provides mutually compatible data sets, thus increasing their reuse potential.
In this contribution, the concept and structure of the FieldMApp will be presented, and its application will be demonstrated within a use case of the project Agrisens – DEMMIN 4.0. In this use case, low-yield areas on acreages were identified and characterised by farmers during on-site agricultural operations. The relevance of such data for agricultural management and the Common Agricultural Policy will be outlined. Examples of other fields of application within the agricultural sector will be highlighted.
With the latest generation of Earth observation sensors and advanced processing techniques, remote sensing increasingly enables the assessment not only of land cover but also of land use. This is in line with the current political agenda and the rising need for spatially explicit and regularly updated land use information to support environmental monitoring programmes. However, mapping land use from satellite time series remains more difficult than mapping land cover. One key limiting factor is suitable reference data for model calibration and validation. For grassland management, which can consist of various activities throughout the year, in-situ information with high temporal resolution is needed to fully assess the extent to which management can be remotely assessed.
We used freely available and daily webcam images to investigate the extent and accuracy of grassland use captured by Sentinel-2 time series. For 57 webcams distributed across Switzerland, one to three reference locations each were defined and georeferenced, resulting in a total of 82 reference locations and around 27’000 daily interpretations of grassland use. We extracted and processed Sentinel-2 NDVI time series for those locations and developed an algorithm to detect main management events such as mowing or grazing.
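The core of such an event-detection algorithm is flagging sharp NDVI drops between consecutive clear-sky observations. A deliberately simplified numpy sketch, with a purely illustrative threshold rather than the study's calibrated one:

```python
import numpy as np

def detect_events(dates, ndvi, min_drop=0.2):
    """Flag candidate management events (mowing/grazing) as sharp NDVI
    drops between consecutive observations.

    dates: day-of-year of each clear-sky observation; ndvi: matching values.
    Returns the dates on which a drop >= min_drop was observed.
    min_drop is a hypothetical threshold, not the study's tuned value.
    """
    ndvi = np.asarray(ndvi, float)
    drops = ndvi[:-1] - ndvi[1:]
    return [dates[i + 1] for i in np.nonzero(drops >= min_drop)[0]]

# toy series: a cut around day 150, followed by regrowth
doy = [130, 140, 150, 160, 170, 180]
nd = [0.75, 0.80, 0.45, 0.55, 0.70, 0.72]
events = detect_events(doy, nd)
```

A real implementation would additionally mask snow and clouds and distinguish gradual senescence from abrupt biomass removal.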
Our findings show that management events represented in the NDVI time series were in most cases (>80 %) indeed related to mowing or grazing. In contrast, a large proportion of the mowing events (around 40 %) and most grazing events (around 80 %) recorded on the webcams were not detected in the NDVI time series. Visual inspection of the NDVI time series revealed that grazing events often showed little to no signal, but in the case of mowing, most of the omitted events might be captured by fine-tuning the algorithm. The large omission error for grazing might be explained by the fact that many of our webcams showed extensively managed mountain pastures with a low stocking density. In general, the density of clear-sky and snow-free observations seems to be essential, as NDVI values recovered within one to two weeks after mowing or grazing. Furthermore, mowing and intensive grazing could not be distinguished, suggesting that significant drops in NDVI should be interpreted more generally as removal of biomass within a short period of time. In addition, a first visual inspection indicates that fertilisation is not sufficiently reflected in the NDVI time series to be detected, even though such events were registered in one third of the webcams.
The comparison with daily webcam images proved useful for further improving the algorithm and for better understanding the limitations and possibilities of satellite-based grassland use assessment. Furthermore, these temporally dense reference data allow testing whether the integration of additional remote sensing data (e.g. Sentinel-1, PlanetScope) is beneficial.
The modernized Common Agricultural Policy 2023-2027 in the European Union highlights the need for Paying Agencies to perform checks on a much finer time scale and to quantify the impact of various practices on natural ecosystems. The introduction of participatory sensing and smart sensors enables the cost-effective establishment of an additional data layer to complement, validate and enhance the predictive performance for critical agro-environmental parameters. In particular, the advent of low-cost, portable and handheld spectrometers realized using microelectromechanical systems enables the rapid and non-destructive measurement of a soil's reflectance spectrum.
To this end, we propose a methodology based on a set of handheld SWIR (1750-2150 nm) soil spectrometers for real-time in situ estimation of soil properties, leveraging existing Soil Spectral Libraries (SSLs) and efficient deep learning techniques. This novel sensing system was tested under real field conditions in 180 fields during the summer of 2021. A collection of 240 distinct topsoil samples, distributed over six different regions in Lithuania and Cyprus, was measured under both in situ and laboratory conditions. For the laboratory case, sample pre-processing (air-drying and passing through a 2 mm sieve) was performed to remove the effects of ambient factors. The acquired spectral signatures formed two sets over which a Convolutional Neural Network was developed, aiming to eliminate the effects on the spectral signatures caused by moisture, shadow, or the presence of non-soil materials by mapping the in situ spectral signatures to those acquired after laboratory pre-treatment. This technique eliminates the effects of ambient factors on the spectral signatures and enabled the creation of a new dataset of "transformed" spectra. The spectral values of this dataset acted as predictors for the estimation of Soil Organic Carbon (SOC), which exhibited enhanced predictive performance (R² = 0.80), evaluated over an independent test set containing 20% of the samples, compared to the model developed using the original in situ spectra as predictors (R² < 0.2).
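To make the spectral-transformation idea concrete, the sketch below uses a trivial per-band linear least-squares mapping from field spectra to laboratory spectra as a stand-in for the CNN described above, on fully synthetic spectra; the moisture-damping model, band count and R² evaluation are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_bands = 240, 80  # synthetic SWIR band grid, not the instrument's

lab = rng.uniform(0.1, 0.5, (n_samples, n_bands))            # "dried, sieved" spectra
field = 0.7 * lab + 0.05 + rng.normal(0, 0.005, lab.shape)   # moisture-damped in situ spectra

# Per-band least-squares mapping field -> lab (a linear stand-in for the
# CNN transformation; one slope a[j] and intercept b[j] per band).
a, b = np.empty(n_bands), np.empty(n_bands)
for j in range(n_bands):
    a[j], b[j] = np.polyfit(field[:, j], lab[:, j], 1)
transformed = a * field + b

# R² of the transformed spectra against the laboratory spectra
ss_res = np.sum((lab - transformed) ** 2)
ss_tot = np.sum((lab - lab.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

The CNN replaces this band-wise linear map with a learned non-linear one, which is what handles shadow and non-soil contamination that a per-band correction cannot.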
The proposed approach broadens the possibilities of merging collections of in situ spectra with existing SSLs and further highlights the need for the development of a universally accepted sensing protocol. Furthermore, SOC or other soil properties that can be monitored with diffuse reflectance spectroscopy can easily be scaled up and act as a bridge to Earth Observation data in a bottom-up approach, in support of the Copernicus in situ component, under the hypothesis of reliable estimations of the targeted soil quality indices.
Agricultural landscape features are small elements of non-productive semi-natural vegetation embedded in agricultural landscapes. This definition includes several characteristic elements of traditional and historical European agricultural landscapes, such as hedges, ponds, ditches, trees (in lines, in groups, or isolated), field margins, terraces, dry-stone or earth walls, planted areas, individual monumental trees, springs, and historic canal networks. These elements had important functions linked to traditional agricultural management practices. In the 20th and 21st centuries, some functions of landscape features have diminished: for example, rural populations are less and less reliant on hedgerows for fencing their livestock or on firewood from field coppices. Nevertheless, other functions, such as windbreaks and erosion protection, have remained intact, and "new" functions, such as the maintenance of agricultural biodiversity, have also emerged. It is recognized that these small vegetation fragments play a key role in maintaining biodiversity and ecosystem services in European agricultural landscapes. In fact, as agricultural areas occupy around 45% of the EU27, landscape features have gained new importance in addressing the key environmental challenges of the 21st century. The role of landscape features in agricultural land is highlighted in several key strategies and directives of EU policy, including the Common Agricultural Policy, the Biodiversity Strategy, the Water Framework Directive, and the Nitrates Directive. Accordingly, the share of landscape features could be a key indicator of the ecological condition of agricultural landscapes in the EU.
Despite their recognized importance, the mapping and monitoring of landscape features remains a challenge for several reasons. Landscape features are small, heterogeneous objects with special characteristics that make their mapping difficult. Their reliable identification would require very high resolution (VHR) data over large areas, preferably in multiannual time series, so that a key distinction between permanent and temporary features can be made. The definitions and typologies of landscape features also need to be harmonised across policy sectors, in a way that is amenable to operationalization in remote sensing applications.
In the European Union (EU), a surveyed sample of land cover and land use has been collected every three years since 2006 under the Land Use/Cover Area frame Survey (LUCAS). Starting with the upcoming LUCAS 2022 survey, a new dedicated Landscape Features (LF) module will be implemented in agricultural landscapes all over Europe, using a statistically balanced subset of 93,000 sampling units from the overall LUCAS sampling frame. In each sampling unit, a fixed grid of 41 equally spaced subpoints will be assessed for the presence of landscape features, in two steps: first, a visual interpretation of high-resolution orthophotos, and second, a field-based verification within the framework of the LUCAS field survey. The data collected will provide an extensive reference data set, which can then be used for an unbiased area estimation of the main landscape feature types in all EU Member States and at subnational level (NUTS2). Accordingly, this dataset will provide a robust sample of ground truth records based on an operative definition and a simplified functional typology of landscape features, which can then be used to implement efficient workflows for the future identification and mapping of landscape features in agricultural land.
The multi-temporal capabilities of tower-based experiments are an essential component for disentangling the multiple causes behind the temporal variability of microwave backscatter. For vegetation covers, it is often challenging to distinguish between dry and fresh biomass changes, in addition to wind-induced motion effects. Supported by quasi-continuous acquisitions (every 15 min) of the TropiScat-2 experiment, which has been operating since 2018 over a dense tropical forest at the Paracou test site in French Guiana (as the heir of the original TropiScat experiment, which ran from 2011 to 2014), we show that the diversity of observation conditions makes it possible to isolate and characterize the main causes of backscatter variations, especially through the diurnal patterns driven by convective effects and through seasonal variations driven by dry or rainy periods of up to 500 mm/month. These results have provided a key database for the design of the BIOMASS mission's interferometric and tomographic repeat passes, and are also very relevant for anticipating the best ways to interpret future signal and product variations with respect to meteorological observations. Nonetheless, the applicability of these results at wider scales over tropical forests raises several questions, especially with respect to the adequacy between the local observations derived from the Guyaflux meteorological sensors and hourly spaceborne observations with a much coarser spatial resolution (about 10 km). Motivated by the need for operational concepts at the time of BIOMASS acquisitions, our study will focus first on the selection of the most relevant meteorological parameters explaining the P-band backscatter variation patterns derived from TropiScat-2 time series, and then on comparisons between our tower-based local measurements and the ERA5 datasets derived from the ECMWF reanalysis products, which combine data modeling and assimilation.
Finally, the upscaling questions and challenges regarding the radar observations from TropiScat-2 at P-, L- and C-bands will also be addressed, given the signal reconstruction possibilities from tomography and the opportunities for cross-comparison of C-band measurements with Sentinel-1 time series.
The Precursore Iperspettrale della Missione Applicativa (PRISMA) mission of the Italian Space Agency (ASI) is evolving into a great scientific success and has been providing excellent hyperspectral datasets since the end of its commissioning phase in January 2020. With the upcoming launch of the German Space Agency (DLR) Environmental Mapping and Analysis Program (EnMAP), as well as ongoing progress on ESA's Copernicus Hyperspectral Imaging Mission (CHIME), the amount of spaceborne hyperspectral data provided will increase to yet unknown dimensions. For the first time, both the quality and the availability (temporal and spatial) of datasets will meet researchers' requests. Nevertheless, real-world applications based on these datasets call for robust and well-tested algorithms and models. These are typically developed using ground-based measurements, mostly acquired with hand-held hyperspectral imaging (HIS) or sampling (HSS) devices in field trials. Here, the training of young researchers in the field of hyperspectral spectroscopy comes into focus. Furthermore, a thorough understanding of the matter can only be achieved by a personal, hands-on approach to the collection of HIS/HSS data.
Hand-held HIS/HSS platforms need to perform within the following three categories:
- Accuracy and range of wavelength reproduction
- Portability and accessibility of the platform
- Low initial cost and reparability
Existing platforms often fall short in at least one of the above-mentioned categories. As the HSS market leader, the ASD FieldSpec-4 Hi-Res system serves as an example. It features excellent optical characteristics, both in spectral range and in resolution/bandwidth. However, its high weight and the overall fragility of its optical system make it a challenge for in-field applications. Moreover, with an initial price in the higher five-digit range and a closed-source design, it is far from accessible for smaller institutions or individual researchers.
To simplify the optical system, microspectrometers (MSPs) may be considered. MSPs, despite being more expensive than a ground-up development of the optical design (e.g., Salazar-Vazquez and Mendez-Vazquez, 2020), greatly reduce the complexity of the optical system by providing a compact assembly of slit, grating and sensor. Laganovska et al. (2020) and Sandak et al. (2020) both implemented the Hamamatsu C12666MA MSP for environmental applications under artificial light sources, using the sensor to analyse vitamin B12/phosphate content in samples and to detect wood defects, respectively. Sonobe et al. (2021) have successfully shown the capability of the C12880MA MSP for estimating the chlorophyll content of Zizania latifolia (water sprout).
The objective of our study is the development of a hand-held, low-cost hyperspectral VIS-HSS platform for educational and scientific applications, intended to meet all of the above-mentioned requirements. The Tinker Scanner (Tisca) consists of a 12 cm × 12 cm cube housing the optical, electrical, support and communication systems. The compact, rugged design and low weight (300 g) make it versatile, widely portable, and accessible. The current prototype comes at a cost of merely €700.
The optical system consists of two Hamamatsu C12880MA MSPs. These feature a spectral range of 340 nm to 850 nm at a peak resolution of 15 nm and approximately 1.8 nm spectral sampling. To reduce the viewing-angle effects induced by the 50 × 500 µm entrance slit, fused silica diffusors are used. Tisca features two main modes of operation: single-spectrometry mode (S0) for applications in laboratory settings and live-reflectance mode (S1) for hyperspectral sampling under field-trial conditions. One MSP is located at the bottom of the cube, the other on the top side.
Whereas S0 represents a more conventional take on HSS, S1 enables researchers to measure both incoming and target-reflected light simultaneously (up to two full measurements per second), thus eliminating the need for calibration with highly expensive and sensitive reference panels. To enable usage under ambient light conditions, neutral density filters are applied.
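The S1 dual-spectrometer principle reduces to a simple ratio of dark-corrected readings. A minimal sketch, assuming equal dark current on both channels and the 288-channel layout of the C12880MA (the function name and DN values are illustrative assumptions, not Tisca firmware):

```python
import numpy as np

def reflectance(target_dn, incoming_dn, dark_dn):
    """Target reflectance from simultaneous readings of the
    downward-looking (target) and upward-looking (incoming) MSPs.
    All inputs are raw DN arrays of equal length; dark_dn is a
    dark-current reading subtracted from both channels."""
    target = target_dn.astype(float) - dark_dn
    incoming = incoming_dn.astype(float) - dark_dn
    # Guard against division by zero in dead or unlit channels.
    incoming = np.where(incoming <= 0, np.nan, incoming)
    return np.clip(target / incoming, 0.0, 1.0)

# Example: a grey target reflecting half of the incoming light.
dark = np.full(288, 100.0)          # C12880MA has 288 channels
incoming = dark + 2000.0
target = dark + 1000.0
r = reflectance(target, incoming, dark)
print(r[:3])   # → [0.5 0.5 0.5]
```

In practice a one-time cross-calibration between the two MSPs (and the transmission of the diffusors and neutral density filters) would also enter the ratio.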
For easy and intuitive use of the platform, a Raspberry Pi 4B microcomputer handles data collection and distribution.
The Raspberry Pi functions as a server and access point for Tisca communications. A web interface based on the RStudio Shiny package is hosted on it and allows full wireless control of the spectrometer (settings, live data view, processing and data download). It is accessible via an internet browser, with no need for additional software on the user side. The Tisca UI can be used simultaneously by several users connected to the internal WiFi, making it ideal for classroom environments.
Preliminary results of the first working prototype have shown promising potential. Both wavelength reproduction and accuracy meet the requirements. Wavelength conversion factors for the individual MSP channels are provided by the manufacturer. In first tests, raw DN (digital numbers) radiances of artificial light sources and target reflectances (e.g., vegetation, rocks, soil) were successfully assessed. Currently ongoing tests involve laboratory time-series measurements of leaves in the process of pigment decomposition to gain insight into the dynamics of chlorophyll and carotenoid signatures. For intercalibration purposes, readings are compared to ASD FieldSpec-4 standard measurements.
The possible range of scientific and educational applications of Tisca is manifold. One primary goal of the Tisca platform is to assist in model development for existing and upcoming multi- and hyperspectral missions. For this reason, Tisca will feature different retrieval modes. In one mode, the user will be able to simulate a remotely sensed (RS) pixel (within the MSP wavelength range) by walking the respective pathways on the target area. Three transects parallel to the flight path would form the sampling pattern. The averaged spectra would subsequently be aggregated to simulate pixels of a medium-resolution spaceborne sensor. With a battery life of up to 10 hours, the collection of a dense time series with a temporal resolution of up to two measurements per second is possible. As an example, the continuous monitoring of plant pigments in the 340 nm to 850 nm domain becomes feasible under open-sky conditions.
All documentation, schematic diagrams, firmware and STL files will be made available on the authors' GitHub, following the open hardware mindset of the project. The next prototype iteration of Tisca will be presented at LPS 2022 in Bonn, where visitors will have the opportunity to gain hands-on experience.
References:
Laganovska, K., Zolotarjovs, A., Vazquez, M., Mc Donnell, K., Liepins, J., Ben-Yoav, H., Karitans, V., Smits, K. (2020): Portable low-cost open-source wireless spectrophotometer for fast and reliable measurements. In: HardwareX, 2020 (e00108), 1–12.
Salazar-Vazquez, J., Mendez-Vazquez, A. (2020): A plug-and-play Hyperspectral Imaging Sensor using low-cost equipment. In: HardwareX, 2020 (e00087), 1–22.
Sandak, J., Sandak, A., Zitek, A., Hintestoisser, B., Picchi, G. (2020): Development of Low-Cost Portable Spectrometers for Detection of Wood Defects. In: Sensors, 2020 (20, 545), 1–20.
Sonobe, R., Yamashita, H., Nofrizal, A., Seki, H., Morita, A., Ikka, T. (2021): Use of spectral reflectance from a compact spectrometer to assess chlorophyll content in Zizania latifolia. In: Geocarto International, 2021 (6049), 1–13.
Land Use Land Cover (LULC) changes induced by human or natural processes drive the biogeochemistry of the Earth and thereby influence the climate. Changes in land cover due to anthropogenic activities enhance heat emission from the land surface and increase Land Surface Temperature (LST). Owing to the complexity of landscapes, it was long difficult to derive relationships between LST and environmental response, but temporal data acquired over the entire Earth surface by spaceborne remote sensors have helped to bridge this gap. In this study, an attempt has been made to assess the spatio-temporal dynamics of LST and to establish the relationship between Land Use Land Cover Change (LULCC) and Land Surface Temperature Change (LSTC) in a part of Muscat City, Oman. Landsat time series data for the period between 1985 and 2021 have been used. LST and LULC classes were retrieved and extracted from the Landsat data: the thermal infrared bands, supported by ground-based measurements, were used to retrieve LST, while mainly the visible (blue, green, red), NIR and SWIR bands were used to extract the LULC classes. The results show that LST increased significantly in the study region between 1985 and 2021, and that there is a positive relationship between LULCC and LSTC in the study area. LST is significantly affected by surface type and varied significantly across LULC types.
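The retrieval of LST from a Landsat thermal band can be sketched with a minimal single-channel calculation (atmospheric correction omitted). The band-10 calibration constants and effective wavelength below follow the published Landsat 8 TIRS metadata but should be read from each scene's MTL file in practice; the radiance values are illustrative assumptions:

```python
import numpy as np

# Landsat 8 TIRS band-10 calibration constants and effective wavelength
# (published values; take them from the scene's MTL file in practice).
K1, K2 = 774.8853, 1321.0789   # W m-2 sr-1 um-1, K
WAVELENGTH = 10.895e-6         # m
RHO = 1.438e-2                 # h*c/k_B, m K

def brightness_temperature(radiance):
    """Invert the Planck function for at-sensor brightness temperature (K)."""
    return K2 / np.log(K1 / radiance + 1.0)

def lst(radiance, emissivity):
    """Emissivity-corrected single-channel LST (atmospheric effects ignored)."""
    bt = brightness_temperature(radiance)
    return bt / (1.0 + (WAVELENGTH * bt / RHO) * np.log(emissivity))

rad = np.array([9.0, 10.0, 11.0])   # example TOA radiances
print(lst(rad, emissivity=0.97))
```

Since ln(ε) < 0 for ε < 1, the corrected LST is always slightly warmer than the raw brightness temperature, which is why emissivity knowledge matters for trend studies.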
Uncontrolled, unplanned, and unprecedented urbanization characterizes most African cities. Drastic changes in the urban landscape can lead to irreversible changes to the urban thermal environment, including changes in the spatiotemporal pattern of the land surface temperature (LST). Studying these variations will help us take urban climate change mitigation and adaptation measures. This study maps the effects of urban blue-green landscapes on LST using geospatial techniques in Addis Ababa, Ethiopia, from 2006 to 2021. The object-based image analysis (OBIA) method was applied for land use/land cover (LULC) classification using high-resolution imagery from the SPOT 5 and Sentinel-2A satellites. LST was retrieved from the thermal imagery of Landsat 7 ETM+ (band 6) and Landsat 8 TIRS (band 10) using the Mono-Window Algorithm (MWA). Linear regression analysis was then used to determine the relationship of LST with the normalized difference vegetation index (NDVI), normalized difference built-up index (NDBI), and modified normalized difference water index (MNDWI). Five major LULC classes were identified, namely built-up, vegetation, urban farmland, bare land, and water. The results show that built-up area was the most dominant LULC class in the city and has expanded drastically, with an annual growth rate of 4.4% at the expense of urban farmland, vegetation, and bare land over the last 15 years. The findings demonstrate that 53.7% of urban farmland, 48.1% of vegetation, and 59.4% of bare land were transformed into the built-up class from 2006 to 2021. The mean LST showed an increasing trend, from 25.8 °C in 2006 to 27.2 °C in 2016 and 28.2 °C in 2021. LST varied among LULC classes: the highest mean LST was observed for bare land, with average values of 26.9 °C, 28.7 °C, and 30.1 °C in 2006, 2016, and 2021 respectively, while the lowest mean LST was recorded for vegetation, with average values of 24.3 °C in 2006 and 26.0 °C in 2021, and for water, with a mean LST of 25.5 °C in 2016. The regression analysis showed a strong negative correlation between NDVI and LST, a strong positive correlation between NDBI and LST, and a weak negative correlation between MNDWI and LST. These findings indicate that LULC alteration contributed to the modification of LST in Addis Ababa during the period, and the regression results further reveal that built-up area and vegetation cover play a decisive role in the variation of LST in the city compared to urban surface water. The findings of this study will be helpful for urban planners and decision-makers when planning and designing future urban blue-green interventions in the city and beyond.
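The index computation and regression step used in such studies can be sketched as follows. The reflectances and the LST field below are synthetic assumptions constructed so that LST cools with vegetation, mimicking the reported negative NDVI-LST relationship; this is purely an illustration of the workflow, not the study's data:

```python
import numpy as np

# Standard normalized-difference spectral indices.
def ndvi(nir, red):     return (nir - red) / (nir + red)
def ndbi(swir, nir):    return (swir - nir) / (swir + nir)
def mndwi(green, swir): return (green - swir) / (green + swir)

rng = np.random.default_rng(1)
n = 500
green = rng.uniform(0.05, 0.3, n)
red   = rng.uniform(0.05, 0.3, n)
nir   = rng.uniform(0.1, 0.6, n)
swir  = rng.uniform(0.1, 0.5, n)

v, b, w = ndvi(nir, red), ndbi(swir, nir), mndwi(green, swir)

# Synthetic LST (°C) that decreases with vegetation cover, plus noise.
lst = 30.0 - 5.0 * v + 0.5 * rng.standard_normal(n)

# Simple per-index linear regression of LST against NDVI.
slope, intercept = np.polyfit(v, lst, 1)
r = np.corrcoef(v, lst)[0, 1]
print(f"NDVI-LST: slope={slope:.2f}, r={r:.2f}")
```

With real imagery the same regression would be repeated for NDBI and MNDWI, yielding the positive and weakly negative correlations reported above.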
The ECOSTRESS thermal radiometer on the International Space Station has a 70 m pixel scale and a sub-daily to 5-day revisit interval. It resolves thermal patterns at sub-pixel scales relative to the highest-resolution operational SST products, and is especially useful in coastal regions with complex shorelines, where it provides a seamless skin temperature product from coastal uplands through the intertidal zone to the coastal ocean. We validated ECOSTRESS SST and at-sensor radiances against co-located cloud-free ocean pixels from other satellites and against in-situ observations from NOAA iQuam and shipborne radiometers, to establish a bias correction for use in coastal regions. We examined spatial variation in SST at different tide stages on tidal flats in Mont Saint-Michel Bay, France (tidal range 10 m), Arcachon, France (tidal range 5 m) and Galicia, NW Spain (tidal range 3 m). ECOSTRESS resolves the position of the water line at all stages of the tidal cycle. This allows three important determinations: (1) quantification of surface temperature changes during flood and ebb, (2) quantification of the degree of tidally dependent land contamination of operational SST product pixels, and (3) quantification of thermal stress during aerial exposure of intertidal surfaces. The high-resolution surface temperature observations from ECOSTRESS are at the spatial scale of commercial intertidal aquaculture of mussels, oysters, and clams, all of which suffer mass mortality during heat waves. Since ECOSTRESS can resolve temperature differences at hectare scales, it provides a means of predicting differences in shellfish harvest and mortality among individual aquaculture plots, in a manner similar to the between-field differences in thermal stress it can resolve for terrestrial agriculture. It also provides a preview of the opportunities afforded by new missions such as TRISHNA, beginning in late 2024.
The work presented here aims to further the development of three high-resolution products, namely Urban Heat Islands (UHI), Thermal Discomfort (TD) and raw Land Surface Temperature (LST), for use primarily in the urban environment but also for green space assessment. Current products have the fundamental proofs of concept established but require further effort to become viable services for user groups and decision makers.
A key objective was to provide the necessary robustness in satellite-derived urban heat products for both scientific and commercial users. In this work, a refinement of the University of Leicester Optimal Estimation (OE) retrieval algorithm, used and refined in the ESA TIR-TRP study, has been adapted to the ASTER and Landsat 8 satellite data records. This methodology not only provides the long data record achievable from Landsat, giving urban planners and the health sector, for example, the underpinning data needed to evidence and justify proposed changes for future mitigation and adaptation, but also leads into future missions such as LSTM. Building on the methods and user interactions of the DUSTI project enables the construction of a framework within which LSTM and other high-spatial-resolution thermal infrared satellite missions can provide operational and targeted resources to end users.
Wildfires in the western United States produce significant social and economic impacts, and the fire season has been observed to shift earlier and to increase in frequency and severity with climate change. Wildfire burn severity is influenced by the availability of fuels (vegetation) to burn, as well as by the flammability of the fuels, which in turn is affected by environmental stressors such as drought. Understanding the role that antecedent vegetation water stress has on the spatial pattern of burn severity is therefore important for enhancing the predictability and monitoring of wildfires. The ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station (ECOSTRESS) was launched in 2018 and provides high spatial (70 m) and temporal (1- to 5-day revisit) resolution information on vegetation water stress, including evapotranspiration, the evaporative stress index and plant water-use efficiency. We discuss using ECOSTRESS data to characterize vegetation water stress preceding four major wildfires that occurred in the Southern California Geographic Area Coordination Center (GACC) region in 2020. We use both long-term (annual) and short-term (growing season) hydrological indicators of vegetation stress in the year preceding the fire, as well as information on topography, and employ these in a random forest modeling approach to predict the spatial patterns of burn severity. We find that burn severity predictability is enhanced in regions with more severe topography (high elevation and steeper slopes). We also find differences in the relationships between vegetation water stress and burn severity depending on vegetation type. Burn severity in evergreen needleleaf forests is mostly explained by vegetation water stress in the preceding year, whereas for grasslands, less stressed plants and higher values of evapotranspiration are the most important predictors, consistent with the notion that enhanced grass growth increases fuel amount.
Our results indicate the potential for predicting the spatial variability of wildfire burn severity using high resolution remote sensing of vegetation water stress, and provide a framework for the application of the upcoming Surface Biology and Geology (SBG) mission to wildfire monitoring.
Land surface temperature (LST) and latent and sensible heat fluxes are strong indicators of warming climate trends; they are affected by rising greenhouse gas (GHG) concentrations and influence Earth's weather and climate patterns, predominantly through the reduction of energy exiting Earth's atmosphere, which results in an increased energy budget. A key objective for the UN Framework Convention on Climate Change (UNFCCC) is to investigate how Earth observations from space could support the UNFCCC and the Paris Agreement in closing Earth's energy budget imbalance. Improving global LST observations from satellite data, in aid of better climate warming predictions, is crucial to fulfilling this.
All-sky LST observations are required and crucial for many climate applications. Clear-sky bias is a key problem with infrared (IR) observations and a challenge for climate science, while the lower accuracy and spatial resolution of microwave (MW) LSTs can also be an issue, particularly as observations are required at increasingly high resolution for model simulations. For these applications, a combination of IR and MW LSTs is a key step forward. We will use information on the differences between the validation and inter-comparison activity products to correct the least accurate LST product. We will also use the relationship between LST and land surface air temperature (LSAT) to improve our understanding of the clear-sky bias.
Through a PhD project within the National Centre for Earth Observation (NCEO), interfacing with the ESA Climate Change Initiative Land Surface Temperature project, we aim to better understand the diurnal variability in global LST. This will be achieved by creating the first fully integrated all-weather LST dataset that can be evaluated against climate models and other temperature datasets. Here I will show some first results on the merging of these LST data.
TRISHNA is an Indian-French high spatio-temporal resolution satellite which will provide users with global surface temperature measurements at local scale, for better monitoring of the water cycle.
The TRISHNA satellite carries both an innovative multi-channel thermal infrared instrument and a visible and short-wave infrared instrument, which will scan the entire Earth surface every 3 days. TRISHNA's scientific objectives are linked to ecosystem stress and water use (better management of water resources), coastal and inland waters (water quality, fish resources, sea ice), urban microclimate monitoring (characterization of urban heat islands), solid earth (detection of thermal anomalies), the cryosphere (monitoring of snow and ice) and the atmosphere (water content, cloud characterization).
The radiance in the TIR atmospheric window depends on the temperature and emissivity of the surface being observed. The retrieval of surface temperature (T_s) and emissivity from multispectral measurements is an underdetermined problem: the total number of measurements available (N bands) is always less than the number of variables to be solved for (emissivity in N bands plus one surface temperature).
The Temperature Emissivity Separation (TES) algorithm was initially developed by the ASTER Temperature Emissivity Working Group (TEWG) to tackle the issue of surface temperature/emissivity separation efficiently. TES is a hybrid algorithm that capitalizes on the strengths of previous algorithms, in particular the Normalized Emissivity Method (NEM) and the Minimum-Maximum Difference (MMD) algorithm. More specifically, the TES algorithm is based on the hypothesis that, over N_B ≥ 3 channels in the TIR domain, the TIR emissivity spectrum of a natural surface contains at least one value close to unity.
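In compact form, the quantities TES works with can be written as follows (a standard formulation of the method; the regression coefficients quoted in the comment are those published for ASTER and are given only as an illustration):

```latex
% Surface-leaving radiance in band i after atmospheric correction:
L_{s,i} = \varepsilon_i \, B_i(T_s) + (1 - \varepsilon_i)\, L^{\downarrow}_{i}
% NEM first guess: assume a fixed maximum emissivity (e.g. 0.99),
% invert B_i for T_s, then derive relative band emissivities \beta_i.
% MMD closure: regress the minimum emissivity against the spectral contrast,
\mathrm{MMD} = \max_i \beta_i - \min_i \beta_i, \qquad
\varepsilon_{\min} = a - b \,\mathrm{MMD}^{\,c}
% For ASTER, a = 0.994, b = 0.687, c = 0.737 (Gillespie et al., 1998).
```

The (εmin; MMD) regression is the step that TRISHTES recalibrates per scene class, as described below.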
In the context of the TRISHNA mission, we propose a new version of the TES algorithm: TRISHTES. TRISHTES is based on the fact that T_s can be expressed from the theoretical surface-leaving radiance for each band i, L_(s,i), obtained after atmospheric correction, and that multiple TRISHTES MMD relationships can be calibrated for multiple classes of observed scenes. Each of these calibrations is associated with an emissivity dataset whose characteristics differ from the other datasets, and is applied depending on the class of the observed scene. We defined four classes: a first "greybody" class characterized by spectra with εmin > 0.95; a second class that groups the spectra classically characterizing the (εmin; MMD) relationship for natural surfaces; a third class that groups spectra following a similar distribution to the classical one, but with lower εmin; and a fourth class containing the manmade spectra. Operationally, a first guess of the emissivity is derived using TRISHNA VSWIR data and used to classify each pixel into one of the four classes.
The TRISHTES algorithm is currently considered to become the operational algorithm for TRISHNA temperature and emissivity retrievals. TRISHTES improves performance over the original TES method by up to a factor of 2, and improves the performance of TES on greybodies, which include surfaces of interest for the TRISHNA mission such as dense vegetation and water bodies. Moreover, results show that retrieval performance is better than with other methods (such as split-window methods) because of TRISHTES' lower sensitivity to uncertainties in the algorithm initialization.
Generation of long-term Land Surface Temperature (LST) series from Thermal InfraRed (TIR) sensors on board polar orbiting or geostationary satellites has usually been based on the application of Split-Window (SW) techniques. SW algorithms over land also require as input a correction for the surface emissivity, usually estimated from vegetation indices or classification-based approaches using Visible and Near-Infrared (VNIR) bands. These algorithms have systematically been used because of the dominant spectral configuration of most low-resolution Earth Observation sensors, with only two bands in the 10.5-12 µm spectral region.
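A generic form of such SW algorithms can be written as follows (the widely used generalized formulation, shown for illustration; the coefficients c_k are sensor-specific and obtained by regression):

```latex
% T_i, T_j: brightness temperatures in the two ~11 and ~12 um bands,
% \varepsilon: mean band emissivity, \Delta\varepsilon: band difference,
% W: total column water vapour.
\mathrm{LST} = T_i + c_1 (T_i - T_j) + c_2 (T_i - T_j)^2 + c_0
             + (c_3 + c_4 W)(1 - \varepsilon) + (c_5 + c_6 W)\,\Delta\varepsilon
```

The emissivity terms make explicit why the a priori emissivity, usually taken from vegetation indices or land cover classes, propagates directly into the LST estimate.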
However, LST retrieval from SW algorithms, with surface emissivity estimated from classifications and/or vegetation indices, may be problematic in some landscapes because the emissivity of land surfaces is heterogeneous and depends on many factors, such as soil moisture and surface compositional changes, which are not characterized by land cover maps. A reduction in LST uncertainty due to improved emissivity knowledge could be beneficial for long time series if the accuracy of the joint retrieval of temperature and emissivity can be verified.
In the framework of the ESA LST Climate Change Initiative (CCI) project, we propose the application of the Temperature and Emissivity Separation (TES) method, which combines TIR data in different spectral bands and provides both LST and Land Surface Emissivity (LSE) by solving the radiative transfer equation, thus reflecting the real conditions of the surface. The work proposed here is complementary to efforts currently undertaken by other entities such as NASA. To this end, the whole Moderate Resolution Imaging Spectroradiometer (MODIS) database will be processed with the TES algorithm in order to generate LST and LSE Essential Climate Variable (ECV) products, which can be useful for global trends and for local-scale LST climate applications, such as urban areas, agricultural land, or semi-arid areas. An additional benefit of the computed LSE product is the retrieval of global maps that can be used as input to classic SW algorithms for the generation of long-term LST series.
Finally, these retrievals will be compared to the classical SW approaches and validated using in situ measurements to assess their feasibility and performance. Other global TES products will also be included in the comparison.
Radiometric surface temperature (Tr) obtained from thermal infrared (TIR) remote sensing is routinely used as a surrogate for aerodynamic temperature (T0) in the surface energy balance (SEB) models used for mapping evaporation (E) and sensible heat (H) fluxes. However, the relationship between the two temperatures is both non-unique and poorly understood. While Tr corresponds to a weighted soil and canopy temperature as a function of radiometer view angle, T0 represents an extrapolated air temperature profile at an 'effective depth' within the canopy at which the sensible heat flux arises. This depth is often referred to as the 'source-sink' height of the canopy, and at this point Tr and T0 can differ by several degrees. As a result, using them interchangeably could lead to large errors in evaporation flux estimates, particularly in arid and semiarid climates. The most common approaches adopted in SEB models to accommodate the inequality between Tr and T0, such as the 'kB-1 excess resistance' approach used in one-source models and the contrasting empirical parameterizations of aerodynamic conductance used in two-source models to segregate the soil and canopy component temperatures, very often call their theoretical soundness into question.
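The one-source bulk-transfer relations underlying this discussion can be summarized as follows (standard Monin-Obukhov notation, shown to make the role of kB-1 explicit):

```latex
% Sensible heat flux with the aerodynamic temperature T_0:
H = \rho c_p \, \frac{T_0 - T_a}{r_{ah}}
% Substituting the radiometric temperature T_r requires an excess
% resistance r_{ex}, commonly parameterized through kB^{-1}:
H = \rho c_p \, \frac{T_r - T_a}{r_{ah} + r_{ex}}, \qquad
r_{ex} = \frac{kB^{-1}}{k\,u_*}, \qquad
kB^{-1} = \ln\!\left(\frac{z_{0m}}{z_{0h}}\right)
```

Here r_ah is the aerodynamic resistance to heat transfer, u_* the friction velocity, k the von Karman constant, and z_0m, z_0h the roughness lengths for momentum and heat; assuming a constant kB-1 is exactly the simplification the study tests against flux-inverted T0.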
The present study uses the analytical evaporation model STIC (Surface Temperature Initiated Closure) to demonstrate a direct retrieval method for T0 that enables an investigation of the aerodynamic versus radiometric surface temperature paradox for a broad spectrum of ecohydrological regimes. T0 retrievals through STIC, forced with Tr from ESA CCI+ land surface temperature products and in-situ meteorological datasets, were evaluated against inverted T0 retrieved from direct flux observations in water-limited (arid and semi-arid) and energy-limited (mesic) ecosystems in Australia from 2011 to 2018.
Comparison of STIC T0 versus inverted T0 revealed a significant positive association (correlation coefficient, r = 0.76 - 0.88, p-value < 0.05) with a heteroscedastic pattern in the semiarid ecosystems, and the differences between the two consistently increased with increasing H. The difference between STIC-derived T0 and inverted T0 was significantly correlated with the product of wind speed and surface-air temperature difference in the arid and semiarid ecosystems (r = 0.40 - 0.50, p-value < 0.05). This implies that assuming a constant kB-1 does not adequately capture the expected variation in flux-inverted T0. In arid and semiarid ecosystems, declining canopy-stomatal conductance and evaporative fraction in response to increasing atmospheric vapor pressure deficit (VPD) led to an increase in sensible heat flux and a simultaneous increase in aerodynamic conductance and air temperature (Ta). Strong vegetation-atmosphere coupling due to high aerodynamic conductance restricts the Tr-Ta difference, which is compensated through increasing T0, thus increasing the Tr-T0 difference. Our study indicated the possible existence of biophysical homeostasis depending on the canopy-stomatal conductance response to VPD and vegetation characteristics, and the inequality of T0 versus Tr is thought to have evolved largely as a consequence of this homeostasis for a given fractional canopy cover. The reshaping of the Tr-Ta difference due to surface temperature homeostasis is a thermoregulation mechanism by which vegetation survives in water-scarce environments.
A method for detecting volcanic ash from Sentinel-1 C-band satellite data.
Among natural hazards, volcanoes represent one of the most dangerous for both people and the surrounding environment. Hundreds of eruptions are recorded each year, often putting people at serious risk and causing enormous economic and environmental damage. Mt. Sakurajima is an active volcano in Japan. The main aim of this study is to detect the spatial distribution pattern of volcanic ash and, in addition, to study the relationship between the existing ash around the volcanic area and the spectral indicators Normalized Difference Vegetation Index (NDVI) and Land Surface Temperature (LST) derived from Landsat 8 satellite data for the Mt. Sakurajima volcano of Japan. A technique for improved detection of volcanic ash has been developed that utilizes the coherence of interferometric pairs of Sentinel-1 C-band data together with NDVI and LST from Landsat-8. In addition, we investigated a multi-temporal approach in order to accurately map the volcanic ash, since wind can cause decorrelation. We have statistically analyzed the temporal behavior of coherence and identified anomalies. The results are encouraging for the future development of a new empirical model in combination with data from forecasting models.
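The temporal coherence analysis described above can be sketched as a simple robust anomaly test. The median/MAD thresholding rule and the function below are illustrative assumptions, not the authors' exact statistical procedure:

```python
import numpy as np

def coherence_anomalies(coh_series, threshold=2.0):
    """Flag drops in interferometric coherence (potential ash
    deposition or wind-driven decorrelation): a sample is anomalous
    when it falls more than `threshold` robust standard deviations
    below the series median."""
    coh = np.asarray(coh_series, dtype=float)
    med = np.median(coh)
    mad = np.median(np.abs(coh - med)) * 1.4826  # robust sigma estimate
    if mad == 0.0:
        return np.zeros_like(coh, dtype=bool)
    return (med - coh) / mad > threshold
```

A median/MAD test is used here rather than mean/standard deviation so that a single strong decorrelation event does not inflate the baseline spread it is being tested against.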
Passive microwave observations can be used to gather information on the atmosphere and the Earth's surface that has proved to be very valuable, especially under cloudy skies.
These observations are assimilated as part of global numerical weather prediction systems, to constrain surface geophysical parameters or estimate atmospheric profiles. They can also be used to directly estimate various parameters such as land surface temperature, especially for 'all-weather' estimates.
In these different usages, a contribution from the surface has to be taken into account in the radiative transfer equation.
In most situations where the observations are performed above moist soils, the layer contributing to the microwave signal can be considered to be a skin layer, comparable to the one seen by infrared instruments in clear sky.
However, in some arid areas, the radiation penetration depth can be larger than a wavelength. Indeed, the attenuation of the microwave radiation described by the soil dielectric properties can be very low, especially at lower frequencies. Therefore, the emitting layer of the microwave signal can be deeper in the sub-surface.
The diurnal variation of land surface temperature is propagated by conduction into the sub-surface. This heat propagation is described by the Fourier diffusion equation and leads to differences of up to 20 K between the surface temperature and the temperature of the emitting layer.
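The analytical solution of the Fourier diffusion equation for a sinusoidal surface forcing makes this damping explicit: the diurnal wave decays exponentially and is phase-delayed with depth. The thermal diffusivity value below is an assumed dry-sand figure used only for illustration:

```python
import math

def diurnal_temperature_at_depth(z, t_hours, t_mean, amplitude,
                                 diffusivity=5e-7):
    """Analytical solution of the Fourier diffusion equation for a
    sinusoidal diurnal surface forcing: the temperature wave at
    depth z (m) is exponentially damped by exp(-z/D) and delayed by
    z/D radians, where D is the diurnal damping depth.
    diffusivity (m2 s-1) is an assumed dry-sand value."""
    omega = 2.0 * math.pi / 86400.0                  # diurnal frequency
    damping_depth = math.sqrt(2.0 * diffusivity / omega)
    phase = omega * t_hours * 3600.0 - z / damping_depth
    return t_mean + amplitude * math.exp(-z / damping_depth) * math.sin(phase)
```

With a 25 K surface amplitude and these assumed soil properties, the temperature roughly 10 cm down already differs from the surface by close to 18 K at the daily peak, of the same order as the up-to-20 K differences noted above.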
This discrepancy has been noticed in several studies comparing land surface temperatures estimated from microwave observations with those estimated from infrared observations.
The Global Precipitation Measurement (GPM) Microwave Imager (GMI) is a passive microwave imager with channels between 10 and 183 GHz. Its main difference from other microwave imagers is its non-sun-synchronous orbit, which provides measurements of the Earth's surface at all times of day. Multiple observations can be combined to create diurnal cycles of brightness temperatures.
A method combining the observed microwave brightness temperatures, the soil temperature profile and the atmospheric contribution can be used to simultaneously estimate the emissivities in both polarizations and the emitting depth at different frequencies. The modelling of the soil temperature profile relies on a prescribed land surface temperature diurnal cycle, and the atmospheric contributions are derived from temperature and humidity profiles, both based on ERA5 reanalysis data. These are collocated with the diurnal cycle of brightness temperatures observed by the GMI instrument over its lifespan (2015-2021).
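A heavily simplified sketch of such a simultaneous estimation is given below, reduced to a single channel, a neglected atmosphere, a linear emissivity-temperature relation, and a brute-force grid search. The function name, grid bounds, and diffusivity value are illustrative assumptions, not the method actually used in this work:

```python
import numpy as np

def fit_emissivity_and_depth(t_obs_hours, tb_obs, t_mean, amplitude,
                             diffusivity=5e-7):
    """Grid-search fit of a constant emissivity and an effective
    emitting depth to an observed diurnal brightness-temperature
    cycle (single channel, atmosphere neglected). The sub-surface
    temperature follows the analytical diurnal diffusion solution,
    damped and delayed with depth."""
    omega = 2.0 * np.pi / 86400.0
    d_damp = np.sqrt(2.0 * diffusivity / omega)
    t_sec = np.asarray(t_obs_hours, dtype=float) * 3600.0
    tb_obs = np.asarray(tb_obs, dtype=float)
    best = (None, None, np.inf)
    for emis in np.linspace(0.80, 1.00, 41):          # step 0.005
        for depth in np.linspace(0.0, 0.5, 101):      # step 0.005 m
            t_soil = t_mean + amplitude * np.exp(-depth / d_damp) * \
                np.sin(omega * t_sec - depth / d_damp)
            # linear (Rayleigh-Jeans-like) brightness temperature model
            cost = np.sum((emis * t_soil - tb_obs) ** 2)
            if cost < best[2]:
                best = (emis, depth, cost)
    return best[0], best[1]
```

The two unknowns are separable in this toy setting: the emissivity scales the whole diurnal cycle, while the emitting depth controls only the amplitude damping and phase delay, so a cycle sampled at several times of day constrains both.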
Using this dataset, monthly estimates of the emitting depth and the emissivities have been obtained over arid areas on a global scale.
These estimates of the emitting depth at frequencies between 10 and 89 GHz can be used to build dielectric property maps of arid areas, providing new insights into the spatial distribution of some geological features such as sandy areas. These results can be used to derive a correction of the difference between the emitting-depth temperature and the skin temperature at any time of day between 10 and 89 GHz for all arid areas.
This correction could be useful to make the estimates of land surface temperature based on passive microwaves more reliable over arid areas.
Forests can decrease and buffer extreme daytime surface temperature through evapotranspiration and other pathways of conversion and storage of solar energy. Buffering of extreme temperatures in both urban and rural areas is an ecosystem service, benefiting both human wellbeing and wildlife habitat suitability. However, little is known about how restoration of forests affects the rate, timing, and amount of temperature buffering.
Our study assessed the effects of forest restoration on the rate, timing, and strength of thermal buffering capacity for two groups of restoration sites in Southern Ontario, Canada. The two groups of sites were ~130 km apart and restored from agriculture towards forest by two different organizations between 2007 and 2019. We used 29 Land Surface Temperature (LST) and 9 Evapotranspiration (ET) image data products from the ECOSTRESS thermal imager captured during the 2020 growing season. ECOSTRESS is useful for monitoring restoration and conservation projects with higher temporal frequency and variation than what is available from Landsat satellites. Many of these sites and projects are also too small and fragmented for the spatial resolution of MODIS and Sentinel thermal imagers. We compared restoration sites (total n = 43) with paired mature forest sites (n = 43), representing the post-restoration state, nearby agriculture sites (n = 20), representing the pre-restoration state, and suburban residential sites (n = 8), representing a common alternative land-use for abandoned farmland. Temperature measurements for all site types were taken relative to that of the largest protected mature forest in the area.
We found that the temperature difference between all site types peaked in the early afternoon (1-3 pm) for both groups of sites. We found significant differences between restoration sites and both agriculture and residential sites for both groups of sites. Between 12 and 4 pm, restored sites were 8.3 ± 3.8 ℃ cooler than residential sites and 5.4 ± 2.3 ℃ cooler than agricultural sites. Mature forests and restoration sites were not significantly different in either group of sites. Temperature variability over the 24-hour diurnal cycle, measured as the standard deviation (s) relative to a large, protected forest control site, was not significantly different between mature forests (s = 1.6 ± 0.8 ℃) and restoration sites (s = 2.7 ± 1.1 ℃). In contrast, restoration sites and mature forests did have significantly less variability in relative diurnal temperature than agriculture (s = 4.4 ± 1.2 ℃) and residential sites (s = 4.3 ± 1.3 ℃). We found that daytime temperature decreased significantly, by 0.1 ℃, or 3.1 %, per year since restoration for one of the groups of sites relative to nearby mature forest sites. We also characterized the absolute and relative ET dynamics of sites in one of the groups. We found that younger restoration sites have a higher overall ET than older ones, with a significant daytime relative instantaneous ET decrease of 0.8 W/m2, or 5 %, per year for sites 1 to 14 years old.
Improving our understanding of the timing and capacity of restored forests to buffer extreme temperatures is essential to better utilize and promote forest restoration as a tool for local climate change adaptation. Creating a semi-automatic GIS tool for restoration and conservation managers to monitor and assess changes in thermal buffering at their sites, based on ECOSTRESS and other thermal imagery data, would provide another strong and easy-to-grasp argument when reporting to funders and in public outreach.
The NASA Surface Biology and Geology (SBG) mission, slated for launch in early 2028, is a core component of NASA's new Earth System Observatory to improve our understanding of vegetation processes, aquatic ecosystems, urban heat islands and public health, snow/ice, and volcanic activity. SBG will include both a visible to shortwave infrared (VSWIR) spectrometer and an infrared radiometer with two midwave infrared (MIR: 3-5 micron) and five thermal infrared (TIR: 8-12 micron) bands. Here, we leverage the SBG bands for three key objectives: (1) to evaluate the performance of a suite of algorithms for detecting high-temperature phenomena (hotspots) such as lava flows and wildfires at a spatial resolution of 60 m; (2) to model lava/fire properties such as temperature distribution, area, and Fire/Volcano Radiative Power at a sub-pixel scale; (3) to examine how the inclusion of the 4.8 micron MIR band can improve the detection of temperatures (and the corresponding hot-area fraction) within the 500-800 K range.
We approach this by modeling the at-sensor SBG radiances using the spectral response functions and instrument noise model combined with high-resolution airborne HyTES data over two sample sites using MODTRAN. The following regions form our sample sites: (a) a small fire in Arizona with a thermal range of 400-800 K, and (b) a lava flow on Kilauea, Hawaii, encompassing a thermal range of 600-1200 K. We then apply three types of algorithms to estimate the sub-pixel lava/fire temperature and the corresponding fraction. First, we test the Normalized Temperature Index (NTI) method and determine the NTI detection thresholds for SBG. Second, we implement well-established dual- and multi-component modeling algorithms, in which we invert the Planck function for different combinations of VSWIR, MIR, and TIR band observations to compute sub-pixel thermal components (temperatures) and their fractional areas. Third, we test the multiple endmember spectral mixture analysis (MESMA) algorithm to determine the relative areas of predetermined thermal components. We conclude by comparing the accuracy of each algorithm in replicating the sub-pixel thermal distribution and quantifying their limitations at different noise levels.
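The dual-component inversion can be sketched as a classic two-band (Dozier-style) solve: for each candidate hot temperature, the hot-area fraction follows linearly from the MIR radiance, and the TIR residual selects the best pair. The band centres, temperature grid, and assumed-known background temperature below are illustrative assumptions, not the exact configuration used in the study:

```python
import numpy as np

H_PLANCK = 6.626e-34   # Planck constant (J s)
C_LIGHT = 2.998e8      # speed of light (m s-1)
K_BOLTZ = 1.381e-23    # Boltzmann constant (J K-1)

def planck(wavelength_um, temp_k):
    """Blackbody spectral radiance (W m-2 sr-1 um-1)."""
    wl = wavelength_um * 1e-6
    rad = (2.0 * H_PLANCK * C_LIGHT**2 / wl**5) / \
        (np.exp(H_PLANCK * C_LIGHT / (wl * K_BOLTZ * temp_k)) - 1.0)
    return rad * 1e-6  # per metre of wavelength -> per micrometre

def solve_dual_component(l_mir, l_tir, t_bg, wl_mir=4.8, wl_tir=10.3):
    """Dozier-style two-band inversion: recover the sub-pixel hot
    temperature and its area fraction from mixed-pixel MIR and TIR
    radiances, assuming a known background temperature t_bg."""
    best = (None, None, np.inf)
    b_mir_bg = planck(wl_mir, t_bg)
    b_tir_bg = planck(wl_tir, t_bg)
    for t_hot in np.arange(400.0, 1300.0, 0.5):
        # fraction that reproduces the MIR radiance exactly
        f = (l_mir - b_mir_bg) / (planck(wl_mir, t_hot) - b_mir_bg)
        if not 0.0 < f <= 1.0:
            continue
        resid = abs(f * planck(wl_tir, t_hot) + (1.0 - f) * b_tir_bg - l_tir)
        if resid < best[2]:
            best = (t_hot, f, resid)
    return best[0], best[1]
```

The MIR band does most of the work here because the Planck radiance around 4.8 um is far more sensitive to a small hot fraction than the TIR bands, which is precisely why adding that band helps in the 500-800 K range.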
The rate at which global climate change is happening is arguably the most pressing environmental challenge of the century, and it affects our cities. Temperature is one of the most important parameters in climate monitoring and Earth Observation (EO) systems, and advances in remote sensing science increase the opportunities for monitoring surface temperature from space.
The EO4UTEMP project examines the exploitation of EO data for monitoring the urban surface temperature (UST). Large variations in surface temperature can be observed within a couple of hours, particularly over urban surfaces. The geometric, radiative, thermal, and aerodynamic properties of the urban surface are unique and exert particularly strong control on the surface temperature. EO satellites provide excellent means for mapping the land surface temperature, but the particular properties of the urban surface and the unique urban geometry, in combination with the trade-off between temporal and spatial resolution of current satellite missions, necessitate the development of new, sophisticated surface temperature retrieval methods particularly designed for urban areas.
EO4UTEMP exploits multi-temporal, multi-sensor, multi-resolution EO data for UST retrieval at the local scale (100 m), capable of resolving the diurnal variation of UST and contributing to the study of the urban energy balance. In the first phase of the EO4UTEMP project, information from multi-source satellite data was used to estimate parameters related to the geometric, radiative, thermal, and aerodynamic properties of the urban surface. Very high spatial resolution imagery (SPOT5) was used to derive static land cover fractions. Very high resolution Digital Surface Models (DSMs) were used to derive the 3D (three-dimensional) city information. Parameters like the sky-view factor, the canyon aspect ratio, the plan area index and the frontal area index were derived. The impact of those parameters on UST was assessed using time series of LST (land surface temperature) and emissivity products from ECOSTRESS. The findings from the EO4UTEMP project will be used to improve the emissivity estimation for accurate UST retrievals from high spatial resolution missions. Downscaling approaches will then be applied to retrieve accurate UST from low spatial resolution missions to achieve high spatio-temporal UST coverage.
Land Surface Temperature (LST) is a parameter related to multiple Earth surface processes. Some of the most well-known applications are vegetation studies aiming to understand the role of LST in the evapotranspiration process, or studies of the surface temperature of the oceans (Sea Surface Temperature, SST) trying to comprehend the energy exchanges between oceans and atmosphere and their impact on climate. This parameter is often analyzed through ground data but can also be observed by means of remote sensing. LST monitoring by remote sensing is also of great importance in the cryosphere field, as it allows us to better understand the energy exchanges between the atmosphere and the snow-covered or glaciated surfaces of the Earth on a larger scale. Snow surfaces play a relevant role in the global energy balance of the planet since, owing to their bright color, they reflect a large fraction of the incident radiation back to the atmosphere, thus slowing the melting of mountain glaciers, seasonal snow, polar ice caps and sea ice.
Snow processes and metamorphism are very sensitive to air temperature changes. A variation from 0 °C to 1 °C can trigger the onset of snow melt. During this melting process, snow grains undergo changes in size and shape, and thus in their capacity for reflecting the incident radiation, which means the albedo changes. In this sense, studying the relation between LST and snow grain size helps us to better understand how the variation of these two parameters is correlated.
As many studies have demonstrated in the past, snow albedo is a very relevant parameter for many Earth processes, such as the Earth's energy balance. At the hydrological basin level, it can influence the conditions and timing at which snow releases fresh liquid water during the melting season. Thus, its accurate knowledge is extremely important to better understand the many subsystems that depend on the seasonal snow cycle, such as vegetation and fauna, but also many economic sectors such as hydropower and agriculture.
Within the frame of the ESA Alpine Regional Initiative project AlpSnow (2020-2022), we aim at developing snow albedo and snow grain size retrieval methods using two different approaches, proposed by Painter et al. (2009) and Kokhanovsky et al. (2019). The first method is an empirical approach based on spectral indices, and the second is a physical approach. Both are applied to Sentinel-3 OLCI satellite data. To test both algorithms, a short time series has been analyzed from the beginning of the 2018 hydrological year until the melting season of 2021. For grain size, the results of the comparison between ground data and satellite estimates indicate that the class with low grain-size values is well represented. This is especially evident in the months of January and February, in which the in-situ measurements also show large grain sizes under exceptionally dry snow conditions. Indeed, it is known that the snow temperature gradient can change grain shape and size, with mass transfer from warmer to colder grains causing grain growth and typically forming faceted and surface hoar grains (Colbeck, 1983; 1989). In March, the snow grain sizes are quite variable, without any clear trend, while in April, satellite estimates show a high percentage (around 85%) of large grain sizes. To further assess the behavior of snow grain size, the satellite estimates were compared with LST obtained from both ground measurements (available from snow pits) and satellite imagery (MODIS and ECOSTRESS). The comparison indicates a strong relationship between the grain size evolution from winter to spring and LST changes, thus clearly revealing the aging process (as shown in Figure 1).
In this respect, LST can be seen as a relevant parameter for understanding snow grain metamorphism (and consequently albedo changes) and, given this strong relationship, as a predictor of snowpack evolution, especially during the melting phase.
In the presentation, we will show the results obtained with the two proposed algorithms for albedo and grain size by exploiting Sentinel-3 OLCI imagery from 2018 to 2021. Moreover, we will show and discuss the correlation of grain size variability with LST on both temporal and spatial scales.
References:
Colbeck, S. C. (1983): Theory of metamorphism of dry snow. J. Geophys. Res. 88, 5475–5482.
Colbeck, S. C. (1989): On the micrometeorology of surface hoar growth on snow in mountainous area. Boundary Layer Meteorol. 44, 1–12.
Kokhanovsky, A., M. Lamare, A. Danne, C. Brockmann, M. Dumont, G. Picard, L. Arnaud, V. Favier, B. Jourdain, E. Le Meur, B. Di Mauro, T. Aoki, M. Niwano, V. Rozanov, S. Korkin, S. Kipfstuhl, J. Freitag, M. Hoerhold, A. Zuhr, D. Vladimirova, A.-K. Faber, H.C. Steen-Larsen, S. Wahl, J.K. Andersen, B. Vandecrux, D. van As, K.D. Mankoff, M. Kern, E. Zege, and J.E. Box (2019): Retrieval of Snow Properties from the Sentinel-3 Ocean and Land Colour Instrument. Remote Sens., 11, DOI:10.3390/rs11192280.
Painter, T.H., K. Rittger, C. McKenzie, P. Slaughter, R.E. Davis, and J. Dozier (2009): Retrieval of subpixel snow covered area, grain size, and albedo from MODIS. Remote Sens. Environ., 113(4). DOI:10.1016/j.rse.2009.01.001.
Since Land Surface Temperature (LST) is a key variable for monitoring the Earth climate system, the World Meteorological Organization regards it as an essential climate variable. The Global Climate Observing System (GCOS) recommends an uncertainty threshold for satellite-retrieved LST of ±1 K for accuracy (i.e. systematic uncertainty) and ±1 K for precision (i.e. random uncertainty).
The Sea and Land Surface Temperature Radiometer (SLSTR) is on board the Sentinel-3A and Sentinel-3B spacecraft, which were launched in February 2016 and April 2018, respectively. Here we propose an explicitly angular and emissivity-dependent split-window algorithm (SWA) for LST retrieval from SLSTR data. The SWA coefficients were obtained using the Cloudless Land Atmosphere Radiosounding (CLAR) database, and the retrieved LST and the Sentinel-3A SLSTR LST operational product (baseline collections 003 and 004) were validated over the Valencia rice paddy site against in-situ LST measurements acquired between 2016 and 2020.
Due to rice phenology, over the year the validation site changes its land cover, i.e. it exhibits three different homogeneous land cover types: flooded soil in December, January and June; bare soil in February and March; and full vegetation cover from July to September. Thus, the rice paddy site allows us to validate the proposed SLSTR SWA over three different land cover types. An LST validation station at the site continuously records radiometric measurements. The station is equipped with two SI-121 Apogee radiometers, one looking downwards and one looking upwards; the latter is required to obtain the downwelling hemispherical radiance. The SI-121 radiometer measures radiance within the 8 - 14 μm spectral range and, based on manufacturer and laboratory calibrations, has an uncertainty of ±0.2 K over the relevant temperature range.
The proposed algorithm uses SLSTR radiances in the channels at 11 and 12 μm as well as water vapor content, which are both provided in the SLSTR Level 1 product (baseline 003). Furthermore, for this validation exercise we used known in-situ emissivities for each land cover. The validation results for the SWA LST showed an overall accuracy of -0.4 K and a precision of 1.1 K (median and robust standard deviation, respectively). For each surface, the accuracy (precision) was 0.0 K (0.6 K) for flooded soil, -0.2 K (0.9 K) for bare soil and -0.7 K (1.2 K) for full vegetation. For the same period, the operational SLSTR LST product had an overall accuracy (precision) of 1.3 K (1.3 K). Therefore, over the rice paddy site the explicit angular and emissivity-dependent SWA met the GCOS accuracy threshold of 1 K for the three land covers, while the precision threshold was met for bare soil and flooded soil. Our results agree with previous studies, e.g., Yang et al. (2020) and Zhang et al. (2019), in which LST retrieved with emissivity-dependent SWAs also performed better than the operational SLSTR LST product. However, the SWA proposed here is emissivity-dependent as well as angle-dependent; thus, the atmospheric effects for large viewing angles are better represented.
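An emissivity- and water-vapour-dependent split-window retrieval of the kind described has the following generic structure. The coefficient values shown are illustrative figures of a Jiménez-Muñoz and Sobrino-type formulation, not the coefficients derived here from the CLAR database, and the sketch omits the explicit view-angle dependence of the proposed algorithm:

```python
def split_window_lst(t11, t12, emis_mean, emis_diff, wv,
                     coeffs=(0.268, 1.378, 0.183, 54.30, -2.238,
                             -129.20, 16.40)):
    """Generic emissivity- and water-vapour-dependent split-window
    LST (K) from brightness temperatures at 11 um (t11) and 12 um
    (t12). emis_mean and emis_diff are the mean and difference of
    the channel emissivities; wv is the total column water vapour
    (g cm-2). Coefficient values are illustrative placeholders."""
    c0, c1, c2, c3, c4, c5, c6 = coeffs
    dt = t11 - t12
    return (t11 + c1 * dt + c2 * dt ** 2 + c0
            + (c3 + c4 * wv) * (1.0 - emis_mean)
            + (c5 + c6 * wv) * emis_diff)
```

The known in-situ emissivities mentioned above enter through emis_mean and emis_diff, which is what distinguishes this class of explicit-emissivity SWAs from retrievals that fold emissivity into fixed per-class coefficients.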
A period of exceptionally heavy rainfall across many parts of East Africa from late 2019 to early 2020, followed by above-average rainfall throughout 2020, triggered devastating floods, destroying livelihoods and displacing millions of people across the region, and led to a significant rise in water levels of several East African lakes.
South Sudan is perhaps the country hardest hit, severely affected by sustained flooding for more than two years now, exacerbating an already complex humanitarian emergency with an estimated 60% of the population facing acute food insecurity. The hydrology of South Sudan is complex, and several factors determine the spatial distribution and duration of flooding (or drought) in the country for any given season. The flat topography and the extensive floodplains made up of vertisol soils (virtually impervious following torrential rains) render substantial portions of the country prone to pluvial flooding. Furthermore, the White Nile and its tributaries, flowing through vast wetlands, can cause substantial riverine flooding often obscured by vegetation, which can lead to under-detection of flood- and wetland-affected areas when conventional methods based on optical or SAR satellite data are used.
In support of humanitarian decision making, an analysis based on innovative processing of thermal data has been performed to track the flood and wetland situation since 2019 over the full country with high temporal frequency. The full archive of MODIS Aqua thermal data is processed using pixel-optimized smoothing and gap-filling to derive dekadal flood and wetland extents by employing a dynamic thresholding technique. Combined with thermal data from Sentinel-3 for synoptic monitoring, and complemented by multi-spectral data from Sentinel-2 and Landsat-8 and SAR data from Sentinel-1, the LST-based analysis enabled timely and detailed analysis at various scales. In addition, an analysis of the history of seasonal flooding was carried out, highlighting the uniqueness of the current flood episode.
The increased use of land surface temperature (LST) in the assessment of energy and water transfers between Earth's surface and atmosphere has driven the development of ever more accurate estimations of LST from satellite observations. Numerous algorithms have been developed over the years, with diverse solutions to account for land surface emissivity and atmospheric effects on the LST estimation. However, one type of atmospheric correction still in need of improvement concerns the effects of aerosols. Although this effect is much less significant than that of water vapor, in the case of heavy aerosol loading the atmospheric transmissivity in the 10-12 µm region (a range typically used for LST retrieval) decreases considerably, which presumably affects the LST estimation.
This study analyses the impact of heavy dust aerosol loading on satellite LST retrievals by comparing SEVIRI and MODIS (MxD11 and MYD21) LST products with ERA5's skin temperature (SKT) across the Saharan desert, where abundant seasonal dust production and transport occurs. Reanalyses usually manifest a cold daytime bias when compared to satellite observations; however, we show that the bias inverts to a marked warm bias in the studied area during summer months, in concurrence with the highest dust aerosol concentrations in ECMWF's fourth-generation atmospheric composition reanalysis (EAC4). Considering that high dust aerosol concentrations should not impact ERA5's SKT, this result indicates that the sensor-algorithm combinations analysed underestimate LST under conditions of heavy aerosol loading and thus need improvements regarding this atmospheric effect.
This analysis was complemented with comparisons against in situ measurements of LST in two locations in the southern region of the Saharan desert (Niamey, Niger during 2006 and Dahra, Senegal from 2009 to 2013). These provide additional evidence for the underestimation of satellite-based LSTs with higher dust aerosol loading.
Finally, detailed examination of the SEVIRI brightness temperatures used for the LST estimation reveals that the aerosol loading seems to affect the distribution of the brightness temperature differences between the 10.8 and 12 µm channels, which in turn has a significant impact on the atmospheric correction performed by the algorithms. This work was performed within the framework of EUMETSAT’s Satellite Application Facility on Land Surface Analysis (LSA-SAF) with the purpose of improving current LST retrieval methods.
This work concerns the feasibility study for a new EO multispectral space sensor, operating in the medium infrared, designed for applications involving high-temperature events. The study was carried out in the framework of the ASI (Agenzia Spaziale Italiana) project SISSI (Super-resolved Imaging Spectrometer in the medium Infrared), aiming to improve the ground spatial resolution and mitigate saturation/blooming effects. The MWIR (Middle Wave Infra-Red) spectral region is crucial for several applications, ranging from biomass burning to geophysical phenomena. Multispectral observations in the MWIR are relevant for monitoring natural and anthropogenic hazards, in particular when performed at high spatial resolution. The SISSI payload is composed of 5 spectral channels in the range 3-5 µm with a GSD (Ground Sampling Distance) of about 15 m. Specifically, the channels are placed at 3.3, 3.5, 3.7, 3.9 and 4.8 µm with a FWHM (Full Width Half Maximum) in the range 100-200 nm. The SISSI study could bring significant contributions to different scientific challenges: fire front and active burning area analysis; detection of trace gases emitted to the atmosphere by biomass burning; flaring event analysis; hot-spot temperature estimation of lava flows; gas detection from volcanic summit craters; detection and retrieval of greenhouse gases (CH4 and CO2) by exploiting the gas absorption bands at 3.3 and 4.8 µm. Moreover, since the available satellite sensors operate mainly in the VNIR-SWIR (Visible and Near Infra-Red and Short Wave Infra-Red) and TIR (Thermal Infra-Red) spectral regions, the SISSI payload could offer the possibility to extend data acquisition to the MWIR spectral region. In particular, the SISSI project study aims to contribute to the following Scientific Challenges, defined in the document “ESA’s Earth Observation Science Strategy” (2015, ESA SP1329/1) in the framework of the “ESA’s Living Planet Programme” (2015, ESA SP1329/2):
Challenge A2 – Interaction between the atmosphere and Earth’s surface involving natural and anthropogenic feedback processes for water, energy and atmospheric composition;
Challenge L1 – Natural processes and human activities and their interaction on the land surface;
Challenge G1 – Physical processes associated with volcanoes, earthquakes, tsunamis and landslides in order to better assess natural hazards, volcanic thermal modelling and precursor analysis.
THERMOCITY: urban thermography from space
Abstract:
Satellite imagery can be used to regularly measure the surface temperature of a city or urban area. THERMOCITY aims at studying urban heat islands (in summer) and heat losses (in winter) through the development of an urban thermography analysis tool based on satellite data, providing comprehensive information to city managers. A first phase is dedicated to the processing of thermal imagery and a second to its interpretation, with constant involvement of the final users.
The first phase of the project involves identifying and improving recent spaceborne thermal data over our regions of interest, five main French metropolises: Marseille, Montpellier, Paris, Strasbourg and Toulouse. A dataset of about 10 images per city, based on ASTER and ECOSTRESS acquisitions, has been constructed. Particular attention has been paid to improving the geolocation and optimizing the emissivity/temperature separation based on the specific characteristics of the urban environment. This includes establishing uncertainty levels for all the products generated. One major problem of urban thermography from space is its limited spatial resolution for our aims. Advanced analysis techniques are therefore applied to the thermal images, which are combined with higher-resolution optical ones in order to improve their definition. All the products generated in the framework of THERMOCITY are openly available through the French land data centre THEIA.
The second phase of the project focuses on the exploitation of the thermography data. Concerning heat losses, a dedicated tool is created to characterize the thermal signatures of known buildings, while a blind search is performed in parallel to look for unexpected heat losses. The second major subject concerns urban heat islands, which are more difficult to understand and characterise than heat losses. Two approaches are used: observation of the surface urban heat island in the thermal images, and modelling of the surface and air urban heat islands with an urban climate model. A cross-analysis of these two types of products is carried out in order to understand their advantages, disadvantages and potential synergy, with the final objective of their relevant use in urban planning.
Water bodies, such as lakes or large reservoirs, are considered of importance in the context of global change, and they are sensitive to climatological and meteorological conditions. Water temperature is one of the main parameters determining ecological conditions, influencing chemical and biological processes within a lake. Earth observation plays an important role in assessing and monitoring water characterization parameters such as height, extent and radiance; hence, this study focuses on the analysis of lake surface water temperature (LSWT), taking Issyk-Kul Lake (Kyrgyzstan, Central Asia), a very large (6,236 km²) and very deep (down to 668 m) lake, as a case study.
A time series analysis of the annual (2019/2020) variation of LSWT was carried out exploiting Sentinel-3 SLSTR, a medium resolution (1 km) sensor, and the ECOsystem Spaceborne Thermal Radiometer on the International Space Station (ECOSTRESS), with a high native resolution of ~70 m. Due to cloud coverage, exploitability varies from a few to tens of images per month. In addition, cloud masks are applied to exclude LSWT values outside the plausible range within the lake, giving a better representation of the lake.
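The masking step can be sketched as follows; the scene values, cloud flags and the 0–30 °C plausibility range below are hypothetical, chosen only to illustrate the combined cloud/range screening.

```python
import numpy as np

# Hypothetical 3x3 LSWT scene (°C); -999 is a fill value, NaN a failed retrieval.
lswt = np.array([[21.5, 22.0, -999.0],
                 [20.8, np.nan, 23.1],
                 [45.0, 21.9, 22.4]])
cloud_mask = np.array([[0, 0, 1],
                       [0, 1, 0],
                       [0, 0, 0]], dtype=bool)

# Keep only cloud-free pixels with finite values in a plausible lake range.
valid = (~cloud_mask) & np.isfinite(lswt) & (lswt >= 0.0) & (lswt <= 30.0)
lake_mean = lswt[valid].mean()
print(round(lake_mean, 2))  # → 21.95
```

The 45 °C pixel (a likely land or cloud-edge contamination) is rejected by the range test even though it carries no cloud flag, which is the point of the double screening.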
Areas of Interest (AOIs) were arbitrarily defined on the lake surface, roughly along the central west-east line and in accordance with scene availability for each date, to highlight potential tendencies in the spatial distribution of temperature. Sentinel-3 data for 2019 and 2020 show that during winter the LSWT is relatively homogeneously distributed, while in summer temperatures fluctuated slightly more across the lake. The minimum temperature of ~4 °C is observed in January/February, followed by a rapid increase from March to May, reaching the maximum of ~23 °C in August, and then a constant, rapid drop of around 10 °C by October.
A validation campaign on Issyk-Kul Lake was carried out from 5 to 8 October 2021, using a Torrent Board carrying sensors to measure humidity, air temperature and skin water temperature. The temperatures registered during daytime (10:00–17:00) on those dates were generally between 13 °C and 16 °C. Sentinel-3 showed a slight difference of 0.3 °C against the Torrent Board sensors.
ECOSTRESS has unfortunately only provided datasets from June to December 2019, showing a maximum temperature of 22 °C in August and a minimum of 8 °C in December. An LSWT analysis of ECOSTRESS data for 2020 will be carried out to complete the intercomparison and observe the LSWT variability in the lake over these two years. The first intercomparison between Sentinel-3 and ECOSTRESS, after applying cloud masks to each date and product, showed that ECOSTRESS LSWT was lower on each date, with differences from 1 °C to 3 °C.
The Brazilian savanna, known as Cerrado, is the second-largest biome in Brazil. Deforestation and degradation caused by the expansion of agriculture and livestock have promoted a severe loss of biodiversity and increased fragmentation, as only about 3% of this ecosystem is fully protected in restricted conservation units. In this context, remote sensing techniques together with landscape analyses can provide insights to support the understanding of the landscape fragmentation process and how it can affect changes in biomass over time. In this study, we test the hypothesis that changes in landscape metrics, including aggregation, area and edge, diversity and shape metrics, even small ones, can have a negative impact on aboveground biomass (AGB) or carbon stocks over time. To test this hypothesis, we selected the Rio Vermelho watershed, where, due to the intense historical fragmentation process, the remaining fragments of native vegetation are still losing vegetation over time. This area is composed of agricultural areas with native vegetation fragments of grasslands, savanna and forest formations of the Brazilian Cerrado biome. We combined field inventory and LiDAR (light detection and ranging) data to estimate AGB, and landscape metrics to analyze changes in the landscape between 2014 and 2018. The relationship between landscape metrics and AGB was evaluated using a random forest (RF) model. Our results show that the local ecosystem, dominated by forest and savanna formations, presented a considerable vegetation loss. Between 2014 and 2018, the average AGB loss of forest, savanna and grassland in the area exceeded 20%. Among them, savanna showed the most pronounced decline, with a biomass loss reaching 32%.
The RF analyses showed that the landscape metrics (mean patch area, coefficient of variation of patch area, mean shape index, mean related circumscribing circle, Shannon's diversity index and Shannon's evenness index) explain about 11.07% of the changes in AGB. The mean shape index (%IncMSE = 48.1) and Shannon's evenness index (%IncMSE = 47.75) have the strongest influence on AGB. This result shows that most of the AGB loss is occurring in the remaining native vegetation fragments. Degradation within fragments and the loss of connectivity among them over time are therefore most severely affecting local-scale dynamics.
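A permutation-importance analysis in the spirit of the RF %IncMSE scores can be sketched as follows; the synthetic data, the linear stand-in model and the column roles are illustrative assumptions, not the study's fitted random forest.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: AGB change driven mainly by a "shape index" metric.
n = 500
X = rng.normal(size=(n, 3))  # columns: shape index, evenness, patch area
agb_change = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=n)

# Fit an ordinary least-squares model as a lightweight proxy for the RF.
coef, *_ = np.linalg.lstsq(X, agb_change, rcond=None)
predict = lambda M: M @ coef

def perm_importance(X, y, col):
    """Increase in MSE when one predictor is shuffled (analogue of %IncMSE)."""
    base_mse = np.mean((predict(X) - y) ** 2)
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])
    return np.mean((predict(Xp) - y) ** 2) - base_mse

imps = [perm_importance(X, agb_change, j) for j in range(3)]
print(int(np.argmax(imps)))  # the dominant driver is column 0
```

Shuffling a predictor destroys its relationship with the target, so the resulting rise in error measures how much the model relies on it, exactly the logic behind the %IncMSE scores quoted above.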
The mineralization of soil organic carbon (SOC) to carbon dioxide (CO2) is a key component of the carbon cycle. However, knowledge about the patterns of SOC mineralization in space, time and depth is still limited. Within the project Carbon4D, we aim to develop a near-real-time monitoring of SOC mineralization in space, time and depth for the Fichtel Mountains, a low mountain landscape located in the south of Germany. Soil temperature and soil moisture are undoubtedly two main drivers of SOC mineralization. As a general pattern, SOC mineralization increases with temperature, whereas extremes of soil moisture (very wet or very dry) result in a decrease. Hence, to understand and model the spatial and temporal patterns of SOC mineralization across the landscape, we first aim to investigate patterns of soil temperature and soil moisture. Ground truth data of soil temperature and soil moisture are obtained with 15 soil probes. The sensors measure both parameters in 10 cm increments down to one metre depth at 30-minute resolution. To cover several sites within the 400 km² study area, the probes are shifted monthly; around 300 different sites are thereby captured. To model soil temperature and soil moisture continuously in space, time and depth, we apply machine learning algorithms in which the measurement data are related to remote sensing data, soil and topographic information as well as meteorological data. With this approach, the main drivers of soil temperature and soil moisture, as well as their patterns in all four dimensions, are identified and analysed. In this conference contribution we present and discuss first results of this modelling approach and the controlling factors of soil temperature and soil moisture. The predictions of soil temperature and soil moisture patterns will be employed in the future to model and study the patterns and controlling factors of SOC mineralization.
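As a physical reference for the expected depth dependence of the measurements, the classic heat-conduction solution for the annual soil-temperature wave shows how the surface signal is damped and phase-shifted with depth. All parameter values below are illustrative, not site parameters for the Fichtel Mountains.

```python
import numpy as np

# Illustrative parameters for the annual soil-temperature wave (assumptions).
T_MEAN = 8.0   # °C, annual mean temperature
A_SURF = 10.0  # °C, amplitude of the annual wave at the surface
D = 2.0        # m, damping depth of the soil
OMEGA = 2 * np.pi / 365.0  # rad/day, annual angular frequency

def soil_temp(depth_m, day):
    """Damped, phase-shifted annual temperature wave at a given depth."""
    return T_MEAN + A_SURF * np.exp(-depth_m / D) * np.sin(OMEGA * day - depth_m / D)

# Amplitude at 1 m depth: reduced by exp(-0.5) ≈ 0.61 relative to the surface.
amp_1m = A_SURF * np.exp(-1.0 / D)
print(round(amp_1m, 2))
```

The probes' 10 cm depth increments down to one metre sample exactly this damping and phase-lag structure, which is what a machine learning model fitted to the measurements should reproduce.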
Carbon captured via photosynthesis by vegetation is known as gross primary production (GPP). GPP is one of the main processes driving climate regulation, as well as an important proxy for a range of ecosystem services, including food, fibre and fuel production. It is routinely estimated at global scales using different operational algorithms combining remotely sensed data from medium spatial resolution sensors with ancillary meteorological information. However, there is an urgent need for operational global-scale GPP products at finer (30 m) spatial resolution to better resolve plant-community-scale dynamics. High spatial resolution satellite information requires consistent mosaics and long time series, which are often plagued by data record gaps due to cloud contamination, radiometric differences across sensors, scene overlaps, and inherent sensor noise. To overcome these constraints, we have fused spectral data from Landsat and MODIS using the HIghly Scalable Temporal Adaptive Reflectance Fusion Model (HISTARFM) algorithm: the method produces monthly gap-free high resolution (30 m) surface reflectance data at continental scales with associated well-calibrated data uncertainties. This allows us to carry out an uncertainty analysis considering both aleatoric uncertainty (data error) and epistemic uncertainty (model error) jointly. Combining monthly high resolution data with daily meteorological information and in-situ eddy covariance GPP estimates enables us to create accurate and continuous high spatial resolution GPP estimates and their corresponding uncertainties over large areas (Europe, the contiguous US, and the Amazon basin) using both empirical and machine learning approaches. The processing pipeline is implemented in Google Earth Engine to produce high resolution, long time series of continuous GPP estimates across very broad spatiotemporal scales.
The methodology enables more precise carbon studies and understanding of land-atmosphere interactions, as well as the possibility of deriving other carbon, heat and energy fluxes at an unprecedented spatio-temporal resolution.
Passive microwave vegetation optical depth (VOD) has been increasingly used for global vegetation monitoring in the last decade. It has, for example, been used to monitor global changes in phenology, vegetation health, vegetation water content/iso-hydricity, and biomass in time and space.
Compared to optical satellite vegetation data, VOD has a higher temporal frequency. The higher revisit frequency of the wide-swath satellite sensors, their independence of solar illumination and their limited sensitivity to cloud cover can strongly increase the data coverage in some areas, although at the cost of higher retrieval errors and lower spatial resolution.
Therefore, VOD has recently been used as a proxy for optical-based leaf area index (LAI) in regional data assimilation studies using land surface models (LSMs). These studies showed that VOD assimilation can improve both carbon-related and water-related land surface variables, such as gross primary production (GPP), evapotranspiration (ET), and root zone soil moisture.
Current studies in the literature only consider the effect of biomass or LAI on VOD. However, it is well known that VOD is mainly sensitive to absolute vegetation moisture content. In recent years, the effect of relative vegetation moisture content variations on VOD has received increasing attention.
We therefore assimilated VOD into the Noah-MP LSM using a novel approach that takes not only dry biomass variations, but also vegetation moisture content variations into account. This is accomplished with an empirical model of VOD as a function of dynamically simulated LAI, soil moisture, and vapor pressure deficit as an observation operator.
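A minimal sketch of fitting such an empirical observation operator by least squares is shown below. The linear functional form and all coefficients are illustrative assumptions, not the operator actually used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training sample of model states and matching VOD observations.
n = 200
lai = rng.uniform(0.5, 5.0, n)   # leaf area index
sm = rng.uniform(0.05, 0.45, n)  # surface soil moisture (m3/m3)
vpd = rng.uniform(0.2, 3.0, n)   # vapor pressure deficit (kPa)
vod = 0.15 * lai + 0.8 * sm - 0.05 * vpd + 0.1 + rng.normal(scale=0.02, size=n)

# Fit the operator h(LAI, SM, VPD) -> VOD by least squares.
H = np.column_stack([lai, sm, vpd, np.ones(n)])
beta, *_ = np.linalg.lstsq(H, vod, rcond=None)

# In assimilation, the fitted operator maps model states to observation space.
vod_pred = H @ beta
rmse = np.sqrt(np.mean((vod_pred - vod) ** 2))
print(rmse < 0.03)
```

The point of the design is that VOD responds not only to LAI (dry biomass) but also to soil moisture and VPD as proxies for vegetation water status, so the operator stays sensitive to moisture-driven VOD variations that an LAI-only operator would miss.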
We evaluate this novel approach by assimilating X-band VOD from the VOD Climate Archive into the Noah-MP LSM over Europe in the years 2002 to 2010 at a 0.25° model resolution. The results are compared to an assimilation of X-band VOD using an approach from the existing literature, and to an assimilation of optical LAI from the Copernicus Global Land Service (CGLS), focusing on the improvements made in the representation of vegetation-related state variables and fluxes in the resulting dataset, especially GPP and ET.
National inventories of anthropogenic greenhouse gas emissions and removals are annual at best, uncertain, and miss essential components of the full national budgets. The assessment of the global CO2 budget by the Global Carbon Project is annual for the previous year (Friedlingstein et al. 2021) and only provides national details for fossil emissions. The global CH4 budget was analyzed at four-year intervals and extends until 2017 (Saunois et al. 2020). The first N2O budget was produced last year and extends until 2018. In the wake of the COVID pandemic, emissions are now rebounding. Yet, green stimulus packages and enhanced pledges should deliver significant emissions reductions and enhanced carbon storage in some regions. Therefore, emissions and sinks of greenhouse gases are expected to change rapidly in the coming years, with contrasting trends between countries. To effectively monitor the fulfillment of emission reduction pledges in each country, more frequent observation-based assessments of national greenhouse gas budgets are needed to support national inventories. In addition to detailed coverage of managed lands, which are surveyed by inventories, complementary knowledge of natural fluxes over unmanaged lands and the oceans is also required to unambiguously reconcile the foreseen reductions of anthropogenic emissions with the observed growth rates of greenhouse gases in the atmosphere, and to assess the risk of missing climate targets, e.g. if natural sinks weaken in the future. I show in this presentation that existing systematic observations in the atmosphere and over the ocean and land surfaces can be integrated into near-real-time, policy-relevant greenhouse gas budgets to support the UN enhanced transparency framework of the Paris Agreement. A practical roadmap will be provided as well.
Increasing surface temperatures in the northern high latitudes due to climate change cause significant changes in the cryosphere. These changes are connected to changes in the biosphere, e.g. in the carbon uptake and release by vegetation. It was found that a trend towards earlier snow melt increased the gross primary production of boreal forest in spring during the last decades (Pulliainen et al. 2017). The current knowledge about these interactions is insufficient, and uncertainties are high in model predictions of how the carbon cycle and climate feedbacks will change in the northern latitudes in a changing climate. The project CryoBioLinks will investigate the relationship between cryosphere variables and the carbon uptake of vegetation using in situ and satellite observations. It will enhance and develop satellite proxies describing key variables of the vegetation carbon uptake in northern high latitudes. ESA CCI snow cover, the SMOS soil freeze and thaw product (Rautiainen et al. 2016) and Sentinel-1 SAR will be utilized to provide information on the timing of snow melt and of soil thaw and freeze in spring and autumn. It has been shown that the timing of snow melt can be utilized as a proxy for the start of the photosynthetic activity of boreal coniferous forest (Böttcher et al. 2014). Here, we will investigate the suitability of satellite-derived soil thaw in spring to inform on the start of the carbon uptake period. Due to the decrease of winter snow cover, especially in the southern boreal zone, soil thaw might become more relevant than the timing of snow melt for the beginning of photosynthesis in coniferous forest in the future. Thus, integrating information about snow melt and soil freeze may improve the robustness of current proxies for the start of the vegetation active period. The investigation will be carried out for selected sites in the boreal zone in Finland and Canada.
Eddy covariance data will be utilized to determine the seasonal cycle of photosynthesis and the seasonal and annual integrals of gross primary production. The relationship between the cryosphere variables and the seasonal cycle of photosynthesis as well as seasonal and annual gross primary production will be analysed at the site level. The presentation will give an overview of the project and show first results of the site-level investigations for CO2 flux measurement sites in the boreal zone.
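One common way to derive the start of the carbon uptake period from EC-based GPP is a threshold-and-persistence rule. The sketch below uses an illustrative 15% of the seasonal maximum with a 5-day persistence criterion and a toy GPP series; these choices are assumptions in the spirit of Böttcher et al. (2014), not the project's actual definition.

```python
import numpy as np

# Toy daily GPP series (g C m-2 d-1): a clipped sinusoid peaking in summer.
days = np.arange(1, 366)
gpp = np.clip(8.0 * np.sin((days - 80) / 365.0 * 2 * np.pi), 0, None)

thresh = 0.15 * gpp.max()  # illustrative 15%-of-maximum threshold
above = gpp > thresh

def start_of_uptake(above, persist=5):
    """First day of year starting a run of `persist` consecutive days above
    the threshold; None if no such run exists."""
    for i in range(len(above) - persist + 1):
        if above[i:i + persist].all():
            return i + 1  # 1-based day of year
    return None

print(start_of_uptake(above))  # → 89
```

The resulting day of year can then be compared against satellite-derived snow-melt and soil-thaw dates to test which proxy best predicts the start of carbon uptake at each site.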
References:
Böttcher, K., Aurela, M., Kervinen, M., Markkanen, T., Mattila, O.-P., Kolari, P., Metsämäki, S., Aalto, T., Arslan, A.N., & Pulliainen, J. (2014). MODIS time-series-derived indicators for the beginning of the growing season in boreal coniferous forest — A comparison with CO2 flux measurements and phenological observations in Finland. Remote Sensing of Environment, 140, 625-638.
Pulliainen, J., Aurela, M., Laurila, T., Aalto, T., Takala, M., Salminen, M., Kulmala, M., Barr, A., Heimann, M., Lindroth, A., Laaksonen, A., Derksen, C., Mäkelä, A., Markkanen, T., Lemmetyinen, J., Susiluoto, J., Dengel, S., Mammarella, I., Tuovinen, J.-P., & Vesala, T. (2017). Early snowmelt significantly enhances boreal springtime carbon uptake. Proceedings of the National Academy of Sciences of the United States of America, 114, 11081-11086.
Rautiainen, K., Parkkinen, T., Lemmetyinen, J., Schwank, M., Wiesmann, A., Ikonen, J., Derksen, C., Davydov, S., Davydov, A., Boike, J., Langer, M., Drusch, M.T., & Pulliainen, J. (2016). SMOS prototype algorithm for detecting autumn soil freezing. Remote Sensing of Environment, 180, 346-360.
Plant primary production, defined as photosynthetic fixation of atmospheric CO2, plays a crucial role in the Earth's carbon fluxes. From local to global scales, e.g. at vegetation stands and landscapes, photosynthesis is referred to as gross primary productivity (GPP), often estimated from net CO2 exchange measurements using the eddy-covariance (EC) technique. EC measurements are the most established way of assessing GPP; however, they involve assumptions and an estimation step, since their primary product is net ecosystem production (GPP reduced by respiration). To accurately monitor and predict the Earth's carbon fluxes, a precise characterization of plant photosynthetic efficiency is essential. However, photosynthesis is a very dynamic process that responds to changes in the environment in various ways on different spatial and temporal scales, from seconds to seasons, and small changes in photosynthetic efficiency can have a large impact on the global carbon cycle (Rascher & Nedbal, 2006; Schurr et al., 2006; Alonso et al., 2017).
As an alternative to EC measurements, remote sensing (RS) approaches, such as reflectance-based vegetation indices (VIs), have been used to study photosynthesis through the assessment of light-use efficiency (LUE) and gross CO2 uptake. The success of reflectance-based vegetation indices often depends on the exact context, including spatial and temporal scales (Gamon et al., 2015, 2019). Established VIs such as the normalized difference vegetation index (NDVI) and the enhanced vegetation index (EVI) generally track large-scale and seasonal variations in GPP, which are related to the greenness of the vegetation. However, these correlations may break down under conditions where the functioning of photosynthesis is decoupled from pure canopy greenness, such as under stress or in evergreen forests during winter. The more recently developed chlorophyll/carotenoid index (CCI), proposed as an indicator of changing chlorophyll and carotenoid pigment ratios, seems to track seasonal changes in photosynthetic activity (Gamon et al. 2016). Furthermore, the near-infrared reflectance of terrestrial vegetation index (NIRvref) was proposed as a suitable proxy for global GPP estimates based on MODIS reflectance data (Badgley et al. 2017). NIRvrad (using NDVI times radiance at 800 nm instead of reflectance), used in subsequent studies, was shown to maintain the NIRvrad-GPP relationship under drought conditions (Badgley et al., 2019).
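The indices discussed above follow simple band arithmetic; the band values in the sketch below are hypothetical reflectances and an arbitrary-unit NIR radiance, used only to illustrate the definitions.

```python
import numpy as np

# Hypothetical surface reflectances (red, NIR, blue) and at-sensor NIR radiance.
red, nir, blue = 0.05, 0.45, 0.03
nir_radiance = 120.0  # arbitrary units

ndvi = (nir - red) / (nir + red)
evi = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)
nirv_ref = ndvi * nir           # NIRv from reflectance (Badgley et al., 2017)
nirv_rad = ndvi * nir_radiance  # radiance-based variant used in later studies

print(round(ndvi, 3), round(nirv_ref, 3))
```

Because NIRvrad multiplies NDVI by a radiance rather than a reflectance, it inherits the diurnal variation of incoming radiation, which is exactly why it can follow sub-daily GPP dynamics that purely reflectance-based VIs cannot.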
Yet another option to track GPP has been investigated intensively in the last decades: the red and far-red fluorescence signals (SIF687 and SIF760), closely related to the efficiency with which light energy is used in the first steps of photosynthesis (the so-called 'light reactions'). They have been proposed to offer potential improvements over reflectance-based approaches (Ač et al., 2015; Campbell et al., 2019), which is part of the rationale for the upcoming FLEX satellite mission, Earth Explorer 8 (Drusch et al., 2017; Mohammed et al., 2019).
In this study we investigated data from a winter wheat field located in the western part of Germany. The study site, mainly dominated by agricultural fields, is intensively monitored as part of TERENO (https://www.tereno.net) and is a class 1 site within the European ICOS infrastructure (www.icos-cp.eu). The field was equipped with an EC station and a D-FloX device providing meteorological data and fluxes as well as hyperspectral reflectance and radiance data, respectively. The fluorescence-based metrics used in this study are SIF760, SIF687, and SIFTOT (derived from the integral under the emission curve), retrieved by the spectral fitting method. These SIF products are furthermore normalized (SIFnorm) by the incident radiation between 400 and 700 nm (PAR) to approximate SIF yield. We focus on two measurement campaigns in the 2018 growing season: i) the elongation period of winter wheat from May 9 to 27, when the canopy is green and closed, plants still elongate and fruit set occurs; ii) a period towards the end of the growing season (June 29 to July 1), when the canopy becomes senescent and visibly turns from green to brown.
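The derivation of the SIF metrics can be sketched as follows; the two-peak emission spectrum, PAR value and rectangle-rule integration below are illustrative stand-ins, not actual FloX retrievals from the spectral fitting method.

```python
import numpy as np

# Hypothetical retrieved SIF emission spectrum (mW m-2 sr-1 nm-1), modeled
# as two Gaussian peaks near 687 nm (red) and 740 nm (far-red).
wl = np.linspace(650, 800, 151)  # 1 nm sampling
sif = 1.2 * np.exp(-0.5 * ((wl - 687) / 10) ** 2) + \
      1.8 * np.exp(-0.5 * ((wl - 740) / 18) ** 2)

i687 = int(np.argmin(np.abs(wl - 687)))
i760 = int(np.argmin(np.abs(wl - 760)))
sif_687, sif_760 = sif[i687], sif[i760]
sif_tot = np.sum(sif) * (wl[1] - wl[0])  # rectangle-rule integral under the curve

par = 1500.0            # µmol m-2 s-1, incident 400-700 nm radiation (assumed)
sif_norm = sif_tot / par  # simple PAR normalization to approximate SIF yield

print(sif_687 > 0 and sif_760 > 0 and sif_tot > 0)
```

Normalizing by PAR removes the first-order dependence of SIF on incoming light, so SIFnorm tracks changes in apparent fluorescence yield rather than illumination.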
With this study, we show that reflectance-based VIs were useful to track greenness on larger temporal scales but do not depict changes in photosynthesis on the sub-diurnal scale. NIRvrad, incorporating PAR, followed diurnal GPP dynamics better than reflectance-based VIs. Both metrics, SIF and NIRvrad, represent the diurnal dynamics of GPP in the closed green canopy better than reflectance-based measures. We found that in the phase of ear emergence until mid-May, hourly GPP values increase from 30 to 50 µmol·m-2·s-1 and then drop again to about 30 µmol·m-2·s-1, with fluctuations on diurnal and sub-diurnal scales. This trend is not followed explicitly by any of the calculated RS parameters. Reflectance-based VIs remain stable without meaningful changes at the daily and sub-daily scale during this period. NIRvrad and the SIF products show variations on diurnal and sub-diurnal scales, with more fluctuations than GPP. Furthermore, we found SIFnorm to show a tendency to decrease during the whole period. On May 9, a clear-sky day at the beginning of this period, GPP shows a diurnal course with a peak around noon, followed most closely by NIRvrad and SIFTOT. In the second investigated period, towards the end of the growing season, GPP decreases continuously from day to day. The reflectance-based measures follow senescence as a larger seasonal pattern with a steady decrease. NIRvrad shows a slight decrease during this period and still varies on (sub-)diurnal scales. The SIF760 and SIF687 signals are already very low (>0.4 mW·m-2·sr-1·nm-1). On June 29, a clear-sky day, GPP decreases over the day, while reflectance-based VIs show slight diurnal changes but no decrease like GPP. NIRvrad still shows a diurnal course, while SIF760 and SIF687 are below 0.5 mW·m-2·sr-1·nm-1.
Our results demonstrate that, for the investigated wheat field, each of these metrics offers different and complementary information. NIRvrad, incorporating PAR, generally followed diurnal GPP dynamics better than reflectance-based VIs, as it tracks the absorbed photosynthetically active radiation more closely and thus better represents actual photosynthetic efficiency. Furthermore, it might be able to sense canopy-structural effects; however, untangling canopy structure and its effect on the radiation is complex. Although SIF is highly sensitive to physiological-structural interactions, it is the most direct measure of photosynthetic activity and thus a valuable indicator of dynamic changes in plant physiology that cannot be replaced by NIRvrad. Although SIF measured at canopy scale is partly confounded by plant structure, it additionally provides information on plant physiology and thus helps to understand seasonal GPP patterns. Based on our findings, we suggest the joint use of optical RS parameters, namely reflectance- and radiance-based VIs and SIF, to improve current estimates of GPP from the sub-diurnal to the seasonal scale. In combination, they can likely provide better estimates of actual vegetation function at the ecosystem scale, as this yields additional insights compared to their use alone. Further work is in progress to include them in a full light-use efficiency (LUE) model to describe the observed GPP as a proxy for carbon fixation and to improve forward models of GPP.
Ač, Alexander, et al. "Meta-analysis assessing potential of steady-state chlorophyll fluorescence for remote sensing detection of plant water, temperature and nitrogen stress." Remote sensing of environment 168 (2015): 420-436.
Alonso, Luis, et al. "Diurnal cycle relationships between passive fluorescence, PRI and NPQ of vegetation in a controlled stress experiment." Remote Sensing 9.8 (2017): 770.
Campbell, Petya KE, et al. "Diurnal and seasonal variations in chlorophyll fluorescence associated with photosynthesis at leaf and canopy scales." Remote Sensing 11.5 (2019): 488.
Badgley, Grayson, Christopher B. Field, and Joseph A. Berry. "Canopy near-infrared reflectance and terrestrial photosynthesis." Science advances 3.3 (2017): e1602244.
Badgley, Grayson, et al. "Terrestrial gross primary production: Using NIRV to scale from site to globe." Global change biology 25.11 (2019): 3731-3740.
Drusch, Matthias, et al. "The fluorescence explorer mission concept—ESA's earth explorer 8." IEEE Transactions on Geoscience and Remote Sensing 55.3 (2017): 1273-1284.
Gamon, J. A. "Optical sampling of the flux tower footprint." Biogeosciences Discussions 12.6 (2015).
Gamon, John A., et al. "A remotely sensed pigment index reveals photosynthetic phenology in evergreen conifers." Proceedings of the National Academy of Sciences 113.46 (2016): 13087-13092.
Gamon, J. A., et al. "Assessing vegetation function with imaging spectroscopy." Surveys in Geophysics 40.3 (2019): 489-513.
Mohammed, Gina H., et al. "Remote sensing of solar-induced chlorophyll fluorescence (SIF) in vegetation: 50 years of progress." Remote sensing of environment 231 (2019): 111177.
Rascher, Uwe, and Ladislav Nedbal. "Dynamics of photosynthesis in fluctuating light." Current opinion in plant biology 9.6 (2006): 671-678.
Schurr, U., A. Walter, and U. Rascher. "Functional dynamics of plant growth and photosynthesis–from steady‐state to dynamics–from homogeneity to heterogeneity." Plant, Cell & Environment 29.3 (2006): 340-352.
Fires affect ecosystems, global vegetation distribution, atmospheric composition, and human-built infrastructure. The climatic, socio-economic, and environmental factors that affect global fire activity are not well understood, and their contribution is therefore parameterized in global process-based vegetation models. The climatic and ecological characteristics of fire have been successfully identified using data-driven modeling approaches such as machine learning; socio-economic factors, however, have not been explored in detail at the global scale. Humans alter fire activity by different means, e.g., by acting as a source of ignition, through fire suppression, and by changing fuel availability and structure. These factors cannot easily be integrated into process-based vegetation models. Data-driven models can thus characterize these factors in time and space, enabling their better representation in process-based models. We created an ensemble of random forest models to test the importance of several socio-economic variables in predicting fire ignition occurrence on a global scale, starting with a baseline model characterizing climate and vegetation and then training subsequent models that each add a single socio-economic variable (e.g., population density, gross domestic product, and distance to population centers).
Our models successfully capture the seasonality and spatial distribution of fire hotspots. High ignition occurrence across Sub-Saharan Africa positively influences the models' ability to predict fires in regions with seasonal ignition occurrence. The models, in general, reduce bias in ignition predictions compared to observations when a socio-economic variable known to influence fire ignitions is added to the base model. Our models also demonstrate the importance of specific variables in reducing bias in annual ignition sums between the baseline model predictions and observations, e.g., over Sierra Leone and most of Kenya, population and livestock density reduce bias in annual ignition sums. We also show the power of our models to reproduce fire occurrence seasonality, even over regions where observations of fire ignitions are rare. Finally, we discuss how using data-driven modeling and multiple socio-economic variables can help inform the development of process-based vegetation models.
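The effect of adding a single socio-economic predictor to a baseline model can be sketched as follows; the synthetic grid-cell data and the linear stand-in replace the actual random forest ensemble, and all variable roles and coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic grid cells: annual ignitions depend on climate, vegetation and
# population density (the socio-economic variable withheld from the baseline).
n = 1000
climate = rng.normal(size=n)
vegetation = rng.normal(size=n)
pop_density = rng.normal(size=n)
ignitions = 5 + 1.5 * climate + 1.0 * vegetation + 2.0 * pop_density \
            + rng.normal(scale=0.5, size=n)

def fit_predict(cols):
    """Least-squares fit and prediction from the given predictor columns."""
    H = np.column_stack(cols + [np.ones(n)])
    beta, *_ = np.linalg.lstsq(H, ignitions, rcond=None)
    return H @ beta

bias_base = np.mean(np.abs(fit_predict([climate, vegetation]) - ignitions))
bias_socio = np.mean(np.abs(fit_predict([climate, vegetation, pop_density])
                            - ignitions))
print(bias_socio < bias_base)  # adding the socio-economic predictor helps
```

Comparing the prediction error of the baseline and the augmented model cell by cell is the same comparison as the bias reduction in annual ignition sums described above.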
The CO2 sink associated with gross primary production (GPP) fluxes during photosynthesis is an important component of the global carbon cycle that is influenced by a variety of factors on a wide range of time scales, from hourly, daily and seasonal to annual. In this study, we present an assessment of the impact of changing the vegetation state (leaf area index, LAI), climate conditions (e.g. radiation, temperature, humidity, soil moisture) and land use/land cover (LULC) on GPP using Earth observation (EO) datasets and a new photosynthesis model recently implemented in the "ecland" land surface model. Ecland is part of the Integrated Forecasting System (IFS) at the European Centre for Medium-Range Weather Forecasts (ECMWF), and its photosynthesis model is used operationally in the Copernicus Atmosphere Monitoring Service (CAMS) CO2 analyses and forecasts. The new photosynthesis model is based on the Farquhar, von Caemmerer and Berry model (for C3 plants), which will enable the simulation of solar-induced fluorescence (SIF) by the vegetation (through a specific observation operator) for the assimilation of satellite-based SIF data. Compared to the current operational A-gs photosynthesis model, it produces an improved seasonal cycle of GPP with respect to FLUXNET eddy covariance observations. Besides the improved representation of the underlying photosynthesis processes, the GPP from ecland relies on accurate LULC and LAI maps to upscale the fluxes from the leaf to the vegetation canopy at global scale. This study explores the sensitivity of the GPP to new satellite-based LAI and LULC datasets with an ensemble of simulations. The simulations are performed at 25 km and 9 km resolutions with annually varying and climatological LAI from the Copernicus Global Land Service (CGLS), fixed and annually varying LULC maps derived from the ESA-CCI land cover products, as well as fixed and annually varying climate forcing from the ERA5 reanalysis.
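The Farquhar-von Caemmerer-Berry C3 scheme underlying the new photosynthesis model can be sketched minimally as below: net assimilation is the minimum of the Rubisco-limited and RuBP-regeneration-limited rates minus dark respiration. All parameter values are illustrative textbook magnitudes at 25 °C, not the ecland configuration.

```python
# Illustrative leaf-level parameters (assumed, roughly 25 °C values).
VCMAX = 60.0             # µmol m-2 s-1, maximum carboxylation rate
J = 120.0                # µmol m-2 s-1, electron transport rate
KC, KO = 404.0, 278000.0 # µbar, Michaelis constants for CO2 and O2
O2 = 210000.0            # µbar, oxygen partial pressure
GAMMA = 42.75            # µbar, CO2 compensation point without respiration
RD = 1.0                 # µmol m-2 s-1, dark respiration

def net_assimilation(ci):
    """Net CO2 assimilation as a function of intercellular CO2 (µbar)."""
    wc = VCMAX * (ci - GAMMA) / (ci + KC * (1 + O2 / KO))  # Rubisco-limited
    wj = J * (ci - GAMMA) / (4 * ci + 8 * GAMMA)           # RuBP-regen-limited
    return min(wc, wj) - RD

a_low = net_assimilation(200.0)   # CO2-limited regime
a_high = net_assimilation(600.0)  # electron-transport-limited regime
print(a_low < a_high)
```

The transition between the two limiting rates as CO2, light and temperature vary is what produces the characteristic response curves that the model upscales from leaf to canopy via LAI.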
The impact of the LAI and LULC satellite-based datasets on GPP is evaluated using TROPOSIF data from TROPOMI on board Sentinel-5P. TROPOMI offers an unprecedented resolution and spatial coverage, allowing a detailed assessment of the GPP spatio-temporal variability. Specific emphasis is placed on GPP hotspots such as croplands and forests to assess the strengths and limitations of ecland in preparation for the future assimilation of SIF observations in the global CO2 Monitoring and Verification system currently being developed within the CoCO2 project and CAMS.
Temporally and spatially irregular observations of forest variables through forest inventories imply that knowledge of the terrestrial carbon (C) cycle is limited. Satellite remote sensing data can provide supplementary observations but cannot achieve the same level of accuracy, because they do not provide a direct quantitative measure of the organic mass stored in vegetation. In contrast to approaches that attempt to estimate forest variables with remote sensing data acquired at high resolution, recent activities have started exploring the contribution of coarse resolution observations, which were originally designed for other types of monitoring (e.g., wind speed and direction, soil moisture, ocean salinity, sea ice concentration). Data acquired by missions operating a coarse resolution sensor are appealing in the context of assessing the terrestrial carbon cycle because of global and repeated coverage of all terrestrial surfaces since the late 1970s. In addition, such missions are guaranteed in future decades, which will eventually lead to the longest data record of observations of the Earth from space.
The record of backscatter observations collected by the European Remote Sensing Wind Scatterometer (ERS WindScat) and the MetOp Advanced Scatterometer (ASCAT), both operating at C-band (wavelength of 6 cm), is one of the longest available. An almost unbroken time series of backscatter observations at 0.25° spatial resolution exists since 1991, and data continuity is guaranteed for the next decades. As the only active microwave dataset available for the 1990s, the scatterometer time series has unique value for tracking carbon dynamics in regions with poor coverage from equivalent optical sensors due to persistent cloud cover (tropics) or unfavourable solar illumination (boreal zone).
Despite the well-known weak sensitivity of C-band backscatter to above-ground biomass (AGB), reliable wall-to-wall estimates of AGB have been derived from high-resolution SAR observations by exploiting multiple observations acquired within a relatively short time interval (Santoro et al., 2011; Santoro et al., 2015). This approach was recently extended to C-band scatterometer data (Santoro et al., submitted) and yielded global estimates of AGB comparable to averages obtained from plot inventory data or LiDAR-based maps of AGB. The uncertainty of our AGB estimates was between 30% and 40% of the estimated value at the pixel level, a relevant aspect in the context of accurately estimating carbon stocks and their changes. In our presentation, we will introduce the AGB retrieval method and discuss the strengths and limitations of the AGB estimates.
Starting in 1992, we have now generated almost 30 years of AGB estimates at a spatial resolution of 0.25°. The temporal patterns of AGB match most patterns of canopy cover described in the MEaSUREs Vegetation Continuous Fields (VCF) dataset (Song et al., 2018). Our estimates indicate a steady increase of AGB in most boreal and temperate forests of the northern hemisphere, except for regions characterized by disturbances, where severe losses in the 1990s have only recently been compensated for. Severe loss of biomass following massive deforestation was identified throughout the wet tropics during the 1990s and the early 2000s. Since the late 2000s, AGB appears to have recovered, but without further increments in the most recent years. Mostly due to the strong increase of biomass in temperate regions, the global AGB density is estimated to have increased by 9%, from 71.8 Mg ha-1 in the 1990s to 78.1 Mg ha-1 in the 2010s. In terms of total stocks, forest AGB decreased slightly from 566 Pg in the 1990s to 560 Pg in the 2000s, then increased to 593 Pg in the 2010s, an almost 5% net increase over the last three decades. These results will be reviewed in our presentation, and we will give some first insights on the evolution of the terrestrial biomass pool since the start of the COVID-19 pandemic, based on the most recent data acquired by ASCAT in 2020 and 2021.
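The quoted relative changes follow directly from the reported densities and stocks; a quick arithmetic check (all numbers copied from the text above):

```python
# Sanity check of the reported global AGB changes (values from the abstract).
density_1990s = 71.8   # Mg ha-1, mean global AGB density in the 1990s
density_2010s = 78.1   # Mg ha-1, in the 2010s
stock_1990s = 566.0    # Pg, forest AGB stock in the 1990s
stock_2010s = 593.0    # Pg, in the 2010s

density_increase = (density_2010s / density_1990s - 1.0) * 100.0  # ~9%
stock_increase = (stock_2010s / stock_1990s - 1.0) * 100.0        # ~5%
```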
References
Santoro, M., Beer, C., Cartus, O., Schmullius, C., Shvidenko, A., McCallum, I., Wegmüller, U., Wiesmann, A., 2011. Retrieval of growing stock volume in boreal forest using hyper-temporal series of Envisat ASAR ScanSAR backscatter measurements. Remote Sensing of Environment 115, 490–507. https://doi.org/10.1016/j.rse.2010.09.018
Santoro, M., Beaudoin, A., Beer, C., Cartus, O., Fransson, J.E.S., Hall, R.J., Pathe, C., Schmullius, C., Schepaschenko, D., Shvidenko, A., Thurner, M., Wegmüller, U., 2015. Forest growing stock volume of the northern hemisphere: Spatially explicit estimates for 2010 derived from Envisat ASAR. Remote Sensing of Environment 168, 316–334. https://doi.org/10.1016/j.rse.2015.07.005
Santoro, M., Cartus, O., Wegmüller, U., Besnard, S., Carvalhais, N., Araza, A., Herold, M., Liang, J., Cavlovic, J., Engdahl, M.E., submitted. Estimation of above-ground biomass from spaceborne C-band scatterometer observations and LiDAR metrics of vegetation structure. Remote Sensing of Environment.
Song, X.-P., Hansen, M.C., Stehman, S.V., Potapov, P.V., Tyukavina, A., Vermote, E.F., Townshend, J.R., 2018. Global land change from 1982 to 2016. Nature 560, 639–643. https://doi.org/10.1038/s41586-018-0411-9
Vegetation chlorophyll fluorescence retrieval approaches from tower, airborne and satellite platforms are becoming mature, and the signal is now commonly used to improve our understanding of the terrestrial carbon cycle. Solar-induced vegetation fluorescence, emitted by Chlorophyll a molecules as a small radiative flux in the 650-850 nm range, hence provides new quantitative information for understanding vegetation status from the leaf to the landscape and global scales. The final goal is to use the canopy-leaving fluorescence signal as an unbiased estimate of the photosynthetic activity of the underlying vegetation. However, correctly interpreting the canopy-leaving chlorophyll fluorescence signal, which is small compared to the reflected solar radiation, is not straightforward.
As part of the FLEX L1B-to-L2 Algorithm Retrieval and Product Development Study, retrieval strategies for photosynthesis-related products are being developed based on the synergistic FLEX–FLORIS and Sentinel-3 OLCI spectral information. Current algorithm developments in the context of the mission are exploring molecular insights into light-harvesting dynamics, imposing direct constraints on the carbon uptake, especially when excess energy arrives at the vegetation. To establish the link between vegetation fluorescence and the core photosynthetic light reaction dynamics, further advanced signal processing is proposed which allows a quantitative exploitation of the obtained fluorescence signal. Hereby, the full spectral information in the 500-800 nm region is used as input for the processing of the top-of-canopy fluorescence emission, following a bottom-up, pigment molecular-level approach.
One of the essential products proposed is the fluorescence quantum efficiency, the ratio between the emitted fluorescence quanta and the absorbed quanta that trigger the emission. The latter refers to the radiation absorbed by the light-harvesting pigments, with Chlorophyll a molecules as the dominant photoreceptors of the incoming solar radiance. Disentangling the differential absorption of the overlapping pigments is demonstrated by spectrally fitting the FLORIS-HR 500-780 nm reflectance product using individual pigment absorption coefficients. The spectrally resolved fAPAR contribution of Chlorophyll a is retrieved, considering the within-leaf and within-canopy multiple absorption and scattering. The canopy-leaving vegetation fluorescence is further consistently corrected for re-absorption and scattering, whereupon the ratio of the corrected emission over the retrieved absorption is calculated as the fluorescence quantum efficiency (FQE). FQE can be used as a first indicator of the photosynthetic efficiency of the vegetation surface and is indicative of the excitation pressure on the Chlorophyll molecules and, by assumption, the whole photosynthetic antenna system. Although the relationship is in practice more complex, due to the activation of non-photochemical quenching mechanisms which change the qualitative coupling between fluorescence and photosynthesis, the retrieval of FQE is the essential step towards quantifying more precisely the energy eventually used by the carbon reactions. Further, by using a bottom-up approach to characterize and fit the shape of the spectral fluorescence emission, additional information can be gained on the energy partitioning mechanisms in the light-harvesting reactions across the two photosystems, PSI and PSII.
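In essence, the FQE product reduces to a ratio of two spectrally integrated quantities. A minimal sketch of this final step, with purely synthetic placeholder spectra (the real inputs are the FLORIS-HR-derived Chlorophyll a absorption and the re-absorption/scattering-corrected emission):

```python
import numpy as np

# Hypothetical 1-nm spectral grids; real inputs would come from the FLORIS-HR
# absorption retrieval (500-780 nm) and the corrected fluorescence emission
# (650-850 nm), both expressed in quanta units.
wl_abs = np.arange(500, 781)               # 281 bands, absorption window
wl_em = np.arange(650, 851)                # 201 bands, emission window
absorbed = np.full(wl_abs.shape, 1.0e18)   # placeholder absorbed quanta per nm
emitted = np.full(wl_em.shape, 1.0e16)     # placeholder emitted quanta per nm

def fluorescence_quantum_efficiency(emitted, absorbed, d_lambda=1.0):
    """FQE = spectrally integrated emitted quanta / absorbed quanta."""
    return (emitted.sum() * d_lambda) / (absorbed.sum() * d_lambda)

fqe = fluorescence_quantum_efficiency(emitted, absorbed)
```

With these flat placeholder spectra the result is simply the band-count-weighted ratio of the two levels; real spectra would of course be strongly structured.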
With these advances in the interpretation of the vegetation fluorescence signal, both quantitatively and qualitatively, the actual light use through photosynthesis and vegetation growth with carbon assimilation will be better quantified by FLEX. Hence, by the retrieval of FQE, combined with additional information on the dynamic regulation of the energy pathways in the light reactions, promising opportunities are presented to improve our understanding of the vegetation dynamics in the global carbon cycle.
The Amazon’s forests are at risk from continuing deforestation and climate change, leading to increased vulnerability to forest degradation (Matricardi et al., 2020). These processes weaken the forests’ environmental services. Meanwhile, secondary forests regrowing after disturbance and agricultural abandonment have the potential to partially offset carbon losses (Heinrich et al., 2021). Understanding these opposing drivers of carbon dynamics is of great importance, as studies find that regions of the Amazon are already acting as a carbon source. At the same time, forest degradation is not accounted for in national commitments to reductions in greenhouse gas emissions (Silva Junior et al., 2021).
Contributing to the European Space Agency’s Regional Carbon Cycle Assessment and Processes – Phase 2 (ESA RECCAP2) project, we explore the use of remote sensing data for the monitoring of changes in the Amazon’s aboveground carbon (AGC) stocks. We used the L-Band Vegetation Optical Depth (L-VOD) from the soil moisture and ocean salinity (SMOS) satellite mission over the 2011-2019 time period as a valuable new asset to reveal the locations and extent of recent changes in AGC over the Amazon biome. The coarse resolution of L-VOD data (0.25°) allows limited attribution to processes occurring at finer scales. We address this by combining high resolution (30 m) landcover data mapping annual forest cover change and new degradation (Vancutsem et al., 2021) with static AGC maps. This allows us to model spatially specific gains and losses from deforestation, degradation and secondary forest regrowth and compare and consolidate these estimates with L-VOD inferred AGC change.
Initial results reveal that areas with significant decreasing AGC trends are five times greater than those showing an increase. The Amazon carbon stocks are declining with a ~2% reduction since 2012. L-VOD top-down and modelled bottom-up estimates agree on areas of greatest loss, though regional disagreements are evident for low biomass/agricultural areas or areas with small-scale disturbances. Deforestation accounts for the greatest carbon losses and is increasingly occurring in secondary forest areas. Losses incurred by forest degradation are estimated to be approximately 65% of those from deforestation. Further, L-VOD inferred changes over areas that are mostly intact old-growth forests reveal considerable inter-annual variability of AGC and reductions in the 2011-2019 time period over the South-Eastern Amazon.
Our findings point towards an overall weakening of the Amazon forest’s potential to mitigate climate change due to increasing deforestation. Therefore, recent pledges by Amazon countries including Brazil at COP26 to end and reverse deforestation by 2030 must be acted upon immediately to avoid its cascading effects, leading to degradation and further future carbon loss.
Heinrich, V. H. A., Dalagnol, R., Cassol, H. L. G., Rosan, T. M., Torres, C., Almeida, D., … Aragão, L. E. O. C. (2021). Large carbon sink potential of Amazonian Secondary Forests to mitigate climate change. Nature Communications, 12, 4–6. https://doi.org/10.1038/s41467-021-22050-1
Matricardi, E. A. T., Skole, D. L., Costa, O. B., Pedlowski, M. A., Samek, J. H., & Miguel, E. P. (2020). Long-term forest degradation surpasses deforestation in the Brazilian Amazon. Science, 369(6509), 1378–1382. https://doi.org/10.1126/SCIENCE.ABB3021
Silva Junior, C. H. L., Carvalho, N. S., Pessôa, A. C. M., Reis, J. B. C., Pontes-Lopes, A., Doblas, J., … Aragão, L. E. O. C. (2021). Amazonian forest degradation must be incorporated into the COP26 agenda. Nature Geoscience, 14(9), 634–635. https://doi.org/10.1038/s41561-021-00823-z
Vancutsem, C., Achard, F., Pekel, J.-F., Vieilledent, G., Carboni, S., Simonetti, D., … Nasi, R. (2021). Long-term (1990-2019) monitoring of forest cover changes in the humid tropics. Science Advances, 7(10). https://doi.org/10.1126/sciadv.abe1603
The availability and temporal dynamics of vegetation biomass, or living and dead fuel, is a main driver of the occurrence, spread, intensity and emissions of fires. Several studies have shown that fuel build-up in antecedent (wet) seasons can increase burned area in the following (dry) season. In order to estimate fire emissions, the amount and dynamics of fuel loads are commonly estimated using biogeochemical models. Alternatively, data-driven approaches that model fire dynamics using machine learning methods often make use of satellite time series of leaf area index (LAI), the fraction of absorbed photosynthetically active radiation (FAPAR), or vegetation optical depth (VOD) as proxies of the temporal dynamics in fuel availability. Although LAI or FAPAR time series provide information about the temporal dynamics of vegetation and hence fuels, they cannot be used directly to estimate fire emissions. Global or continental maps of fuel beds or maps of above-ground biomass, such as those from ESA’s Climate Change Initiative (CCI), provide direct estimates of fuel loads for different fuel types; however, they lack the temporal coverage needed to assess changes in fuel loads. Here we propose a novel data-driven approach to estimate the temporal dynamics of vegetation fuel loads by combining various Earth observation products with information from databases of ground observations.
Our approach combines the temporal information from LAI, FAPAR and VOD time series and from annual land cover maps with the time-invariant information from maps of above-ground biomass and large-scale fuel databases. Specifically, we use LAI and FAPAR from Sentinel-3 and Proba-V, VOD from the VODCA dataset and from SMOS, annual land cover maps from ESA CCI, maps of above-ground biomass (AGB) from ESA CCI, and information from the North America Wildland Fuel Database and the Biomass and Allometry Database.
The estimation of fuel loads is based on two different approaches. The first approach makes use of an empirical allometry model to estimate the fuel loads of different biomass compartments of trees and herbaceous vegetation, using total AGB and LAI as input. Based on allometric equations, the biomass of stems, branches, leaves, and total woody biomass is estimated. Thereby, LAI serves as a proxy for the temporal dynamics in leaf and herbaceous biomass. Long-term changes in total AGB are estimated based on regional non-linear regressions between the spatial patterns of AGB and tree cover, with maximum LAI and VOD as predictors. As an alternative, the use of novel products of AGB changes, such as those from BIOMASCAT, is explored. The allometric parameters are estimated from the Biomass and Allometry Database.
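The compartment-splitting step of the first approach can be sketched as follows. This is a deliberately simplified toy version: all coefficients are hypothetical placeholders, whereas in the study the allometric parameters are fitted to the Biomass and Allometry Database.

```python
import numpy as np

def partition_fuel_loads(agb, lai, leaf_mass_per_area=0.05, stem_fraction=0.7):
    """Toy allometric partitioning of total AGB (kg m-2) into fuel compartments.

    LAI (m2 m-2) drives the temporally varying leaf biomass via a hypothetical
    leaf mass per area (kg m-2); the remainder is split into stem and branch
    biomass with a hypothetical fixed stem fraction.
    """
    leaf = leaf_mass_per_area * lai        # foliage fuel, tracks the LAI cycle
    woody = np.maximum(agb - leaf, 0.0)    # total woody biomass
    stem = stem_fraction * woody
    branch = woody - stem
    return {"leaf": leaf, "stem": stem, "branch": branch, "woody": woody}

pools = partition_fuel_loads(agb=10.0, lai=4.0)  # e.g. 10 kg m-2 AGB at LAI 4
```

Because LAI varies seasonally while AGB changes slowly, such a split yields a temporally resolved leaf-fuel pool on top of a slowly evolving woody pool.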
The second approach makes use of machine learning models to transfer the measurements from the North America Wildland Fuel Database to other regions. Land cover, LAI and AGB from the Earth observation datasets are used as predictors for the fuel loads of trees, shrubs, grass, fine and coarse woody debris, and duff. Spatial cross-validation is used to train and evaluate random forest regression models and to provide uncertainty estimates of the fuel loads. The approaches are developed and tested in four study regions, in Brazil, southern Africa, central Asia, and northern Siberia, to cover a wide range of ecosystems. First results demonstrate the feasibility of estimating temporal changes in fuel loads by integrating the respective temporal and spatial information from various Earth observation datasets.
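The spatially blocked cross-validation of the second approach can be sketched with scikit-learn's grouped splitter. All data here are synthetic placeholders; in the study the predictors come from the EO datasets and the targets from the North America Wildland Fuel Database.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)

# Synthetic training table: one row per fuel plot, with stand-ins for the
# EO-derived predictors (land cover class, LAI, AGB) and a fuel-load target.
n = 400
X = np.column_stack([
    rng.integers(0, 5, n).astype(float),  # land cover class
    rng.uniform(0, 7, n),                 # LAI (m2 m-2)
    rng.uniform(0, 30, n),                # AGB (kg m-2)
])
y = 0.1 * X[:, 2] + 0.05 * X[:, 1] + rng.normal(0.0, 0.1, n)

# Spatial blocks (e.g. grid tiles or ecoregions) as CV groups, so that
# training and test plots never come from the same block.
groups = rng.integers(0, 8, n)

model = RandomForestRegressor(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=GroupKFold(n_splits=4),
                         groups=groups, scoring="r2")
```

Grouping by spatial block avoids the optimistic bias that random splits incur when nearby, spatially autocorrelated plots end up in both training and test sets.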
For this work, we acknowledge the European Space Agency for funding of the Sense4Fire (sense4fire.eu) project.
Nature-based carbon sequestration is one of the most straightforward ways to extract and store carbon dioxide from the atmosphere.
Urban forests hold the promise of optimized carbon storage and temperature reduction in cities. Remote sensing imagery can identify tree location and size, classify trees by species, and track tree health. Using multi- and hyperspectral overhead imagery, green vegetation can be separated from various land use types. Moreover, by further refining the models with texture and contextual information, trees can be spatially separated from bushes and grass-covered surfaces. While spectral-based tree identification can achieve an accuracy of 90%, deep learning models trained even on noisily labeled data can further improve tree identification.
Once trees are identified in two-dimensional remote sensing images, allometric models allow tree height and tree growth to be estimated from climate data, topography, and soil properties. The biomass of the trees is calculated per species using geometrical and phenological models. The carbon stored in trees can thus be quantified at the individual tree level. Furthermore, the models identify areas densely covered by trees and pinpoint bare land where further trees may be planted.
Exploiting land surface temperature maps from satellite thermal measurements of, e.g., the Sentinel or Landsat missions, urban heat islands can be mapped at city scale. Urban heat islands may vary with season and weather conditions; areas that are persistently warmer than the average city temperature background can be identified from time series of data. The correlation of local temperature, tree cover, and land perviousness helps to identify local climate zones, and may also refine and re-evaluate the definition of Local Climate Zones (LCZ). We employ the PAIRS geospatial information platform to demonstrate a scalable solution for tree delineation, carbon sequestration, and urban heat island identification for three global cities: Madrid, New York City, and Dallas, TX.
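The "persistently warmer than the city-wide average" criterion can be sketched on a synthetic land surface temperature stack; the scene values, thresholds and the injected warm neighbourhood below are all hypothetical placeholders, not the operational PAIRS workflow.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stack of land surface temperature scenes (time, y, x), e.g.
# 24 monthly LST composites over one city, in Kelvin.
lst = rng.normal(300.0, 0.5, size=(24, 50, 50))
lst[:, 10:20, 10:20] += 3.0   # an artificially persistent warm neighbourhood

# Per-scene anomalies relative to the city-wide mean remove the seasonal
# cycle; a pixel is flagged as a heat-island core if it is anomalously warm
# in more than 80% of the scenes.
anomaly = lst - lst.mean(axis=(1, 2), keepdims=True)
persistently_warm = (anomaly > 1.0).mean(axis=0) > 0.8
```

Subtracting the scene-wide mean before thresholding is what separates persistent spatial anomalies from city-wide seasonal and weather-driven variability.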
Natural and anthropogenic disturbances act as strong drivers of tree mortality, shaping the structure, composition, and biomass distribution of forests. Disturbance dynamics may change over time and vary in space, mainly depending on the climate regimes and land use and land cover change. Although well defined from a mechanistic perspective, different disturbances are currently not well characterized, and limited studies have formally quantified the link between frequency, intensity, and aggregation characterizing different disturbance regimes and biomass patterns and dynamics.
Here, we design a model-based experiment to investigate the links between disturbance regimes at the landscape scale and spatial features of biomass patterns. The effects on biomass of a wide range of disturbance regimes are simulated based on different values of μ (probability scale), α (clustering degree), and β (intensity slope), which respectively shape the extent, frequency, and intensity of disturbance events. A simple dynamic carbon cycle model is used to simulate 200 years of plant biomass dynamics in response to circa 2,000 different disturbance regimes, corresponding to different combinations of μ, α, and β. Each parameter combination yields a spatially explicit estimate of plant biomass, for which different synthesis statistics are computed (e.g. mean, median, standard deviation, quantiles, skewness). Using a multi-output regression approach, we link these synthesis statistics back to the three disturbance parameters to evaluate the confidence with which disturbance regimes can be inferred from spatial distributions of biomass alone.
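The inversion logic of this experiment can be sketched end to end with a toy simulator and a multi-output regressor. Everything below is a drastically simplified stand-in: the one-line biomass field replaces the 200-year dynamic carbon cycle model, and the statistics and regressor choices are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def simulate_biomass(mu, alpha, beta, shape=(32, 32)):
    """Toy stand-in for the dynamic model: a biomass field shaped by
    disturbance probability (mu), clustering (alpha) and intensity (beta)."""
    # Clustering: draw the disturbance mask on a coarser grid and upsample,
    # so higher alpha produces larger, more aggregated disturbed patches.
    block = 1 + int(round(alpha * 7))
    coarse = rng.random((shape[0] // block + 1, shape[1] // block + 1)) < mu
    mask = np.kron(coarse, np.ones((block, block)))[:shape[0], :shape[1]]
    removed = beta * rng.random(shape)        # fraction of biomass removed
    return 100.0 * (1.0 - mask * removed)     # undisturbed biomass = 100

# Build a training set: disturbance parameters -> biomass summary statistics.
params, stats = [], []
for _ in range(300):
    mu, alpha, beta = rng.random(3)
    b = simulate_biomass(mu, alpha, beta)
    stats.append([b.mean(), b.std(), np.median(b),
                  np.quantile(b, 0.1), np.quantile(b, 0.9),
                  np.abs(np.diff(b, axis=1)).mean()])  # crude texture feature
    params.append([mu, alpha, beta])

# Multi-output regression: infer (mu, alpha, beta) from the statistics alone.
X_tr, X_te, y_tr, y_te = train_test_split(
    np.array(stats), np.array(params), random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = model.score(X_te, y_te)  # uniform-average R2 over the three parameters
```

Even in this toy setup, the distribution statistics carry most of the information on extent and intensity, while the texture feature is what gives the regressor a handle on clustering.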
Our results show that all three parameters can be confidently recovered using a reasonable set of statistical features of the biomass spatial distribution. The Nash-Sutcliffe efficiency (NSE) for the prediction of the three disturbance regime parameters exceeds 0.95. A feature importance analysis reveals that the distribution statistics dominate the prediction of μ and β, while features quantifying texture have a stronger connection with α. With the support of biomass observations, such as the global biomass datasets from the ESA DUE GlobBiomass project, disturbance regimes at the landscape level can be retrieved under this simulation framework. Despite its current assumptions on primary productivity, autocorrelation, and similarity in post-disturbance dynamics, this study quantifies the association between biomass patterns and the underlying disturbance regimes. Given that current Earth observation datasets on biomass at high resolution have a very limited temporal range, if any, this approach could provide a unique perspective for deriving aspects of biomass dynamics from high-resolution imagery. Overall, a better understanding and quantification of disturbance regimes would improve our current understanding of controls and feedbacks at the biosphere-atmosphere interface and improve the representation of disturbance dynamics in current Earth system models.
Land-use and land-cover changes (LULCC) are a major contributor to anthropogenic emissions, making up about 10% of total anthropogenic CO2 emissions over the last decade and being the major source of emissions in certain countries. Despite their great importance, estimates of the net CO2 flux from LULCC (ELUC) have high relative uncertainties compared to other components of the global carbon cycle. One major source of uncertainty stems from the underlying LULCC forcing data, which are mostly generated through a combination of Earth observations and other statistical data streams. By implementing a new, high-resolution LULCC dataset (HILDA+) in a bookkeeping model (BLUE), we are able to illustrate spatial and temporal uncertainties in ELUC estimates related to (1) LULCC reconstructions and (2) the spatial resolution of the LULCC forcing. Compared to estimates based on LUH2, the LULCC dataset most commonly used in global ELUC models, estimates based on HILDA+ show substantially lower ELUC fluxes and reveal large spatial and temporal differences in component fluxes (e.g., CO2 fluxes from deforestation). In general, the congruence is higher in the mid-latitudes than in tropical and subtropical regions. However, little agreement is reached on the trend of the last decade between ELUC estimates based on the two LULCC reconstructions. By comparing ELUC estimates from simulations with the same LULCC forcing at 0.01° and 0.25° resolution, we find that component fluxes based on the coarser resolution tend to be larger than those based on the finer resolution, both in terms of sources and sinks. The reason for these differences is successive transitions: these are not adequately represented at coarser resolution, with the effect that, despite capturing the same extent of transition areas, overall less area remains pristine at the coarser resolution than at the finer resolution.
This phenomenon has not been described in previous studies. To our knowledge, this is the first study of global ELUC estimates (1) at 0.01° resolution and (2) based on two independently derived, spatially explicit LULCC datasets. The large sensitivity of greenhouse gas fluxes to the land-use forcing highlights the high relevance of Earth observation for monitoring LULCC dynamics, particularly at high resolution. Integration with other data sources on LULCC reaching back into the pre-satellite era, which is a requirement for capturing the long timescales of carbon cycle dynamics, should also be a key priority in order to robustly quantify ELUC emissions.
The Land Surface Carbon Constellation study (https://lcc.inversion-lab.com), funded by ESA, aims to investigate the response of the terrestrial biosphere’s net ecosystem exchange to climatic drivers. This is achieved by combining a process-based model with a wide range of in-situ and remotely sensed observations at local and regional scales. The project aims to demonstrate the synergistic exploitation of satellite observations from active and passive microwave sensors together with optical data for a better characterization of carbon and water cycling on land.
In order to support the development of the model and the data assimilation scheme on the local scale, field campaigns are being carried out at three well-instrumented sites: (1) Sodankylä, Finland, located in a boreal evergreen needleleaved forest biome; (2) Majadas de Tietar, Spain, located in a temperate savanna biome, and (3) Reusel, The Netherlands, located over agricultural land.
At each site, an extensive suite of instrumentation has been installed to measure soil, vegetation and atmospheric properties. Permanent measurements include, among others, meteorological data, sensors measuring soil moisture profiles and the water content of standing vegetation, and eddy covariance systems measuring carbon, water and energy fluxes. Reference instrumentation to measure, at local scale, the observables available from satellite remote sensing (microwave brightness temperature and backscatter, upwelling radiance) has been installed at the sites. These measurements are used to derive parameters such as Vegetation Optical Depth (VOD) and Solar-Induced Fluorescence (SIF) used in local-scale model assimilation experiments. Additional campaign measurements are being carried out to quantify seasonal variations in, e.g., LAI, NDVI and above-ground biomass.
We present the main results of the first campaign season in 2021, describing instrumentation, data collection protocols, calibration and data quality control measures. Initial findings of interconnections between various physical processes and variables observed by remote sensing methods are presented. The Land Surface Carbon Constellation study is a collaborative project led by Lund University with participation of The Inversion Lab, CESBIO, University of Edinburgh, University of Reading, TU Delft, TU Wien, MPI-B, University of Valencia, WSL, FZ Jülich and FMI.
Accurate estimates of the net carbon flux from land use and land cover changes (fLULCC) are crucial to understand the global carbon cycle and to support climate change mitigation targets. However, it is difficult to derive fLULCC from observations at larger spatial scales because CO2 fluxes from land use co-occur with those caused by natural effects (such as the effects of CO2 and climate change on vegetation growth). To support and complement Earth observations of vegetation and biomass dynamics, models are thus used to separate land-use from natural drivers. Here we investigate in unprecedented regional detail a fundamental difference between the two most frequently used types of models, namely semi-empirical bookkeeping models and process-based dynamic global vegetation models (DGVMs), which relates to how synergistic terms of land-use and natural effects are treated.
The fLULCC estimates from these two model types are not directly comparable: Bookkeeping models, which are used e.g. for fLULCC estimation in the annual global carbon budget of the Global Carbon Project, rely on static, observation-based carbon densities, and flux estimates are based on response curves characterizing the amount of carbon uptake and removal following land use and land cover changes. In contrast, fLULCC estimated by DGVMs is based on a process-based representation of the vegetation dynamics forced by observed (transient) environmental changes. Such a transient DGVM approach is used for the uncertainty assessment of fLULCC in the Global Carbon Project’s budget.
However, the transient DGVM approach includes the so-called Loss of Additional Sink Capacity (LASC), which accounts for environmental impacts on the carbon stock densities of managed land as compared to those of potential vegetation. By contrast, the LASC is not included in bookkeeping models. A comparison of the two types of models is nevertheless possible as DGVMs also enable the fLULCC estimation under constant present-day environmental forcing, which is comparable to bookkeeping models using observed carbon densities. Additionally, DGVMs enable fLULCC under constant pre-industrial environmental forcing which can be used to quantify the LASC.
To shed light on the behaviour of the different approaches, this study analyzes the three most common DGVM-derived fLULCC definitions (transient, constant pre-industrial, and constant present-day environmental conditions). We quantify differences in fLULCC estimates, as well as the corresponding climate- and CO2-induced components resulting from environmental flux changes, for 18 regions and using twelve different DGVMs. The global multi-model mean fLULCC of the transient simulations is 2.0±0.6 PgC yr-1 for 2009-2018, of which ~40% stems from the LASC (0.8±0.3 PgC yr-1). The transient fLULCC accumulated from 1850 onward reached 189±56 PgC, of which 40±15 PgC stem from the LASC.
We detect regional hotspots of high LASC values particularly in the USA, China, Brazil, Equatorial Africa and Southeast Asia, which can predominantly be linked to massive deforestation for cropland. While the high LASC values in the temperate zone mainly reflect long accumulation periods, the high LASC values in the tropical zone result mostly from more recent deforestation of carbon-dense ecosystems. In contrast, distinctly negative LASC estimates were observed in Europe (caused by early reforestation before the start of the simulated period) and, from 2000 onward, in Ukraine (due to the recultivation of agricultural land abandoned after the collapse of the Soviet Union). Such negative LASC estimates indicate that fLULCC from transient DGVM simulations is lower than bookkeeping estimates in the respective regions.
By unraveling the strong spatio-temporal variability of the different DGVM-derived fLULCC estimates, this study shows the need for a harmonized attribution of model-derived fLULCC. To bridge the gap in fLULCC estimation between bookkeeping and DGVM approaches, we propose an approach that adopts the mean DGVM-ensemble LASC for a defined reference period. Such a harmonized approach would be spatio-temporally robust, enabling a fair attribution of fLULCC, and could provide the measures needed to independently validate policy reporting of fLULCC as well as to track progress towards the Global Stocktake.
The implementation of land management is widely included in national climate mitigation strategies as a negative carbon technology. The effectiveness of these land mitigation techniques at extracting atmospheric carbon is, however, highly uncertain. The H2020 LANDMARC (Land Use Based Mitigation for Resilient Climate Pathways) project monitors actual land mitigation sites to improve the understanding of their impact on the carbon cycle and focuses on the development of accurate and cost-effective monitoring techniques. Here we aim to assess the ability of satellite-based solar-induced fluorescence (SIF) observations to quantify the impact of land cover changes on terrestrial gross primary production (GPP), the carbon fixed during photosynthesis.
We use SIF measurements from the European TROPOMI and GOME-2A sensors to monitor GPP dynamics following land cover change. We evaluate the impact of changed land cover on GPP for two distinct case studies, with (1) an increasing trend in GPP (negative carbon emission) and (2) a decreasing trend in GPP (positive carbon emission), by examining the time series of the SIF signal over both cases. The positive carbon emission case concerns a massive wildfire in South-East Australia in which 220 km2 of Eucalypt forest burned down from January to February 2019. The negative emission case examines China’s large-scale afforestation project, the Three-North Shelterbelt Program (TNSP), which started in the 1980s to combat desertification.
We analysed the TROPOMI SIF signal over the burned and surrounding unburned area to elucidate the reduction in GPP following the destruction of vegetation in the positive carbon emission case. We detected a strong reduction in SIF (70%) immediately after the fire and smaller reductions in SIF (22%) over the winter period, June-July, when vegetation is mostly dormant. The reduction in the SIF signal was scaled to a loss in GPP via an empirical linear SIF-GPP relation: good agreement (R2=0.73) was found between TROPOMI SIF and GPP from a neighbouring flux site located in a similar ecosystem. Overall, we identified a GPP deficit of ~9.05 kgC m-2, or ~2 TgC, for the first 10 months after the fire. This deficit is one to two orders of magnitude larger than the anomalies linked to intense summer droughts, indicating the significant long-term effects of local wildfires on the carbon cycle.
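The SIF-to-GPP scaling step can be sketched as a least-squares fit followed by an accumulated control-minus-burned difference. All numbers below are synthetic placeholders, loosely shaped after the figures quoted above (70% SIF reduction, a slope fitted at a neighbouring flux site); they are not the actual TROPOMI retrievals.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical daily SIF and GPP at a neighbouring flux site in a similar
# ecosystem, used to fit the empirical linear SIF-GPP relation.
sif_site = rng.uniform(0.2, 1.5, 200)                  # mW m-2 sr-1 nm-1
gpp_site = 6.0 * sif_site + rng.normal(0.0, 0.3, 200)  # gC m-2 d-1 (toy)
slope, intercept = np.polyfit(sif_site, gpp_site, 1)

# The deficit is the counterfactual (unburned control) GPP minus the GPP
# inferred from the burned area's SIF, accumulated over the post-fire days.
days = 300
sif_control = np.full(days, 1.0)   # toy control-area SIF
sif_burned = np.full(days, 0.3)    # 70% post-fire reduction, as in the text
gpp_deficit = np.sum((slope * sif_control + intercept)
                     - (slope * sif_burned + intercept))   # gC m-2
```

Note that the intercept cancels in the difference, so the deficit depends only on the fitted slope and the accumulated SIF reduction.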
For the negative carbon emission case, we analyse a long time series of GOME-2A SIF (2007–2020) over the TNSP region. We use statistical data on local afforestation in synergy with the SIF observations and compare yearly and seasonal trends for different sub-regions in the area in order to reveal the impact of the programme on the regional carbon sink. Large-scale monitoring of different land management strategies and their success rates, especially in difficult dryland areas such as the TNSP region, is an important step to support policy makers in designing and upscaling land mitigation techniques.
The FLUXCOM initiative (www.fluxcom.org) conducted an extensive intercomparison of machine learning models that integrate satellite and in-situ observations to assess variations of terrestrial carbon fluxes globally. This intercomparison yielded a large ensemble of gridded data products that are used extensively by the scientific community for questions related to biosphere-atmosphere interactions. The FLUXCOM initiative also yielded insights into uncertainties and limiting factors related both to the overall approach and to specific choices and implementations, providing a roadmap for making progress in this field.
Here we report on the strategy and first results of FLUXCOM-X, the next generation of the FLUXCOM initiative. We focus on two distinct aspects for improving and accelerating progress of the FLUXCOM approach. Our first goal is to feed more information into the system. Specifically, we strive for the integration and synergistic exploitation of different earth observation data streams (optical, thermal, microwave, fluorescence) and for an improved and extended basis of in-situ flux tower data. Our second objective is to develop the capacity for rapid experimental cycles by automating data and processing pipelines, ranging from the ingestion of newly acquired eddy covariance and satellite data to the evaluation of the global products. Together, both improvements will enable an efficient exploration of novel methodological and data opportunities, as well as global carbon flux data product updates in a fairly operational way. Thus, FLUXCOM-X is not another static intercomparison but a path of experimental cycles with monitored performance that generates a diverse ensemble of products through scientific exploration.
Here we present first results of global gross primary productivity and net ecosystem exchange products at 0.05° spatial and hourly temporal resolution for the period 2001-2020. We assess the progress we have made and lessons learned so far based on site-level cross-validation results and cross-consistency checks of global carbon flux products against previous FLUXCOM results and independent data streams. We conclude with an outlook for the synergistic integration of FLUXCOM with atmospheric inversion approaches for obtaining a unified data-driven approach for monitoring the terrestrial carbon cycle from space.
Methane (CH4) is an important anthropogenic greenhouse gas with a global warming potential 28 times that of carbon dioxide on a 100-year time horizon. Global and regional greenhouse gas budgets can be estimated with various modelling tools, but there are still discrepancies in regional budgets and seasonality, depending on model setups such as the observations used as constraints. Recently, the number of atmospheric measurements from satellites has been increasing rapidly and their retrieval quality is improving continuously. As these measurements have much higher spatial coverage than ground-based observations, their potential to better constrain spatial distributions is expected to be high. However, the availability of satellite data depends strongly on sunlight and cloud cover, and therefore seasonality may not be constrained as well as with ground-based measurements in regions where high-precision continuous surface data are available, such as Europe.
In this study, we examine the potential of satellite data to constrain CH4 budgets, especially at northern high latitudes (NHL) and in Europe, using the CarbonTracker-Europe CH4 atmospheric inverse model. Fluxes are estimated by constraining the model with three sets of atmospheric CH4 observations: 1) ESA Sentinel-5 Precursor TROPOMI XCH4 retrievals from the SRON operational product, 2) TROPOMI XCH4 retrieved with the WFM-DOAS algorithm, and 3) ground-based observations of surface CH4 from global and regional networks, e.g. ICOS and NOAA. The global CH4 fluxes are estimated for 2018 and analysed by comparing estimates from the different setups with each other and with those from the GCP-CH4 multi-model intercomparison study.
The global total CH4 emissions are in good agreement regardless of the assimilated observations. However, the regional budgets, spatial distribution and regional seasonality show differences: NHL wetland CH4 emissions are decreased from the prior when satellite data are assimilated, especially in summer. This was consistent regardless of the retrieval product. However, when the surface data are assimilated, the wetland emissions in the NHL increased from the prior.
Photosynthesis is one of the most important mechanisms that enable life on Earth. It is a process where sunlight is converted to chemical energy by synthesizing sugars using water from the soil and carbon dioxide from the air. This mechanism is fundamental to life because it generates oxygen as a byproduct. Therefore, understanding and observing the global photosynthesis rate is crucial to have a better grasp of our climate system and the Earth's carbon cycle.
One commonly used proxy for the photosynthetic activity of plants is solar-induced chlorophyll fluorescence (SIF), a faint light signal emitted around the red and near-infrared wavelengths of the electromagnetic spectrum. Although the dynamic relationship between SIF and the non-photochemical and photochemical quenching mechanisms is still an evolving research topic, SIF has been shown to serve as an in-vivo indicator of photosynthetic activity.
SIF can be measured at the Top Of Canopy (TOC) level using tower measurements, or at the airborne and satellite levels using remote sensing techniques. To retrieve SIF globally, satellite remote sensing with high-resolution spectrometers is required, since the SIF signal is relatively weak and therefore difficult to separate from the satellite-measured radiance. Several satellite missions currently provide a SIF product at discrete wavelengths or narrow spectral intervals, such as the TROPOspheric Monitoring Instrument (TROPOMI) on board Sentinel-5 Precursor and the Orbiting Carbon Observatory-2 (OCO-2). In the near future, ESA's FLuorescence EXplorer (FLEX) mission plans to provide spectrally resolved fluorescence as one of its photosynthesis-related mission products.
Current methods to retrieve SIF are statistically based and usually utilize solar Fraunhofer lines to disentangle the SIF signal from the atmospheric and surface contributions to the satellite-measured radiance. The solar Fraunhofer lines are "dark" absorption lines that are convenient for discerning the additive SIF signal from the surrounding reflected radiance. In general, any absorption feature, whether telluric or solar in origin, is an advantageous region in which to disentangle the SIF contribution.
In this work, we have focused on SIF retrieval at the solar lines to reduce, as much as feasible, possible interference from aerosol scattering effects, while keeping open the possibility of expanding the retrieval to the oxygen-A region in a second phase. The proposed retrieval is an adaptation of the Peak Height method, initially developed to exploit the oxygen bands at the TOC level. The methodology exploits the height and shape of the peaks in the apparent surface reflectance generated by the emission of the fluorescence signal. As a proof of concept, we have adapted the Peak Height method to the solar lines (around 750-760 nm) and assessed its performance with a simulated database. Additionally, we will present its potential application to TROPOMI scenes and compare the results with existing SIF products.
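The principle behind the Peak Height approach, namely that an additive fluorescence signal fills in a dark absorption line and creates a peak in the apparent reflectance, can be illustrated with a toy simulation. This sketch assumes a single Gaussian-shaped Fraunhofer dip, a flat surface reflectance and no atmosphere; all values are illustrative, not the actual retrieval:

```python
import numpy as np

# Toy spectral grid around a single solar Fraunhofer line (illustrative).
wl = np.linspace(749.0, 761.0, 500)                       # wavelength [nm]
E0 = 1.0 - 0.6 * np.exp(-0.5 * ((wl - 755.0) / 0.1)**2)   # irradiance with one dark line
rho = 0.4                                                 # flat "true" surface reflectance
F = 0.01                                                  # constant fluorescence radiance

# At-sensor radiance without an atmosphere: reflected sunlight plus SIF.
L = rho * E0 / np.pi + F

# Apparent reflectance: the additive SIF term becomes a peak where E0 is dark.
rho_app = np.pi * L / E0

core = np.argmin(E0)                        # index of the line core
peak_height = rho_app[core] - rho_app[0]    # in-filling peak vs. the continuum
```

The retrieval then inverts the relation between peak height/shape and the fluorescence term, which in this noise-free toy case is simply F = E0_core · peak contribution / π.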
We know that tropical peatlands are among the most carbon dense ecosystems globally¹ but their distribution and total below-ground carbon stock remain highly uncertain, with recent estimates of the latter ranging from 105 (70–130) to 215 (152–288) Pg C²,³ (note the non-overlapping 95% confidence intervals). We also know that large areas of tropical peatlands have been degraded and drained with Indonesia being a cautionary tale: in 1997 alone an estimated 0.81 to 2.57 Pg C, or 13–40% of global annual fossil fuel emissions, were released from Indonesian peatland fires⁴. Whilst 80% of South-East Asian peatlands are already cleared and drained⁵, the known peatland areas of the Amazon and Congo basins are believed to be largely intact³,⁶. Protecting and restoring tropical peatlands can make a significant contribution to limiting CO₂ emissions and global warming, but policy instruments such as REDD+ and wider Nationally Determined Contributions to the Paris Agreement⁷, must be informed by high resolution maps of peatland distribution and carbon density.
Peru is known to host significant areas of peatland such as the Pastaza-Marañón Foreland Basin (PMFB) which has been estimated to contain 3.14 (0.44–8.15) Pg C (including above and below-ground components)⁸. However, visual examination of remote sensing imagery and published wetland maps⁹ suggest that there could be substantial peatlands in Peru whose distribution and carbon stocks remain unknown. Moreover, maps of even the best quantified peatlands remain highly uncertain, in large part due to a lack of understanding of peat thickness distribution. We also lack any quantitative assessment of land-use induced greenhouse gas emissions in Peruvian peatlands, despite varied and increasing threats to these ecosystems⁶. New legislation in Peru mandates the protection of peatlands for the purposes of climate change mitigation¹⁰, but will require maps of peatland distribution and their disturbance at nationally relevant scales.
In this talk, we present new maps of peat thickness (Fig. 1) and peat carbon distribution across lowland Peruvian Amazonia, amounting to a below-ground stock of 5.4 (2.6–10.6) Pg C. These results are driven by machine learning models which combine the largest database of peat observations ever collected in Peru, with remote sensing imagery including various sentinel-2 bands and indices. We reveal highly variable peat thickness and substantial new peatland regions in basins such as the Napo, Putumayo and Ucayali. In turn, we apply our maps and national land-cover change data to show small but increasing areas of forest loss, and related CO₂ emissions from peat decomposition.
Our results may be used to inform the implementation of recent Peruvian legislation enacted to reduce GHG emissions¹⁰, and call for the protection of this substantial and relatively intact carbon store to prevent a similar scenario to South-East Asian peatlands.
References
1. Honorio Coronado, E. N., Hastie, A., et al. Intensive field sampling increases the known extent of carbon-rich Amazonian peatland pole forests. Environ. Res. Lett. 16, 74048 (2021).
2. Ribeiro, K. et al. Tropical peatlands and their contribution to the global carbon cycle and climate change. Glob. Change Biol. 27, 489–505 (2021).
3. Dargie, G. C. et al. Congo Basin peatlands: threats and conservation priorities. Mitig. Adapt. Strateg. Glob. Chang. 24, 669–686 (2019).
4. Page, S. E. et al. The amount of carbon released from peat and forest fires in Indonesia during 1997. Nature 420, 61–65 (2002).
5. Mishra, S. et al. Degradation of Southeast Asian tropical peatlands and integrated strategies for their better management and restoration. J. Appl. Ecol. 58, 1370–1387 (2021).
6. Roucoux, K. H. et al. Threats to intact tropical peatlands and opportunities for their conservation. Conserv. Biol. 31, 1283–1292 (2017).
7. Girardin, C.A.J., et al. Nature-based solutions can help cool the planet — if we act now. Nature 593, 191–194 (2021).
8. Draper, F. C. et al. The distribution and amount of carbon in the largest peatland complex in Amazonia. Environ. Res. Lett. 9, 124017 (2014).
9. Hess, L. L. et al. Wetlands of the Lowland Amazon Basin: Extent, Vegetative Cover, and Dual-season Inundated Area as Mapped with JERS-1 Synthetic Aperture Radar. Wetlands 35, 745–756 (2015).
10. MINAM. Decreto Supremo N° 006-2021-MINAM (2021).
The ecosystems of the dry tropics are in flux: the savannas, woodlands and dry forests that together cover a greater area of the globe than rainforests are both a source of carbon emissions due to deforestation and forest degradation, and also a sink due to the enhanced growth of trees. However, both of these processes are poorly understood, in terms of their magnitude and causes, and the net carbon balance and its future remain unclear. This gap in knowledge arises because we do not have a systematic network of observations of vegetation change in the dry tropics, and thus have not, until now, been able to use observations of how things are changing to understand the processes involved and to test key theories.
Satellite remote sensing, combined with ground measurements, offers the ideal way to overcome these challenges, as it can provide regular, consistent monitoring at relatively low cost. However, most ecosystems in the dry tropics, especially savannas, comprise a mixture of grass and trees, and many optical remote sensing approaches (akin to enhanced versions of the sensors on digital cameras) struggle to distinguish changes between the two. Long wavelength radar remote sensing avoids this problem as it is insensitive to the presence of leaves or grass, and also is not affected by clouds, smoke or the angle of the sun, all of which complicate optical remote sensing. Radar remote sensing is therefore ideal to monitor tree biomass in the dry tropics. We have successfully demonstrated that such data can be used to accurately map woody biomass change for all 5 million sq km of southern Africa.
In SECO we will create a network of over 600 field plots to understand how the vegetation of the dry tropics is changing, and complement this with radar remote sensing to quantify how the carbon cycle of the dry tropics has changed over the last 15 years. This will provide the first estimates of key carbon fluxes across all of the dry tropics, including the amount of carbon being released by forest degradation and deforestation and how much carbon is being taken up by the intact vegetation in the region. By mapping where these processes are happening, we will improve our knowledge of the mechanisms involved.
We will use these new data to improve the way we model the carbon cycle of the dry tropics, and test key theories. The improved understanding, formalised into a model, will be used to examine how the dry tropics will respond to climate change, land use change and the effects of increasing atmospheric CO2. We will then be able to understand whether the vegetation of the dry tropics will mitigate or exacerbate climate change, and we will learn what we need to do to maintain the structure of the dry tropics and preserve its biodiversity.
Overall, SECO will allow us to understand how the vegetation of the dry tropics is changing, and the implications of this for the global carbon cycle, the ecology of savannas and dry forests, and efforts to reduce climate change. The data we create, and the analyses we conduct will be useful to other researchers developing methods to monitor vegetation from satellites, and also to those who model the response of different ecosystems to climate and other changes. Forest managers, ecologists and development practitioners can use the data to understand which parts of the world's savannas and dry forests are changing most, and how these changes might be managed to avoid negative impacts that threaten biodiversity and the livelihoods of the 1 billion, mostly poor, rural people who live in this region.
Increasing atmospheric CO2 concentration will have a direct impact on the carbon cycle through the stimulation of photosynthesis. Free Air CO2 Enrichment (FACE) experiments have been used to quantify this ‘fertilisation’ effect under CO2 concentrations anticipated for the middle and later decades of this century. There is increasing evidence that the rise in CO2 to date has enhanced productivity, but attribution to CO2 fertilisation remains a challenge. As the length of the satellite record extends, and new sensors and retrievals develop, satellite observations of the biosphere will be crucial for providing the data for model integration and improving our confidence in quantifying elevated-CO2 impacts on the carbon cycle. An important emerging retrieval for studies of photosynthesis is solar-induced fluorescence (SIF), which is emitted during photosynthesis and differs from reflectance-based metrics in that it provides a measure of activity rather than capacity. Strong empirical relationships are observed between SIF and measures of photosynthesis, and SIF-focussed missions are currently in development for launch in the next few years. However, we still require detailed SIF data from the field to understand and interpret space-based SIF retrievals.
We have collected hyperspectral data from a UAV platform at a FACE experiment in an oak forest in the UK to develop an understanding of the signals associated with elevated CO2 that can be measured from remote sensing platforms. This experiment is unique as it is the only FACE experiment in a mature temperate forest; ecosystems which are responsible for a substantial component of global biosphere carbon sequestration. Using a dual-field-of-view spectrometer system mounted on the UAV, we are able to collect both full VIS-NIR reflectance at high resolution, and very high resolution reflectance in the red-edge region for SIF retrieval, from the forest canopy above each of the treatment arrays (30 m diameter). Early results indicate that SIF yields (SIF per unit incoming photosynthetically active radiation) are higher under elevated CO2, which may be attributed to physiological and/or leaf-area effects. In this presentation we will present the response of both SIF and other reflectance metrics to elevated CO2 from campaigns spanning different times in the season and different seasons, and explore changes in spectral features associated with the increase in CO2. We will also place the treatment-level responses in context with the wider forest using hyperspectral data collected during an airborne campaign conducted on the same day as a UAV campaign, as well as data from the longer-term satellite record from Sentinel-2. We will discuss the potential for measuring the impacts of increasing CO2 on temperate forests from space.
Long-term global monitoring of terrestrial Gross Primary Production (GPP) is crucial for assessing ecosystem response to global climate change. In recent decades, great advances have been made in estimating GPP and many global GPP datasets have been published. These datasets are either based on observations from optical remote sensing, are upscaled from in situ measurements, or rely on process-based models. Although these approaches are well established within the scientific community, the resulting datasets nevertheless differ significantly.
Here, we introduce the new VODCA2GPP product (Wild et al., 2021), which utilizes microwave remote sensing estimates of Vegetation Optical Depth (VOD) to estimate GPP at global scale for the period 1988–2020. VODCA2GPP applies a previously developed carbon sink-driven approach (Teubner et al., 2019, 2021) to estimate GPP from the Vegetation Optical Depth Climate Archive (Moesinger et al., 2020), which merges VOD observations from multiple sensors into one long-running, coherent data record. VODCA2GPP was trained and evaluated against FLUXNET in situ observations of GPP and compared against largely independent state-of-the-art GPP datasets from MODIS, FLUXCOM GPP and the TRENDY-v7 process-based model ensemble.
The site-level evaluation with FLUXNET GPP indicates an overall robust performance of VODCA2GPP with only a small bias and good temporal agreement. The comparisons with MODIS, FLUXCOM and TRENDY show that VODCA2GPP exhibits very similar spatial patterns across all biomes but with a consistent positive bias. In terms of temporal dynamics, a high agreement was found for regions outside the humid tropics, with median correlations around 0.75. Concerning anomalies from the long-term climatology, VODCA2GPP correlates well with MODIS and TRENDY-v7 GPP (Pearson’s r: 0.53 and 0.61) but less well with FLUXCOM GPP (Pearson’s r: 0.29). A trend analysis for the period 1988-2019 did not exhibit a significant trend in VODCA2GPP at global scale but rather suggests regionally different long-term changes in GPP. For the shorter overlapping observation period (2003–2015) of VODCA2GPP, MODIS GPP, and the TRENDY-v7 ensemble, significant increases in global GPP were found. VODCA2GPP can complement existing GPP products and is a valuable dataset for the assessment of large-scale and long-term changes in GPP for global vegetation and carbon cycle studies. The VODCA2GPP dataset is freely accessible at TU Wien Research Data (https://doi.org/10.48436/1k7aj-bdz35; Wild et al., 2021).
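Anomaly correlations like those reported above are typically computed after removing the mean seasonal cycle from each series. A minimal sketch of that step, using synthetic monthly series rather than the actual datasets:

```python
import numpy as np

def anomalies(x, period=12):
    # Remove the mean seasonal cycle (per-calendar-month climatology).
    clim = x.reshape(-1, period).mean(axis=0)
    return x - np.tile(clim, x.size // period)

# Two synthetic monthly GPP-like series sharing a seasonal cycle (illustrative).
rng = np.random.default_rng(0)
t = np.arange(36)
seasonal = np.sin(2 * np.pi * t / 12)
a = seasonal + 0.1 * rng.standard_normal(t.size)
b = seasonal + 0.1 * rng.standard_normal(t.size)

# Correlate the de-seasonalised anomalies, as in the product comparison.
r = np.corrcoef(anomalies(a), anomalies(b))[0, 1]
```

Removing the climatology first matters: two products with similar seasonal cycles can show high raw correlation yet disagree strongly on interannual anomalies.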
Moesinger, L., Dorigo, W., de Jeu, R., van der Schalie, R., Scanlon, T., Teubner, I., and Forkel, M., 2020: The global long-term microwave Vegetation Optical Depth Climate Archive (VODCA). Earth Syst. Sci. Data, 12, 177–196, https://doi.org/10.5194/essd-12-177-2020.
Teubner, I. E., Forkel, M., Camps-Valls, G., Jung, M., Miralles, D. G., Tramontana, G., van der Schalie, R., Vreugdenhil, M., Mösinger, L. & Dorigo, W., 2019: A carbon sink-driven approach to estimate gross primary production from microwave satellite observations, Remote Sens. Environ., 229, 100–113, https://doi.org/10.1016/j.rse.2019.04.022.
Teubner, I. E., Forkel, M., Wild, B., Mösinger, L. & Dorigo, W., 2021: Impact of temperature and water availability on microwave-derived gross primary production. Biogeosciences, 18, 3285–3308, https://doi.org/10.5194/bg-18-3285-2021.
Wild, B., Teubner, I., Moesinger, L., Zotta, R., Forkel, M., van der Schalie, R., Sitch, S., Dorigo, W., 2021: VODCA2GPP – A new global, long-term (1988–2020) GPP dataset from microwave remote sensing. Earth Syst. Sci. Data Discuss. [preprint] https://doi.org/10.5194/essd-2021-209, in review, 2021.
Accurate quantification of gross primary productivity (GPP) is critical to understand the global carbon cycle, and how ecosystem primary productivity might respond to climate change. However, terrestrial GPP is viewed as the largest and most uncertain portion of the global carbon cycle. Its estimation at regional to global scale is still a challenge due to the high variability of GPP across space and time, and to the limited understanding of GPP drivers at all spatial and temporal scales.
The recent increase in availability of the complementary Copernicus Sentinel data (S-2, S-3, and S-5p) and products (e.g. OGVI-FAPAR, OTCI, SIF, etc.) at high spatial and temporal resolutions offers a new opportunity to quantify the dynamics of terrestrial ecosystem primary productivity with unprecedented detail. Therefore, the Sen4GPP project aims to develop algorithms that can synergistically exploit data from Copernicus Sentinel missions in order to better characterise GPP in space and time. A parallel objective is to determine the informational content brought by each Copernicus Sentinel mission on the GPP estimates, relative to their spatio-temporal resolutions and coverage and to their constraint on the biogeochemical processes controlling gross carbon uptake by terrestrial ecosystems.
Three different approaches are considered for the estimation of GPP in the Sen4GPP project: 1) Light Use Efficiency (LUE) models, based on the concept that ecosystem GPP is a function of the amount of photosynthetically active radiation (PAR) intercepted by a canopy, the fraction of that PAR that is actually absorbed by the canopy, and interacting environmental stress factors; 2) a SIF-based approach, as it has been demonstrated recently that SIF and GPP hold a strong linear relationship at the daily-to-weekly and ecosystem-scale sampling of satellite remote sensing data; and 3) a machine learning approach, which is the most data-adaptive method by design, extracting the functional relationships from observations (in situ and EO) at site level.
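The LUE concept in approach 1 can be sketched in a few lines. The function below is a generic textbook formulation, not the specific Sen4GPP model; the stress scalars and parameter values are illustrative placeholders:

```python
def lue_gpp(par, fapar, eps_max, f_temp, f_vpd):
    """Light-use-efficiency GPP: absorbed PAR times a maximum efficiency,
    down-regulated by environmental stress scalars in [0, 1]."""
    apar = par * fapar            # PAR actually absorbed by the canopy
    return eps_max * apar * f_temp * f_vpd

# Illustrative values: 40 mol photons m-2 d-1 of PAR, 80 % absorbed,
# eps_max in gC per mol photons, mild temperature and VPD stress.
gpp = lue_gpp(par=40.0, fapar=0.8, eps_max=0.5, f_temp=0.9, f_vpd=0.85)
```

In such a scheme, fAPAR would come from optical observations (e.g. the OGVI-FAPAR product mentioned above), while the stress scalars are driven by meteorology.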
In this contribution we will present the status of the project, and in particular the first results from the implementation of the different GPP estimation approaches.
The new generation of satellite missions from ESA has opened new opportunities to understand the complex dynamics of the earth system. Specifically, the new red-edge bands of Sentinel-2 can improve gross primary production (GPP) prediction at the regional and global scale. In this contribution, we will present how optical information and vegetation indices (VIs) retrieved from Sentinel-2 can be used to predict GPP. We compiled 2636 images for 58 eddy covariance sites (2015-2018) covering a broad geographical (latitudes 34.3° to 67.8°) and biome range (croplands, deciduous broadleaf forests, evergreen needleleaf forests, grasslands, mixed forests, open shrublands, savannas, and wetlands). We compute several VIs, including red-edge vegetation indices such as the chlorophyll index red (CIR), other VIs such as the normalized difference vegetation index (NDVI) and the Near-Infrared Reflectance of vegetation (NIRv), as well as the novel kNDVI. We then compare the performance of each index in predicting GPP derived from the eddy covariance towers using linear regressions in a cross-validation scheme that avoids spatio-temporal auto-correlation. Furthermore, we explore how much the prediction of GPP improves using machine learning techniques that consider VIs and spectral bands. Finally, as an unbalanced number of observations per vegetation type affects the prediction of GPP (a high frequency of observations for a certain vegetation type can bias the model, underrepresenting other vegetation types), we explore how various dataset balancing techniques can improve the prediction. Using linear regressions based on NIRv, we achieved a predictive power of R² (10-fold) = 0.56 with an RMSE (10-fold) = 2.75 μmol CO2 m−2 s−1. Using CIR and kNDVI, we achieved significantly higher predictive power, up to R² (10-fold) ≈ 0.6, with a lower RMSE (10-fold) ≈ 2.6 μmol CO2 m−2 s−1.
Using spectral bands and VIs jointly in a machine learning prediction framework, we improved the GPP prediction to R² (10-fold) = 0.71 and RMSE (10-fold) = 2.23 μmol CO2 m−2 s−1. We also found that balancing techniques improve the prediction of GPP and need to be considered in future upscaling exercises. The proposed approach estimates GPP at a level of accuracy comparable to previous works which, however, required additional meteorological drivers with their associated uncertainty. The presented approach opens new possibilities to predict GPP at high spatial resolutions across the globe from Sentinel-2 data alone.
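For reference, the vegetation indices compared above can be computed from Sentinel-2-like reflectances as sketched below. The kNDVI form with sigma = (NIR + Red)/2 reduces to tanh(NDVI²); the chlorophyll index is written here in its common NIR/red-edge ratio form, which is an assumption about the exact CIR definition used in the study:

```python
import numpy as np

def vegetation_indices(red, nir, red_edge):
    """NDVI, NIRv, kNDVI and a red-edge chlorophyll index from S2-like reflectances."""
    ndvi = (nir - red) / (nir + red)
    nirv = ndvi * nir               # Near-Infrared Reflectance of vegetation
    kndvi = np.tanh(ndvi ** 2)      # kNDVI with sigma = (NIR + Red) / 2
    cir = nir / red_edge - 1.0      # chlorophyll index, red-edge ratio form (assumed)
    return ndvi, nirv, kndvi, cir

# Example with plausible canopy reflectances (illustrative values).
ndvi, nirv, kndvi, cir = vegetation_indices(red=0.05, nir=0.45, red_edge=0.25)
```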
Wildfires are one of the major causes of ecosystem disturbance and ecological damage. Besides influencing atmospheric chemistry and air quality through emitted greenhouse gases and aerosols, they change land surface properties, causing loss of vegetation and impacts on the forestry and local agricultural economies. Accurate knowledge of the location and extent of a burned area (BA) is important for damage assessment and for monitoring vegetation restoration.
The present availability of Sentinel-2 (S2) multispectral data every 5 days over the same target area represents a unique opportunity to systematically produce BA maps at medium-high spatial resolution (20 m, the resolution of the SWIR bands). Several investigations have demonstrated the suitability of S2 for detecting BAs. Continuous and systematic processing of S2 data allows researchers to build a complete record of BAs from which statistics about the impact of forest fires during the fire season can be derived. BA databases are available, for instance, through the European Forest Fire Information System (EFFIS), whose Rapid Damage Assessment (RDA) module maps BAs by analysing MODIS and VIIRS data with a spatial resolution coarser than that of S2 (although EFFIS-derived BAs are presently also verified visually using S2 images).
In this study, a BA record for the 2019-2021 fire seasons (June 1st - September 30th), derived from S2 and ancillary data, is presented. It was produced, for Italy, by taking advantage of a fully automatic processing chain, based on the AUTOmatic Burned Areas Mapper (AUTOBAM) tool proposed in Pulvirenti et al. (2020). AUTOBAM is an automated processor conceived for near real-time (NRT) mapping of BA using S2 data. To generate the BA record, S2 data are complemented by ancillary data, namely MODIS-derived and VIIRS-derived active fire products, as well as by fire notifications. Italy was chosen because the AUTOBAM tool was originally designed to respond to a request by the Italian Department of Civil Protection (DCP) regarding a systematic mapping of BAs at medium-high spatial resolution. Moreover, notifications from the firefighting fleet belonging to the Joint Air Operating Centre (coordinated by DCP) and from the Unified Permanent Fire Protection Unit (provided by regional institutions) are available in NRT in Italy. Finally, burn perimeters derived from local surveys done by the Carabinieri Command of Units for Forestry, Environmental and Agri-food protection are also available for validation purposes.
AUTOBAM uses level 2A (L2A) surface reflectance products in order to work with data corrected for atmospheric effects and to take advantage of the scene classification map, which is useful for masking clouds, snow, and water bodies. As soon as new L2A products become available through the Copernicus Open Access Hub, they are automatically downloaded and processed. The processing first computes three spectral indices, namely the Normalized Burn Ratio (NBR), the Normalized Burn Ratio 2 (NBR2), and the Mid-Infrared Burned Index (MIRBI). These indices are defined as:
NBR = (ρ_NIR − ρ_SWIR_L) / (ρ_NIR + ρ_SWIR_L)   (1)
NBR2 = (ρ_SWIR_S − ρ_SWIR_L) / (ρ_SWIR_S + ρ_SWIR_L)   (2)
MIRBI = 10 ∙ ρ_SWIR_L − 9.8 ∙ ρ_SWIR_S + 2   (3)
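In code, the three indices can be computed from S2 L2A reflectances as sketched below; the band mapping (NIR = B8A, SWIR_S = B11 at ~1.6 µm, SWIR_L = B12 at ~2.2 µm) is the usual choice for these indices and is assumed here:

```python
def burn_indices(nir, swir_s, swir_l):
    """NBR, NBR2 and MIRBI from S2 L2A reflectances (Eqs. 1-3).
    Assumed band mapping: NIR = B8A, SWIR_S = B11 (~1.6 um), SWIR_L = B12 (~2.2 um)."""
    nbr = (nir - swir_l) / (nir + swir_l)
    nbr2 = (swir_s - swir_l) / (swir_s + swir_l)
    mirbi = 10.0 * swir_l - 9.8 * swir_s + 2.0
    return nbr, nbr2, mirbi

# Example with plausible reflectance values (illustrative only).
nbr, nbr2, mirbi = burn_indices(nir=0.3, swir_s=0.2, swir_l=0.1)
```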
Then, AUTOBAM applies a change detection approach that compares, pixel by pixel, the values of the indices at the current time with the values derived from the most recent cloud-free S2 data. By default, the latter are acquired 5 days before the current ones (corresponding to the S2 revisit time), but cloud cover may lengthen the time between the acquisitions. Pixels covered by clouds are masked out. BA mapping is performed using image processing techniques such as clustering, automatic thresholding and region growing. Output maps are finally resampled to a common grid with a pixel size of 20 m.
To generate a BA record, omission errors due to clouds or smoke are not a major problem, because a missed BA can be detected using one of the subsequent S2 acquisitions over the same area (AUTOBAM systematically processes all S2 data with cloud cover below 50%). Conversely, commission errors, due to clouds not perfectly detected in the L2A data or to changes not related to fires (e.g., agricultural activities like harvesting), are a critical aspect. To deal with commission errors, each BA includes three quality flags related to 1) the presence of an active fire according to MODIS, 2) the presence of an active fire according to VIIRS, and 3) a fire notification. As for points 1) and 2), MODIS and VIIRS active fire data are systematically acquired and resampled to the common grid mentioned above (nearest neighbour). A buffer zone, with a buffering distance corresponding to half the pixel size of the active fire data, is created around each active fire point. If the BA overlaps the buffer zone, the corresponding quality flag assumes a positive value (otherwise it is 0). The value of the flag depends on the time difference between the S2 acquisition from which the BA is detected and the MODIS/VIIRS acquisition from which the active fire was detected; a maximum difference of 30 days is admitted. A similar procedure is applied for the notifications: in this case a nearest neighbour approach is used to transform the coordinates of a reported fire into points of the common grid and a buffer zone of 500 m is then created to verify the overlap with the S2-derived BAs. Only BAs with at least one quality flag > 0 are selected to build the BA record.
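The buffer-overlap check behind the quality flags can be sketched as follows. This simplified version returns a binary flag rather than the time-dependent value described above, and all names and values are illustrative:

```python
import numpy as np

def flag_burned_area(ba_pixels, fire_points, buffer_m, dt_days, max_dt=30):
    """Binary quality flag: 1 if any active-fire detection lies within the
    buffer distance of a burned-area pixel and within max_dt days, else 0.
    ba_pixels, fire_points: (N, 2) arrays of map coordinates in metres."""
    if abs(dt_days) > max_dt or len(fire_points) == 0:
        return 0
    # Pairwise distances between every BA pixel and every active-fire point.
    d = np.hypot(ba_pixels[:, None, 0] - fire_points[None, :, 0],
                 ba_pixels[:, None, 1] - fire_points[None, :, 1])
    return 1 if (d <= buffer_m).any() else 0

ba = np.array([[0.0, 0.0], [20.0, 0.0]])   # burned-area pixel centres (toy)
fires = np.array([[200.0, 0.0]])           # one MODIS/VIIRS-like detection (toy)
flag = flag_burned_area(ba, fires, buffer_m=250.0, dt_days=5)
```

In the operational setting the buffer distance would be half the active-fire pixel size (or 500 m for notifications) and the returned value would encode the S2/active-fire time difference.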
The processing chain described above was applied to all the S2 observations of Italy in the period June 1st - September 30th of the years 2019-2021. For 2019-2020, the AUTOBAM-derived BA record was compared to burn perimeters derived from local surveys to verify its reliability (fire perimeters for 2021 are not yet available). For this purpose, the perimeters were also resampled to the common grid. A burn perimeter was required to overlap an AUTOBAM-derived BA, and the overlap area was required to exceed 20% of both the AUTOBAM-derived BA and the area enclosed by the perimeter. Burn perimeters < 1 ha were excluded. It was found that AUTOBAM was able to detect about 75% of the burn perimeters; 60% of the burn perimeters had at least one quality flag >0. This outcome indicates that the proposed method based on the AUTOBAM processor has the potential to generate a BA record.
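A minimal sketch of the matching criterion used in this validation, with grid cells represented as coordinate sets (the 20% overlap fraction and 1 ha minimum come from the text; a 20 m cell is 0.04 ha):

```python
def perimeter_matches(ba_cells, perim_cells, min_frac=0.2,
                      min_area_ha=1.0, cell_ha=0.04):
    """Check whether a surveyed burn perimeter is detected by AUTOBAM.

    Both the BA and the perimeter are sets of common-grid cells (20 m
    pixels, 0.04 ha each). The overlap must exceed `min_frac` of BOTH
    areas, and perimeters below `min_area_ha` are excluded.
    """
    if len(perim_cells) * cell_ha < min_area_ha:
        return False
    overlap = len(ba_cells & perim_cells)
    return (overlap > min_frac * len(ba_cells)
            and overlap > min_frac * len(perim_cells))

ba = {(i, 0) for i in range(30)}                          # 30 cells = 1.2 ha
perim = {(i, 0) for i in range(10)} | {(i, 1) for i in range(20)}
print(perimeter_matches(ba, perim))      # True: 10 shared cells > 20% of both
print(perimeter_matches(ba, {(0, 0)}))   # False: perimeter below 1 ha
```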
South America is home to some of the world’s most important ecosystems, such as the Amazon, Cerrado, and Chiquitania forests. At the same time, it is a region of massive land conversion for the sake of increased production of commodities consumed globally. Across South America, the expansion of commodity land uses has underpinned substantial economic development at the expense of natural land cover and associated ecosystem services. In this paper, we show that such human impact on the continent’s land surface, specifically land use conversion and natural land cover modification, expanded by 268 million hectares (Mha), or 60%, from 1985 to 2018. By 2018, 713 Mha, or 40%, of the South American landmass was impacted by human activity. This is equivalent to 21.6 soccer fields of natural land cover being impacted by human activity every minute for 34 years. Changes in land cover of this magnitude have important consequences for climate at regional and global scales by altering fluxes of energy, water, and greenhouse gas emissions. Since 1985, the area of natural tree cover decreased by 16%, and pasture, cropland, and plantation land uses increased by 23, 160, and 288%, respectively. Low-intensity, low-productivity pastureland replacing natural vegetation and the widespread phenomenon of cropland replacing pastureland are two important dynamics that reflect the overall intensification of land use across South America. Beyond intensive land uses, a substantial area of disturbed natural land cover, totaling 55 Mha, had no discernible land use, representing land that is degraded in terms of ecosystem function but not economically productive. This long-lasting transitional land category may be associated with land speculation or land-tenure establishment. Monitoring natural land cover from initial disturbance to its final land use outcome is necessary to better understand land use pathways and to fully account for associated greenhouse gas emissions.
Results presented here illustrate the extent of ongoing human appropriation of natural ecosystems in South America, which intensifies threats to ecosystem-scale functions. Such data, associated with emissions factors, can facilitate national greenhouse gas accounting efforts.
Fires in the tropics are driven by climate and land-use change. In the Amazon, fires are linked to biomass burning following deforestation and to degradation fires caused by extreme droughts. Earth system models predict an increase in the intensity of dry seasons in this region in the 21st century. Therefore, carbon emissions from drought-induced fires have the potential to counteract pledged reductions of deforestation in the coming decades, yet they are not included in national carbon emission estimates. Furthermore, air pollution caused by fires has been linked to seasonal upturns in respiratory diseases affecting the population in fire-prone areas of Brazil. Against the backdrop of the current COVID-19 pandemic, air pollution can potentially increase the risks of hospitalisations and mortality.
Improved assessments of fire emissions and their impact on air quality are therefore of high importance. Spatially explicit estimates of fire emissions are made possible by the range of satellite products now available. We employ a remote sensing approach using observations of burned area and static biomass maps from the ESA CCI project to derive woody dry matter burned as a biome-specific function of unburned biomass, and combine these with existing estimates of grassland and crop residue fuel consumption. Dry matter burned is converted to emissions using a database of available emission factors. Based on this methodology we present initial estimates of dry matter burned and trace gas emissions for the entire Amazon basin and the Brazilian Cerrado at monthly intervals and compare our estimates to those of the Global Fire Emissions Database (GFED4). This allows us to identify areas of uncertainty in current emission estimates and to present alternative workflows for generating improved regional products. These products will further be used to improve greenhouse gas budgets and to study effects on human health and ecosystem services.
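Bottom-up estimates of this kind follow the classic Seiler and Crutzen formulation; the fuel load, combustion completeness and emission factor below are illustrative round numbers, not values from this study:

```python
def fire_emissions_kg(burned_area_ha, fuel_load_t_ha,
                      combustion_completeness, emission_factor_g_kg):
    """Classic bottom-up fire emission estimate:

        E = A * B * CC * EF

    A  burned area (ha), B available fuel (t dry matter per ha),
    CC fraction of fuel actually combusted, EF species emission factor
    (g of species per kg of dry matter burned).
    """
    dm_burned_kg = burned_area_ha * fuel_load_t_ha * combustion_completeness * 1000.0
    return dm_burned_kg * emission_factor_g_kg / 1000.0

# e.g. 100 ha of grassland, 6 t/ha fuel, 90% combusted, CO EF of 63 g/kg
print(fire_emissions_kg(100, 6.0, 0.9, 63.0))   # -> 34020.0 kg of CO
```

The abstract's refinement is to replace the constant fuel load B with a biome-specific function of the CCI biomass maps; the multiplication chain stays the same.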
There is an urgent need to catalyze economic incentives towards the regeneration of the planet and to monitor changes in the ecological state of ecosystems in a reliable, approachable and scalable way.
Soil organic carbon (SOC) sequestration in regeneratively managed rangelands can provide much-needed contributions to the global carbon drawdown. However, methods to track changes in SOC over time often rely on either (1) intensive soil sampling, which often proves cost-prohibitive for land stewards, or (2) biogeochemical models that need local calibration (not always available) and typically lack uncertainty estimates. In an attempt to overcome these limitations and create a cost-effective approach to SOC stock estimation, we designed an open-source methodology which uses statistical models to uncover correlations between Sentinel-2 satellite imagery and ground truth data to estimate soil carbon at unsampled locations. Through this methodology, the calibration of an image becomes possible when a significant correlation is found between the spectral values of the image at the sampled locations and the SOC concentrations, within a few months around the sampling date. SOC% maps are then generated and converted into stocks using bulk density measurements. Finally, the changes in the carbon stocks over time are estimated from the difference between stocks from consecutive sampling rounds. The final creditable carbon change is the change in the SOC stocks minus the GHG emissions from the cattle for the crediting period. An uncertainty discount is also applied if uncertainty is higher than 20%. In addition to a quantification of the carbon stock changes over time, the methodology includes an estimation of several co-benefits (soil health, ecosystem health and animal welfare) that help expand the analysis beyond carbon alone.
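A minimal sketch of the calibration and stock-conversion steps, assuming a simple linear spectral model (the study also explores power regressions and Random Forests). The stock conversion SOC% × bulk density × depth gives t C/ha directly when BD is in g/cm³ and depth in cm:

```python
import numpy as np

def fit_soc_model(spectra, soc_percent):
    """Least-squares linear model linking a spectral predictor at the
    sampled locations to measured SOC%, as in the calibration step above."""
    X = np.column_stack([spectra, np.ones(len(spectra))])   # add intercept
    coef, *_ = np.linalg.lstsq(X, soc_percent, rcond=None)
    return coef                                             # (slope, intercept)

def soc_stock_t_ha(soc_percent, bulk_density_g_cm3, depth_cm):
    """Convert SOC concentration to a stock in t C/ha:
    stock = SOC% * BD (g/cm^3) * depth (cm); the unit factors cancel."""
    return soc_percent * bulk_density_g_cm3 * depth_cm

# example: calibrate on 4 sampled points, then map a stock at a new site
spectra = np.array([0.10, 0.20, 0.30, 0.40])    # e.g. a band ratio per sample
soc = np.array([1.2, 1.4, 1.6, 1.8])            # measured SOC%
slope, intercept = fit_soc_model(spectra, soc)
print(soc_stock_t_ha(slope * 0.25 + intercept, 1.3, 30))   # ~58.5 t C/ha
```

The change estimate is then simply the difference between such stocks from consecutive sampling rounds.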
The method was used to estimate the annual changes in the SOC stocks and co-benefits at three rangelands under prescribed grazing located in New South Wales, Australia. Correlations were explored through linear and power regressions, as well as machine learning algorithms (e.g. Random Forest regression). The results with the highest accuracy were used to issue carbon credits that were sold on the voluntary carbon markets, and the data were made public for others in the science community to test and explore other statistical or geospatial modeling techniques.
We conclude that there is potential to leverage satellite remote sensing technology to measure changes in carbon stock over time in combination with significantly fewer sample points for training and verification than required for conventional carbon stock mapping. Yet we recognize the need to assess the strengths and limitations of this nascent technology by testing it across a variety of different environmental conditions and at different spatial and temporal scales. We expect this methodology to be widely tested and upgraded by the scientific community. Our foremost goal is to inspire and guide efforts in the development of high quality methods that leverage the best of technology and scientific knowledge in service to reliable carbon accounting.
Intensifying wildfires in high-latitude forest and tundra ecosystems are a major source of greenhouse gas emissions, releasing carbon through direct combustion and long-term degradation of permafrost soils and peatlands. Several remotely sensed burned area and active fire products have been developed, yet these do not provide information about the ignitions, growth and size of individual fires. Such object-based fire data is urgently needed to disentangle different anthropogenic and bioclimatic drivers of fire ignition and spread. This knowledge is required to better understand contemporary arctic-boreal fire regimes and to constrain models that predict changes in future arctic-boreal fire regimes.
Here, we developed an object-based fire tracking system to map the evolution of arctic-boreal fires at a sub-daily scale. Our approach harnesses the improved spatial resolution of 375m Visible Infrared Imaging Radiometer Suite (VIIRS) active fire detections. The arctic-boreal fire atlas includes ignitions and daily perimeters of individual fires between 2012 and 2021, and may be complemented in the future with information on waterbodies, unburned islands, fuel types and fire severity within fire perimeters.
Development of Earth Observation tools with the capacity to verify extraction of CO2 from the atmosphere has high potential to help companies and societies develop carbon-free products, and to create incentives that make it possible for farmers to adopt new types of carbon farming practices more quickly.
The presented project has studied three large cereal-producing farms in southern Sweden and has applied a long-term farm perspective. Combined measurements from Sentinel data of soil organic carbon (SOC) in topsoils and vegetation indices (NDVI) have been used to investigate how the relationship between increased biomass production and possible improvement of soil organic carbon appears at field and farm level.
Recent research shows that changed farming practices have the potential to increase carbon storage in soils. This research points to long-term crop rotations, with two- to three-year leys and the cultivation of peas and other nitrogen-fixing crops in between, as favourable for increasing carbon storage. In this paper we present two farms that have changed their practices since 2017 and one farm that uses conventional best practice.
Sentinel-2 data are used, and we found it possible to obtain 20-30 cloud-free images per year over the studied farms; only 1-2 images per year show bare soil. This means that we have good data provisioning for NDVI measurements, while the SOC estimation must rely on very few temporal data points. Mean values of NDVI and SOC for each individual field are calculated and used to construct a soil organic carbon curve and a vegetation growth curve over 5 years.
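The NDVI ingredient of the analysis is standard: for Sentinel-2, NDVI is computed from bands B8 (NIR) and B4 (red) and averaged over each field's pixels. A minimal sketch:

```python
import numpy as np

def field_ndvi_mean(b8_nir, b4_red, field_mask):
    """Mean NDVI over one field's pixels from Sentinel-2 band arrays:
    NDVI = (NIR - red) / (NIR + red), with B8 as NIR and B4 as red."""
    ndvi = (b8_nir - b4_red) / (b8_nir + b4_red)
    return float(ndvi[field_mask].mean())

# toy scene: two field pixels (NDVI 0.5) and one pixel outside the field
b8 = np.array([0.75, 0.75, 0.25])        # NIR reflectance
b4 = np.array([0.25, 0.25, 0.25])        # red reflectance
mask = np.array([True, True, False])
print(field_ndvi_mean(b8, b4, mask))     # 0.5
```

Repeating this per field and per cloud-free acquisition yields the 5-year vegetation growth curves described above.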
Combining the indices produced from the satellite images with profiles of the differences in crop rotation for each field and with physical soil samples, multivariate analysis is used to investigate the robustness of the relationship between SOC and NDVI.
Detailed results are presented and further developments are proposed.
Observations of upper atmospheric neutral mass density and wind are critical to understand the coupling mechanisms between Earth’s ionosphere, thermosphere, and magnetosphere. The ongoing Swarm DISC (data, innovation, and science cluster) project TOLEOS (thermosphere observations from low-Earth orbiting satellites) aims to provide accelerometer-derived neutral mass density and crosswind data from CHAMP, GRACE, and GRACE-FO satellite missions covering a time span of approximately 22 years. The project uses state-of-the-art models, calibration techniques, and processing standards to improve the accuracy of these data products and ensure inter-mission consistency. Here, we present preliminary results of the quality of the data in comparison to the high accuracy drag temperature model DTM2020 and physics-based TIE-GCM (thermosphere ionosphere electrodynamics general circulation model), and CTIPe (coupled thermosphere ionosphere plasmasphere electrodynamics) models. We present, for the first time, a comparison of GRACE and GRACE-FO neutral mass densities with ESA’s Swarm mission during a few time periods where the orbital planes of the satellites align with each other. The study also provides a comparison of these new neutral mass densities and neutral winds across multiple periods with vastly different solar and geomagnetic activities.
Topside Ionosphere Radio Observations from multiple Low Earth Orbiting (LEO) missions (TIRO) is a project in ESA’s Swarm Data, Innovation, and Science Cluster (DISC) framework. TIRO extends the Swarm Total Electron Content (TEC) products with data from other LEO satellites and provides high-accuracy topside TEC from the dual-frequency GPS receivers onboard the CHAMP (2000-2010), GRACE (2002-2017), and GRACE Follow-On (since 2018) missions. Special emphasis is placed on ensuring maximum consistency between the operationally derived data sets for the Swarm and GOCE missions to allow for direct comparison. Moreover, GRACE and GRACE-FO are equipped with a K-Band inter-satellite Ranging System (KBR), which in turn is used to derive an estimate of the in-situ electron density. With all the satellites considered, altitude regions from as low as 250 km (GOCE) up to nearly 500 km (GRACE-FO) are covered.
The additional data ensure continuous electron density and TEC observations from multiple LEO satellites spanning a period of almost two full solar cycles. Thanks to the overlaps between the different satellite missions, the constellation aspect achieved by multi-mission coordination for monitoring ionospheric phenomena can be exploited. We will present both climatological studies of TEC and electron density and short-term variations that can only be accessed by constellations. In this way, we will illustrate the consistency and sensitivity of the newly derived data set.
Among Space Weather effects, the degradation of air traffic communications and satellite-based navigation systems is the most notable. For this reason, it is of utmost importance to understand the nature and origin of the ionospheric irregularities that are at the base of the observed communication outages. Here we focus on polar cap patches (PCPs), which constitute a special class of ionospheric irregularities observed at very high latitudes in the F region. To this purpose, we use the so-called PCP flag, a Swarm L2 product that allows locating PCPs. We relate the presence of PCPs to the values of the first- and second-order scaling exponents estimated from Swarm A electron density fluctuations and to the values of the Rate Of change of electron Density Index (RODI).
The results of our analysis, covering a time interval of approximately 3.5 years since the 1st of July 2014, show that values of RODI and of the first- and second-order scaling exponents corresponding to measurements taken inside PCPs, are clearly different from those corresponding to measurements outside PCPs. Moreover, the values of the first- and second-order scaling exponents suggest the turbulent nature of PCPs.
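RODI, used throughout this analysis, is essentially the standard deviation of the electron-density rate of change in a sliding window. A generic sketch follows; the 10 s window is an assumption for illustration, since the exact window is not specified in the abstract:

```python
import numpy as np

def rodi(ne, t_s, window_s=10.0):
    """Rate Of change of electron Density Index: standard deviation of the
    electron-density time derivative within a sliding window centred on
    each sample. `ne` in cm^-3 (or m^-3), `t_s` in seconds."""
    dndt = np.diff(ne) / np.diff(t_s)
    tc = t_s[:-1]                      # time stamps of the derivative samples
    return np.array([dndt[np.abs(tc - t0) <= window_s / 2].std() for t0 in tc])

# sanity check: a linear Ne ramp has constant dN/dt, hence RODI = 0
t = np.arange(0.0, 30.0, 0.5)          # 2 Hz sampling, as for Swarm
print(rodi(2.0 * t, t).max())          # 0.0
```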
This work is supported by Italian PNRA under contract PNRA18_00289-A “Space weather in Polar Ionosphere: the Role of Turbulence".
The Global Positioning System (GPS) Attitude, Positioning, and Profiling (GAP) instrument is one of eight components of the scientific instrument suite onboard the Swarm-E satellite (previously CASSIOPE/e-POP). The Swarm-E instrument suite was designed primarily to study the physical processes coupling the polar ionosphere to the solar wind and magnetosphere, and the ionospheric structure and dynamics associated with this coupling. The GAP instrument consists of three GPS antennas oriented towards spacecraft zenith and one antenna oriented in the anti-ram direction, along with associated GPS receivers. This configuration allows for both radio occultation and topside ionosphere measurements, which are collected at data rates of up to 100 Hz. The elliptical, polar orbit of Swarm-E and the high data rate of GAP allow for unique radio occultation and topside ionosphere observations, which are particularly useful for polar regions where much of the ionospheric structure and dynamic behaviour associated with SW-M-I-T coupling is not well observed or understood.
The presentation will discuss recent reprocessing of GAP data and ongoing research projects employing the GAP dataset. Eight years of GAP data (starting in September 2013) have been reprocessed, with calibrated line-of-sight total electron content (TEC) currently available on the e-POP data website (https://epop.phys.ucalgary.ca/data/). Higher-level products for topside vertical TEC and electron density profiles will be available in the near future. Topside TEC measurements of GAP are currently used to observe the topside electron content in the polar regions, including a statistical study of high-altitude (>1000 km) topside TEC enhancements. Concurrent observations by the Imaging and Rapid-scanning ion Mass spectrometer (IRM) of Swarm-E may provide insight into possible plasma upflow/downflow associated with these enhancements. Also ongoing are statistical studies of ionospheric plasma structures with spatial scales from hundreds of kilometers down to sub-kilometer. This includes analysis of topside irregularities using the zenith-oriented GAP receivers, as well as observation of the vertical structure of irregularities using the GAP occultation receiver. The climatology of observed irregularities, including links to solar wind and geomagnetic activity levels, will be discussed.
The electron temperature observations taken by the Swarm constellation often show spikes and/or time series characterized by fluctuations and very high values, well above the expected ionospheric background. Different “families” of such occurrences can be recognized: one family of spikes most likely constitutes an artifact due to a combination of instrumental and local environmental effects and it affects specific portions of orbits in particular conditions when the solar panels are illuminated by the Sun; another family of high temperature values is instead typical of high latitudes and nocturnal local times, often associated with very low values of the electron density. In this study, we aim at selecting and characterizing a number of events of this second family, looking also at other parameters measured by Swarm satellites at the same time, such as field-aligned currents density and local plasma velocity.
The Radio Receiver Instrument (RRI) on the Enhanced Polar Outflow Probe (e-POP; also known as Swarm-E) has been delivering high-quality and insightful measurements of natural and artificial radio emissions from low-Earth orbit since November 2013. RRI is a digital radio receiver which can operate between 10 Hz and 18 MHz, sampling at a rate of 62.5 kHz. To date, RRI has performed over a thousand measurements, the majority of which are divided between observations in the Very Low Frequency (3 – 30 kHz) and High Frequency (3 – 30 MHz) portions of the radio spectrum.
In this presentation, we will provide an update on RRI’s scientific activities. We will give a high-level overview of recently published results, measurement campaigns, and ongoing scientific efforts. In particular, we will discuss the methodology and outcomes of RRI’s HF eclipse observation campaign, which will take place in the weeks around the December 4, 2021, total solar eclipse in the southern hemisphere. In that campaign, RRI will target a ground-based HF transmitter located in Antarctica to study the effects of the eclipse on the coupled ionosphere-thermosphere system.
We will also discuss progress on a multi-year RRI data analysis project to study HF scintillation at high latitudes in the Canadian sector. The majority of RRI’s HF operations have been organized experiments between RRI and the Super Dual Auroral Radar Network (SuperDARN) systems located at Saskatoon, Rankin Inlet, and Clyde River (all in Canada). The project goals are to use RRI data from the SuperDARN experiments to specify the nature of HF scintillation in the region, diagnose scintillation caused by ionospheric irregularities and distinguish it from scintillation resulting from HF radio propagation effects, identify the geophysical phenomena responsible for HF scintillation, and ascertain the relationship (if any) between HF scintillation and the backscatter measured by the SuperDARN systems.
Characterising the ionospheric electron density (Ne) and temperature (Te) is fundamental to study the physical and dynamical properties of the ionospheric plasma. Indeed, in a collisional inhomogeneous plasma crossed by electric and magnetic fields, plasma constituents densities and temperatures significantly affect the plasma distribution function f(r,v) in the phase space (r,v). The Langmuir Probes on board the European Space Agency Swarm satellites, providing in-situ simultaneous observations of both Ne and Te at 2-Hz rate, offer the valuable opportunity to investigate some properties of the topside ionospheric plasma in a very detailed way, thanks to the wide dataset currently available covering different spatial, diurnal, seasonal, and solar activity conditions. In this study, Ne and Te observations collected by Swarm satellites in the period 2014 - 2021 are used to highlight the main statistical properties of their correlation. Pearson correlation coefficient values are calculated and binned as a function of the magnetic Quasi-Dipole latitude and Magnetic Local Time coordinates, for different geophysical conditions, and the corresponding results are shown as maps.
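The map construction described above can be sketched as follows; the 10° latitude and 3 h MLT bin widths are assumptions for illustration:

```python
import numpy as np

def binned_pearson(qd_lat, mlt, ne, te, lat_step=10.0, mlt_step=3.0):
    """Pearson correlation between Ne and Te per (QD latitude, MLT) bin.
    Bins with fewer than 3 samples are left as NaN."""
    lat_edges = np.arange(-90, 90 + lat_step, lat_step)
    mlt_edges = np.arange(0, 24 + mlt_step, mlt_step)
    r = np.full((len(lat_edges) - 1, len(mlt_edges) - 1), np.nan)
    for i in range(len(lat_edges) - 1):
        for j in range(len(mlt_edges) - 1):
            sel = ((qd_lat >= lat_edges[i]) & (qd_lat < lat_edges[i + 1]) &
                   (mlt >= mlt_edges[j]) & (mlt < mlt_edges[j + 1]))
            if sel.sum() > 2:
                r[i, j] = np.corrcoef(ne[sel], te[sel])[0, 1]
    return r
```

The resulting 2-D array can be plotted directly as a QD-latitude/MLT map, one map per geophysical-condition subset.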
Ionospheric irregularities, which are plasma density variations occurring on scale sizes ranging from a few meters to hundreds of kilometers, are among the natural factors that affect electromagnetic signals propagating through the ionosphere. For this reason, they can contribute to the malfunctioning of Global Navigation Satellite Systems (GNSS), hindering their accuracy and reliability.
In the past, many studies related to plasma density irregularities were carried out using data recorded by ground-based instruments or instruments installed on board rockets and satellites. Recently, interesting results have been obtained by analyzing measurements from the European Space Agency Swarm constellation. Measurements of magnetic field and electron density, along the orbits of Swarm satellites, have been used to address the scaling properties of their fluctuations and to unveil some interesting features of ionospheric dynamics. These studies have demonstrated the existence of a class of plasma density irregularities characterized by both fluctuations and an energy spectrum supporting the role of turbulent processes at their origin. In addition, these studies also showed that this class is always associated with very high values of the Rate Of change of electron Density Index (RODI), which is a proxy of the fluctuations intensity characterizing the ionospheric medium. This implies that, among all the possible ionospheric irregularities, those due to turbulent processes seem to be always accompanied by plasma density variations stronger than those generated by other mechanisms.
Here, we use data recorded on board one of the three satellites of the Swarm constellation (namely, Swarm A) from 1st April 2014 to 31st March 2018 to assess the possible dependence of Global Positioning System (GPS) signal loss of lock on the presence of this specific kind of ionospheric irregularity, and thereby to shed some light on the origin of one of the largest Space Weather effects on GNSS. Using measurements recorded by the Swarm A Langmuir probes and GPS Precise Orbit Determination antennas, we study the scaling features of the electron density fluctuations through structure function analysis simultaneously with the occurrence of loss of lock events. We find that plasma density irregularities characterized by turbulent features and extremely high values of RODI can lead to GPS loss of lock events. This result is extremely significant because it could pave the way for a possible prediction of such events, with a consequent mitigation of their adverse effects.
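The structure-function analysis mentioned above can be sketched generically: the q-th order structure function is S_q(τ) = ⟨|Ne(t+τ) − Ne(t)|^q⟩, and the scaling exponents come from a log-log fit of S_q against τ. This is a generic sketch, not the exact Swarm processing chain:

```python
import numpy as np

def scaling_exponents(ne, taus, qs=(1, 2)):
    """First- and second-order structure-function scaling exponents of an
    Ne time series: S_q(tau) = <|Ne(t+tau) - Ne(t)|^q> ~ tau^gamma_q,
    with gamma_q estimated by a log-log linear fit over the lags `taus`
    (given in samples)."""
    exps = {}
    for q in qs:
        s = [np.mean(np.abs(ne[t:] - ne[:-t]) ** q) for t in taus]
        exps[q] = np.polyfit(np.log(taus), np.log(s), 1)[0]
    return exps

# sanity check: a linear ramp has |dNe| = tau exactly, so gamma_q = q
print(scaling_exponents(np.arange(1000.0), [1, 2, 4, 8, 16]))
```

For turbulent intervals the exponents deviate from this trivial case and, together with RODI, separate turbulence-driven irregularities from other fluctuations.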
Ionospheric plasma dynamics at high latitude can play a key role in the understanding of ionosphere-magnetosphere coupling processes. Whereas the statistical patterns of the main ionospheric current systems and magnetic field-aligned currents have been widely studied, other current systems have not yet been established in detail. This is the case for the pressure-gradient current, which can develop in the F region of the ionosphere. This current, arising from plasma pressure variations, is among the weaker ionospheric current systems. Indeed, due to the coupling between the geomagnetic field and the plasma pressure gradient, electrons and ions drift in opposite directions, perpendicular to the ambient magnetic field and the pressure gradient, generating an electric current whose intensity is of the order of a few nA/m2. This current is also called diamagnetic, because it produces a magnetic field oriented oppositely to the ambient magnetic field, causing its reduction inside the plasma. The magnetic reduction can be revealed in measurements made by low-Earth-orbiting satellites when they pass through ionospheric plasma regions where rapid changes in density occur. However, identifying the diamagnetic current from its magnetic signature is not easy, because the generated magnetic perturbation is weak, about 10,000 times smaller than the ambient geomagnetic field. This is why studies investigating this current are relatively recent, becoming possible only since high-accuracy satellite magnetic field measurements have been available. Due to its origin, it can be revealed at both low and high latitudes, and more generally in all those regions where plasma pressure gradients are greatest. In the recent past, most studies have focused on low latitudes, in the equatorial belt, where this phenomenon has been extensively studied.
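The quoted few-nA/m² magnitude follows directly from the standard diamagnetic current expression j = (B × ∇p)/B². The F-region numbers below are illustrative order-of-magnitude values, not results from this study:

```python
import numpy as np

def diamagnetic_current(B, grad_p):
    """Pressure-gradient (diamagnetic) current density, the standard
    expression j = (B x grad_p) / |B|^2. SI units: B in T, grad_p in Pa/m,
    result in A/m^2."""
    B = np.asarray(B, dtype=float)
    grad_p = np.asarray(grad_p, dtype=float)
    return np.cross(B, grad_p) / np.dot(B, B)

# assumed numbers: |B| ~ 5e4 nT = 5e-5 T; pressure gradient from
# p = n*k*T with n ~ 1e11 m^-3, T ~ 3000 K, varying over ~50 km
grad = 1e11 * 1.380649e-23 * 3000.0 / 50e3            # Pa/m
j = diamagnetic_current([0.0, 0.0, 5e-5], [grad, 0.0, 0.0])
print(np.linalg.norm(j))    # ~1.7e-9 A/m^2, i.e. a few nA/m^2
```

This confirms the nA/m² scale stated above, and also why its magnetic signature is so small compared with the ambient field.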
Conversely, only a few papers have focused on high latitudes, where these currents, although weak, may pose an additional challenge since they seem to appear preferentially at the same geographic locations.
Here, using magnetic field, plasma density and electron temperature measurements recorded on board the ESA Swarm constellation from April 2014 to March 2018, we reconstruct the flow pattern of the pressure-gradient current in the high-latitude ionosphere in both hemispheres, and investigate its dependence on geomagnetic activity and on seasonal and solar forcing drivers. The obtained results can be used to correct magnetic field measurements for the diamagnetic current effect and to improve modern magnetic field models, as well as to understand the impact of ionospheric irregularities on ionospheric dynamics at small scale sizes of a few tens of kilometers.
Joule heating in the thermosphere occurs when the electric fields transformed into the local reference frame of the neutral gas are non-zero. This is also the condition for having electric currents according to the well-known Ohm's law for the ionosphere. A prominent cause of such current-driving electric fields is magnetosphere-ionosphere coupling at high latitudes. The atmospheric dynamo is also known to drive currents. For example, at mid-latitudes the Sq currents dominate in geomagnetically quiet periods. Sq is driven by tidal winds, so mechanical energy is converted to electricity and ultimately to heat, because the ionosphere is a dissipative medium. Gravity (buoyancy) waves also involve neutral motions and can constitute a dynamo. The electric currents arising from the dynamo in turn affect the neutral dynamics via Lorentz (jxB) forcing or, equivalently, ion drag.
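For reference, the ionospheric Ohm's law invoked here takes the standard form (conventional notation, not quoted verbatim from this abstract):

```latex
\mathbf{J} \;=\; \sigma_{\parallel}\,\mathbf{E}'_{\parallel}
\;+\; \sigma_P\,\mathbf{E}'_{\perp}
\;+\; \sigma_H\,\hat{\mathbf{b}}\times\mathbf{E}'_{\perp},
\qquad
\mathbf{E}' \;=\; \mathbf{E} + \mathbf{u}\times\mathbf{B},
```

where σ∥, σP and σH are the parallel, Pedersen and Hall conductivities, u is the neutral wind, and b̂ is the unit vector along B. The Joule heating rate is J·E′ = σP|E′⊥|² + σ∥|E′∥|² (the Hall term does not dissipate), which vanishes exactly when the field in the neutral frame E′ is zero, as stated above.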
To obtain a general description of the coupling between the neutrals and the ionospheric plasma, we present an atmospheric dynamo equation in which the well-known Pedersen and Hall conductivities appear. The derivation is based on a paper by Parker (1996). A dynamo effect occurs when ∇x(uxB)≠0, where u is the neutral wind and B the magnetic field. Because the conductivity parallel to B is orders of magnitude higher than the Pedersen and Hall conductivities, the condition is approximately that if uxB is not constant along magnetic field lines, then dynamo electric fields drive currents. Since gravity waves are a result of non-electrodynamic forces, their uxB generally varies along magnetic field lines, and dynamo effects are produced when they propagate into the dynamo regions of the lower thermosphere.
We estimate that the tidal Sq dynamo globally dissipates a power of roughly 2 GW quasi-permanently. The electrodynamic dissipation by medium- and small-scale gravity waves propagating from the mesosphere into the lower thermosphere could also be a significant source of heat. Unlike the current systems coupling the magnetosphere and ionosphere, which are observed by satellites like Swarm, the currents of a gravity-wave dynamo are confined to the lower thermosphere and can only be observed with a very low orbiting satellite or sounding rockets.
Small-scale ionospheric structures are known to cause rapid fluctuations of the phase and amplitude of trans-ionospheric radio signals. For example, they can significantly degrade the performance of Global Navigation Satellite System (GNSS) services, and under severe ionospheric conditions these services can be totally unavailable. It is therefore of practical importance to forecast the severity of ionospheric irregularities. In the project Forecasting Space Weather in the Arctic Region (FORSWAR), we are developing a new advanced space weather forecasting model for satellite-based Positioning, Navigation and Timing (PNT) users in the Arctic, with a focus on the Greenland area. The new model is based on an optical flow image processing technique (Monte-Moreno et al., 2021), and it is able to predict space weather conditions in terms of the rate of change of total electron content index (ROTI) at horizons of 15 minutes to 6 hours. The outputs of the model are validated through various GNSS positioning models (e.g., Single Point Positioning, Precise Point Positioning, and Real-Time Kinematic) as well as the instantaneous ionospheric perturbation indices (the Gradient Ionosphere index and the Sudden Ionospheric Disturbance index). In addition, the results are cross-compared with in-situ observations from the Swarm satellites. The validation results indicate good performance of the model in predicting polar ionospheric irregularities. By incorporating real-time GNSS data, this model is suitable for implementing real-time space weather applications in the polar region, and it can contribute to increased resilience to adverse space weather effects for PNT users.
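ROTI, the quantity being forecast here, is conventionally defined as the standard deviation of the TEC rate over a short sliding window. A generic sketch, using the common literature values of 30 s sampling and a 5 min window (not necessarily FORSWAR's exact settings):

```python
import numpy as np

def roti(stec, dt_s=30.0, window=10):
    """Rate Of change of TEC Index: standard deviation of the TEC rate
    (ROT, in TECU/min) over a sliding window; with 30 s sampling,
    window=10 corresponds to the usual 5 min."""
    rot = np.diff(stec) / (dt_s / 60.0)          # TECU per minute
    return np.array([rot[i:i + window].std()
                     for i in range(len(rot) - window + 1)])

# sanity check: a steadily increasing TEC has constant ROT, hence ROTI = 0
print(roti(np.arange(30.0) * 0.5).max())         # 0.0
```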
Reference:
Monte-Moreno, E., Hernandez-Pajares, M., Yang, H., Rigo, A. G., Jin, Y., Høeg, P., Miloch, W. J., Wielgosz, P., Jarmołowski, W., Paziewski, J., Milanowska, B., Hoque, M., & Orus-Perez, R. (2021). Method for Forecasting Ionospheric Electron Content Fluctuations Based on the Optical Flow Algorithm. IEEE Transactions on Geoscience and Remote Sensing. doi:10.1109/TGRS.2021.3126888.
The ionosphere is a highly dynamical system that shows a complex behaviour due to its nonlinear coupling with the solar wind-magnetosphere system from above and with the lower atmosphere from below. Such complexity of the ionospheric plasma manifests itself over a widely varying range of spatial and temporal scales. We investigate how the different scales of the in-situ electron density recorded at the altitudes of the Swarm constellation behave under various conditions of the geospace, with the goal of determining whether the topside ionosphere reacts to an external perturbation as a whole or by activating some peculiar modes.
In this regard, the present study aims at quantifying the spatio-temporal variability of the topside ionosphere by leveraging the Fast Iterative Filtering (FIF) technique. FIF provides a very fine time-frequency representation, as it decomposes nonstationary, nonlinear signals, such as those provided by the Langmuir probes onboard Swarm, into oscillating modes, called intrinsic mode components or functions (IMCs or IMFs), each characterized by a specific frequency.
The instantaneous time-frequency representation is provided through the so-called “IMFogram”, which illustrates the time development of the multi-scale processes. These IMFograms, similarly to spectrograms, have the potential to reveal in greater detail the scale sizes that intensify during the various phases of geomagnetic storms, as reported for the recent 2015 St. Patrick’s Day storm. A further aim of the study is to illustrate how the analysis based on FIF and IMFograms outperforms similar studies conducted via Fourier and discrete wavelet transforms by improving the scale resolution.
With this work, we also aim at supporting the development of advanced models of ionospheric plasma variability based on Swarm datasets.
This work is performed in the framework of the Swarm Variability of Ionospheric Plasma (Swarm-VIP) project, funded by ESA in the “Swarm+4D-Ionosphere” framework (ESA Contract No. 4000130562/20/I-DT).
The ionosphere is a dynamical system exhibiting nonlinear couplings with the other “spheres” of the geospace environment. This nonlinearity also manifests through the non-trivial, scale-dependent time delays in the cause-effect chain characterizing the Solar Wind-Magnetosphere-Ionosphere coupling.
The present study uses the Intrinsic Mode Cross Correlation (IMXC), a novel scale-wise signal lag measurement. The method's performance is evaluated first on known artificial signals and then applied to ionospheric data, including in-situ electron density from the Swarm constellation. The IMXC relies on the nonlinear, nonstationary signal decomposition provided by the novel Multivariate Fast Iterative Filtering (MvFIF) technique, which identifies the common scales embedded in the signals. The lags are then obtained scale-wise, enabling the identification of the lag dependence on the spatio-temporal scales involved for the artificial data set (even in the presence of high levels of noise) and their estimation in a real-life signal. The lags obtained can separate the scales on which coupling inherently occurs, according to physical reasoning, from scales related only to internal fluctuations. This can pave the way for future uses of the technique in contexts in which the causation chain is hidden in a complex, multiscale coupling of the investigated features.
As the first real-life scenario assuming a cause-effect relationship, we use the closely separated measurements of the European Space Agency's Swarm Alpha (A) and Charlie (C) satellites, whose identical Langmuir probe instruments sample the ionospheric plasma density in the topside ionosphere with a latitudinal orbital separation corresponding to a lag of about 8.8 s between the two satellites. Additional applications to ionospheric science are also reported to demonstrate the usability of the technique in the space weather context.
This work is performed within the Swarm Variability of Ionospheric Plasma (Swarm-VIP) project, funded by ESA in the “Swarm+4D-Ionosphere” framework (ESA Contract No. 4000130562/20/I-DT).
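The core idea behind a scale-wise lag estimate, once two signals have been decomposed into matching scale components, is to pick the lag that maximizes their cross-correlation. The sketch below illustrates only that final step on a toy sinusoidal component; the MvFIF decomposition itself is not reproduced here:

```python
import math

def xcorr_lag(x, y, max_lag):
    """Return the lag (in samples) maximizing the cross-correlation of x and y."""
    best_lag, best_c = 0, -float("inf")
    n = len(x)
    for lag in range(-max_lag, max_lag + 1):
        c = sum(x[i] * y[i + lag] for i in range(n) if 0 <= i + lag < n)
        if c > best_c:
            best_c, best_lag = c, lag
    return best_lag

# toy "scale component": y is x delayed by 7 samples (period-40 sinusoid)
x = [math.sin(2 * math.pi * i / 40) for i in range(400)]
y = [0.0] * 7 + x[:-7]
print(xcorr_lag(x, y, max_lag=20))
```

Swapping the argument order flips the sign of the recovered lag, which is how a cause-effect direction would show up in paired Swarm A/C series.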
As an enhancement of the repertoire of operational services for the ESA Space Safety Programme, we are developing a novel forecasting model called SODA (Satellite Orbit DecAy) for the Expert Service Centre of Ionospheric Weather. The service development is carried out in a joint project between the University of Graz and the Graz University of Technology and deals with the prediction of thermospheric variations and their subsequent effects on low Earth orbiting satellites (LEOs).
Geomagnetic storms occur rather consistently in accordance with the 11-year solar cycle and are capable of triggering atmospheric disturbances that subsequently influence the trajectories of Earth-orbiting satellites. The strongest disturbances of the space environment are primarily caused by coronal mass ejections (CMEs). To obtain information about the magnitude of the response of the Earth's upper atmosphere to such solar events, we calculate thermospheric densities based on scientific data such as kinematic orbit information and accelerometer measurements. Depending on the degree of the density variation during a CME, it is possible to estimate the resulting satellite orbit decay. The key element of SODA is a forecasting model to predict the expected impact of solar events on satellite missions like Swarm or GRACE-FO. Even though these missions orbit at the upper boundary of the Earth's thermosphere, severe CMEs may trigger orbit decays of the order of several tens of meters. For LEO satellites at lower altitudes, the effect of a single event may even exceed the 100 m level. The forecasting tool is based on a joint analysis and evaluation of solar wind plasma and magnetic field measurements at L1 from the ACE and DSCOVR satellites, as well as thermospheric neutral mass densities. By taking into account the varying propagation speeds of CMEs and the response time of the thermosphere, the lead time for the start of the atmospheric perturbation will be up to several hours. In this contribution we present the latest scientific developments within SODA and show the current status of the online presentation of the envisaged forecasting tool.
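The order of magnitude of such orbit decays can be checked with the standard drag approximation for a circular orbit, delta_a per revolution of roughly 2*pi*Cd*(A/m)*rho*a^2. The densities and the Swarm-like ballistic coefficient below are illustrative assumptions, not SODA inputs:

```python
import math

def decay_per_rev(rho, a, cd=2.2, area_to_mass=0.01):
    """Semi-major-axis decay per revolution for a circular orbit (m).

    Standard drag approximation: delta_a ~ 2*pi * Cd * (A/m) * rho * a**2.
    cd and area_to_mass are illustrative Swarm-like values, not mission data.
    """
    return 2 * math.pi * cd * area_to_mass * rho * a ** 2

a = 6878e3                           # ~500 km altitude
quiet = decay_per_rev(5e-13, a)      # quiet-time neutral density (illustrative)
storm = decay_per_rev(1.5e-12, a)    # CME-enhanced density, x3 (illustrative)
revs_per_day = 15.3
print(f"quiet: {quiet * revs_per_day:.0f} m/day, storm: {storm * revs_per_day:.0f} m/day")
```

With these assumed numbers the quiet-time decay comes out at a few tens of meters per day, consistent with the scale quoted in the abstract.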
Auroral particle precipitation potentially plays a major role in ionospheric plasma structuring. The impact of particle precipitation on plasma structuring is investigated using multi-point measurements from scintillation receivers and all-sky imagers on Svalbard. This provides the unique possibility of studying auroral dynamics in both their spatial and temporal evolution.
We consider three substorm events to investigate how auroral forms impact trans-ionospheric radio waves. We observe that elevated phase scintillation indices correspond best to the spatial and temporal evolution of auroral forms when both are projected to the estimated green emission altitude (150 km). This suggests that plasma structuring in the ionospheric E-region is an important driver of phase scintillations.
We demonstrate that the plasma structuring affecting GNSS signals is largest at the edges of the auroral forms. Studying an arc in detail, only the poleward edges are associated with elevated phase scintillation indices, whereas for an auroral spiral and a band the structuring is attributed to all boundaries. A time delay (1-2 min) is observed between the temporal evolution of the aurora (e.g., commencement and fading of auroral activity) and elevated phase scintillation index measurements. This can be due to the intense influx of particles, which increases the plasma density and causes recombination to carry on longer, which may lead to a memory effect. The irregularities and instabilities causing the elevated phase scintillation indices, especially in the E-region, may be due to, e.g., field-aligned currents, the Kelvin-Helmholtz instability or the Farley-Buneman instability. The auroral fine structure and forms may be controlled by kinetic instabilities, such as Alfvén waves and acoustic waves. The nature of the effects is studied using the ionosphere-free linear combination to understand whether the effect is refractive or diffractive. This study can contribute to the development of models of ionospheric plasma irregularities and related space weather effects in the polar regions.
ESA's Swarm satellites (launched in November 2013) are equipped with accelerometers and Langmuir probes, which provide the opportunity to observe thermosphere and ionosphere disturbances simultaneously. This unique feature is explored here through a novel ensemble Kalman filter (EnKF)-based calibration and data assimilation (C/DA) technique to tune empirical or physics-based models and improve their nowcasting and forecasting skills. The advantage of C/DA is that it not only updates the model states but also calibrates the key model parameters, which can then be applied to estimate global, multi-level thermospheric and ionospheric variables. The spatial coverage of these estimates is therefore not limited to the satellites' ground-track coverage. In this study, the C/DA technique is applied to NRLMSISE-00, an empirical model of the thermosphere, using thermospheric neutral density (TND) estimates derived from the Swarm satellites; the re-calibrated model is called C/DA-NRLMSISE-00. Then, to capture the coupling (ion-neutral interactions) between the thermosphere and ionosphere, the coupled physics-based TIE-GCM model is run with the thermospheric constituents, such as O2, O1 and He, and the neutral temperature in its primary history files replaced by those from C/DA-NRLMSISE-00. Swarm-derived electron densities are then assimilated into TIE-GCM to make use of directly observed ionospheric variables. To determine the impact of the proposed method on forecasting thermosphere-ionosphere variables, it is essential to validate whether TIE-GCM after data assimilation, named here 'TIE-GCM-DA', improves the thermosphere-ionosphere parameters that were not used in the C/DA of NRLMSISE-00 and the DA of TIE-GCM.
Thus, the TND estimates from TIE-GCM-DA are compared against GRACE and GRACE-FO measurements, and the estimates of electron density and total electron content are evaluated against independent radio occultation and GNSS measurements. The numerical results indicate that the C/DA is indeed effective for short-term global forecasting and can be explored in operational studies.
The polar ionosphere is littered with plasma density structures on scales from hundreds of kilometres down to several meters. It is believed that this structuring is primarily driven by energy input from the magnetosphere as a result of the large-scale magnetosphere/solar wind coupling. The study of small-scale (sub-kilometre) plasma density structures in the ionosphere is important because they can severely impact the quality of trans-ionospheric radio waves such as those used in global navigation satellite systems (GNSS). Here we present results from the multi-needle Langmuir probe (m-NLP) system. Typically, Langmuir probes operate by sweeping through a range of bias voltages in order to derive the plasma density, a process that takes time and hence limits the temporal resolution to a few Hz. However, the m-NLP operates with fixed bias voltages, such that the plasma density can be sampled at several kHz, providing a spatial resolution finer than the ion gyroradius at orbital speeds. In particular, we present results on sub-kilometre plasma density structuring in the polar cusp region and its relation to GNSS signal scintillations. We study this connection through case studies and statistics, and we also employ models based on the idea of the ionosphere as a phase screen. We show that the in-situ plasma density measurements can be related to scintillation measurements on the ground.
The Swarm satellite mission is actively used for various studies of the ionosphere, focusing on aspects such as the electric and magnetic fields or plasma temperature, structuring and irregularities. We use a global product based on Swarm satellite measurements that characterizes ionospheric irregularities and fluctuations. The IPIR (Ionospheric Plasma IRregularities) product provides characteristics of plasma density structures in the ionosphere and of plasma irregularities in terms of their amplitudes, gradients and spatial scales, and assigns them to geomagnetic regions. Ionospheric irregularities and fluctuations often increase the errors in position, velocity and time determination based on Global Navigation Satellite Systems (GNSS), whose signals pass through the ionosphere. IPIR therefore also provides an indication, in the form of a numerical index, of their severity for the integrity of trans-ionospheric radio signals and hence for the accuracy of GNSS precise positioning.
In this study, we compare two datasets: from the Swarm satellites (with 1-second resolution) and from ground-based scintillation receivers (with 1-minute resolution). First, we need to find the time intervals when the Swarm satellites pass over the field of view of a ground-based GPS receiver. To calculate these passes, a geometry with an elevation angle of 30° above the receiver was used. Second, to compare the characteristics of electron density fluctuations from Swarm with ground-based scintillation data, we performed an azimuthal selection of the GNSS data according to the Swarm satellite pass: only those GNSS satellites are taken into account that are near the position of the Swarm satellite (azimuth ±10°). We provide validations of the IPIR product against ground-based measurements, focusing on GPS TEC and scintillation data in low- and high-latitude regions in different longitudinal sectors. We calculate the median, mean, maximum and standard deviation of the parameter values for both datasets at each conjunction point. We observe a weak trend of stronger scintillations with an increasing IPIR index, which represents a product of the amplitudes and temporal variations of the plasma densities.
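The azimuthal selection and per-conjunction statistics described above can be sketched as follows; the azimuth wrapping and the example numbers are illustrative, not the project's actual processing chain:

```python
import statistics

def azimuth_match(swarm_az, gnss_az, half_width=10.0):
    """True if the GNSS link azimuth lies within +/-10 deg of the Swarm azimuth."""
    diff = abs((gnss_az - swarm_az + 180.0) % 360.0 - 180.0)  # wrap to [0, 180]
    return diff <= half_width

def conjunction_stats(values):
    """Median, mean, maximum and standard deviation for one conjunction."""
    return {
        "median": statistics.median(values),
        "mean": statistics.mean(values),
        "max": max(values),
        "std": statistics.pstdev(values),
    }

# Swarm azimuth 355 deg: links at 350 and 5 deg pass, 30 deg is rejected
print([azimuth_match(355.0, az) for az in (350.0, 5.0, 30.0)])
print(conjunction_stats([0.1, 0.2, 0.4, 0.3]))
```

The modular wrap keeps the comparison correct across the 0°/360° boundary, which matters for near-northward passes.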
In this presentation, European MUF(3000) nowcasting and forecasting products in the high-frequency communications (HF COM) domain developed by INGV are presented. They take the form of maps over Europe of MUF(3000) and of its ratio with respect to a proper background. The maps have different extents and spatial resolutions and are designed to immediately detect regions of post-storm depression for Space Weather (SW) applications.
The nowcasting products are based on real-time maps updated every quarter of an hour and covering the European sector (12°W-45°E; 32°N-72°N). The mapping procedure makes use of the available real-time ionosonde measurements at different locations and of the ordinary kriging technique for spatial interpolation, in order to update IRI-CCIR-based background maps on a regular grid with fine spatial resolution. The forecasting product consists of real-time maps updated every hour and covering a geographic area extending over 20°N-80°N; 40°W-100°E. The mapping procedure makes use of both historical and real-time hourly foF2 observations, of forecasted 3-hour ap indices from NOAA (National Oceanic and Atmospheric Administration, USA) as the driving input parameter, and of effective monthly ionospheric T indices from BOM (Bureau of Meteorology, Australia) to specify the background level of the available real-time ionosonde measurements at different locations. Local prediction models have been created for each European ionospheric station, and the results are extended over the whole geographic area by applying a multiquadric technique.
Several tests were conducted comparing model predictions and actual observations to evaluate the performance of the methods during some Space Weather events, relevant to users. The results obtained are summarized here and briefly discussed.
The HF products are part of the INGV contribution to the SWESNET (Space Weather Service Network) project initiated by ESA and have been provided operationally since November 2019 to ICAO in the frame of the PECASUS consortium activities for the mitigation of SW effects on civil aviation.
The space geodetic techniques operating in the radio frequency range, such as Very Long Baseline Interferometry (VLBI) and Global Navigation Satellite Systems (GNSS), are sensitive to the Total Electron Content (TEC) of the ionosphere, and their precision depends on the quality of the estimated ionospheric values. For accurate positioning, GNSS requires a good prediction of the TEC values. Inaccurate estimation of the TEC in VLBI likewise degrades the accuracy of the estimated geodetic parameters, such as ground-based antenna coordinates and Earth Orientation Parameters. There are a number of global TEC models based on GNSS observations, designed to describe the global conditions of the ionosphere in terms of TEC. We have conducted a comparative study of two selected global TEC maps against results from observations of the VLBI Global Observing System (VGOS). The VGOS network has been established recently and is continuously growing. The differential TEC (dTEC) estimated from VGOS data has high precision, with a formal error of about 0.01-0.2 TECU. It can be used to evaluate the global TEC maps, as well as an additional data source for their further improvement.
The precision of the dTEC estimated with VGOS is considerably improved compared to traditional geodetic dual-band VLBI observations. We have compared the VGOS ionosphere product with dTEC calculated from global ionosphere TEC maps. For the analysis, we selected two global TEC models, the CODE GIM and the Neustrelitz TEC Model Global (NTCM-GL). The comparison was performed for the VGOS observations made in 2019-2020. We found good agreement between the VGOS dTEC and the dTEC obtained from global TEC maps, although an offset between the two datasets is detected. The comparison also reveals weaknesses of the global TEC models in some locations, such as remote islands, where the number and distribution of ground-based GNSS antennas are limited. The VGOS data can thus be considered an additional information source and used for the further improvement of global TEC models.
Ground-based indices, such as Dst, ap and AE, have been used for decades to describe the interplay of the terrestrial magnetosphere with the solar wind and to provide quantifiable indications of the state of geomagnetic activity in general. These indices have traditionally been derived from ground-based observations from magnetometer stations around the Earth. In the last seven years, though, the highly successful Swarm satellite mission has provided the scientific community with an abundance of high-quality magnetic measurements at Low Earth Orbit (LEO), which can be used to produce the space-based counterparts of these indices, such as the Swarm-Dst, Swarm-ap and Swarm-AE indices. In this work, we present the first results of this endeavour, with comparisons against the traditionally used parameters. We speculate on the possible usefulness of these Swarm-based products for a more accurate monitoring of the dynamics of the magnetosphere and thus for a better diagnosis of space weather conditions.
Lightning whistler trains consisting of more than twenty individual lightning whistlers were recorded at the Kannuslehto ground station in Finland (67.74°N, 26.27°E; L = 5.5) on 7 January 2017 from 7:35 to 8:35 UT. Shorter lightning whistler trains appeared from 5:44 to 6:27 UT. Using World Wide Lightning Location Network (WWLLN) data, we have identified the causative lightning strokes for the observed whistler trains and found that they occurred during a winter thunderstorm accompanying the arrival of the cyclone Axel at the Norwegian coast. The corresponding very low frequency (VLF) sferics were recorded at the Kannuslehto station in Finland, and also at the LSBB (Laboratoire Souterrain à Bas Bruit) receiving station in Southern France.
The lightning whistlers were trapped in field-aligned density ducts, and each whistler bounced for 2-4 minutes during the interval from 7:35 to 8:35 UT, when the energy of the causative lightning strokes was 168 kJ on average. The whistler trains observed from 5:44 to 6:37 UT were shorter, lasting 30-90 s, and were triggered by weaker strokes with an average energy of 39 kJ. We use the whistler inversion method to obtain plasmaspheric electron densities and McIlwain's L parameter from the measured whistler data. We have found that the duct was composed of many paths spread from L = 3.4 to 4.4, corresponding to a latitudinal range of 60°-65°N. Strong lightning strokes occurred between 62.6° and 63.4°N, well within the latitudinal range of the duct. We conclude that observations of such long whistler echo trains are only possible when a long-lasting duct is formed and, at the same time, a thunderstorm below the ionospheric end of the duct produces very energetic lightning. These strokes then deliver enough energy to the magnetosphere to keep the whistlers bouncing in the duct for a long time.
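Whistler inversion methods build on the frequency-dependent travel time of the whistler mode; to lowest order this is the Eckersley law t(f) = t0 + D/sqrt(f), with the dispersion D tied to the electron density along the duct. A sketch of fitting D from a time-frequency trace (the numbers are synthetic, not the measured Kannuslehto data):

```python
import math

def fit_dispersion(freqs_hz, arrival_s):
    """Least-squares fit of the Eckersley law t = t0 + D / sqrt(f).

    Returns (t0, D); D (s*sqrt(Hz)) is the quantity that whistler inversion
    relates to the plasmaspheric electron density along the duct.
    """
    x = [1.0 / math.sqrt(f) for f in freqs_hz]
    n = len(x)
    mx = sum(x) / n
    mt = sum(arrival_s) / n
    d = sum((xi - mx) * (ti - mt) for xi, ti in zip(x, arrival_s)) / \
        sum((xi - mx) ** 2 for xi in x)
    return mt - d * mx, d

# synthetic whistler trace: t0 = 0.1 s, D = 80 s*sqrt(Hz)
freqs = [2000.0, 4000.0, 6000.0, 8000.0]
times = [0.1 + 80.0 / math.sqrt(f) for f in freqs]
t0, D = fit_dispersion(freqs, times)
print(round(t0, 3), round(D, 1))
```

Full inversions use higher-order dispersion relations to recover density and L value; the linear fit above only illustrates the leading term.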
ESA's SMOS mission was originally conceived to use L-band interferometry to map the salinity of the oceans and the moisture of the soil. However, not only the Earth but also the Sun appears in the wide field of view of its 69 1.4 GHz receivers, making it one of the main sources of noise in the image. Here we show how, with the proper data processing, it is possible to use the solar noise affecting SMOS observations to monitor the Sun for geoeffective coronal mass ejections (CMEs) and for solar radio bursts that could affect systems based on L-band radio signals.
We have found that SMOS detects different types of solar signals, including the progress of the 11-year activity cycle, the thermal emission from solar flares, and solar radio bursts. Furthermore, we note that SMOS detects radio bursts only during flares associated with CMEs and that the size of the 1.4 GHz radio bursts correlates well with the speed, angular width and kinetic energy of these CMEs. Together with the low-resolution solar images that SMOS is able to compute, this makes it possible to perform an early assessment of both the importance and the direction of the associated CMEs.
Moreover, systems based on radio frequencies are known to be affected by the kind of solar radio bursts that SMOS can detect. Yet despite the importance of nowcasting these radio bursts as a source of radio interference, near real-time observations are still not easily available. The situation is not much better for post-event analyses, as solar radio observations usually do not include polarization. SMOS can be of use here as well, since it has been operating with full polarization since 2010 and provides data in near real-time. It can therefore monitor interference affecting navigation satellites (GPS, Galileo, GLONASS...), L-band air traffic control radars and radio communications.
The data in this study come from the SWADO (Space Weather for Arctic Defence Operations) network, consisting of seven GISTM (GNSS Ionospheric Scintillation and TEC Monitor) stations, which can utilize Galileo, GPS, GLONASS and BeiDou. The stations are distributed along the coast of Greenland in Thule, Upernavik, Kangerlussuaq, Qaqortoq, Kulusuk, Scoresbysund and Station Nord. This creates a chain of receivers along the west coast of Greenland that follows one geomagnetic longitude. The stations on the east coast are placed to increase the data coverage over Greenland and to lie at geomagnetic latitudes corresponding to stations on the west coast. Owing to this design, the SWADO network can be used to investigate the evolution of ionospheric GNSS scintillation events in time and space.
The primary type of scintillation in the Arctic is phase scintillation, and the σ_ϕ index is therefore used in this study. However, this index is based on GNSS raw data with a sampling frequency of 50-100 Hz. To increase the spatial data coverage, the ROTI (Rate of TEC Index) was also considered, since ROTI can be based on GNSS data with a 1 Hz sampling frequency. This makes it possible to include selected geodetic GNSS receivers from GNET. ROTI indices based on 1 Hz data cannot capture the same small-scale variations as the σ_ϕ index based on 50-100 Hz data, but they provide additional information for spatial and temporal interpolation.
This study can provide key information for mapping and short-term prediction of ionospheric GNSS scintillation events. This is crucial for the users of GNSS positioning and navigation in the Arctic, where scintillation poses a significant threat since it can degrade the signal considerably, even to a degree where GNSS positioning is not possible. In the Arctic, the satellite geometry already poses a challenge due to the high latitudes, which makes GNSS users more vulnerable to a loss of satellite signals on account of scintillation.
The SWADO network was established in the fall of 2021, and the study is therefore representative of a period of increasing solar activity as we move towards solar maximum. Mapping and short-term prediction of GNSS disturbances are becoming more relevant, and providing integrity information for Arctic GNSS users will become essential in the coming years.
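The σ_ϕ index used above is commonly computed as the standard deviation of the detrended carrier phase over a fixed interval (typically 60 s). The sketch below uses a simple linear detrend as a stand-in for the roughly 0.1 Hz high-pass filter applied in real scintillation receivers:

```python
import math
import statistics

def sigma_phi(phase_rad):
    """Phase scintillation index: std of the detrended carrier phase (rad).

    Real receivers detrend with a ~0.1 Hz high-pass filter; a linear detrend
    is used here as a simple stand-in.
    """
    n = len(phase_rad)
    mt = (n - 1) / 2.0
    mp = statistics.mean(phase_rad)
    slope = sum((i - mt) * (p - mp) for i, p in enumerate(phase_rad)) / \
            sum((i - mt) ** 2 for i in range(n))
    detrended = [p - (mp + slope * (i - mt)) for i, p in enumerate(phase_rad)]
    return statistics.pstdev(detrended)

# 60 s of 50 Hz phase: slow geometric ramp plus a small 5 Hz fluctuation
phase = [0.002 * i + 0.3 * math.sin(2 * math.pi * 5 * i / 50.0)
         for i in range(3000)]
print(round(sigma_phi(phase), 3))
```

The detrend removes the slow geometric phase ramp, so only the rapid fluctuation contributes to the index.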
Heliophysics, the science of understanding the Sun and its interaction with the Earth and the solar system, has a large and active international community, with significant expertise and heritage in the European Space Agency and in Europe. Several ESA directorates have activities directly connected with this topic, including ongoing and/or planned missions and instrumentation, together comprising an ESA Heliophysics observatory: the Directorate of Science with Cluster, Solar Orbiter, SMILE and the Heliophysics archive; the Directorate of Earth Observation with Swarm and other Earth Explorer missions (including the EE10 candidate Daedalus); the Directorate of Operations with the L5 mission, the Distributed Space Weather Sensor System (D3S) and the Space Weather Service Network; the Directorate of Human and Robotic Exploration with many ISS and LOP-Gateway payloads; and the Directorate of Technology, Engineering & Quality with expertise in developing instrumentation and models for measuring and simulating environments throughout the heliosphere. The ESA Heliophysics Working Group was formed to optimize these interactions and to act as a focus for discussion inside ESA of the scientific interests of the Heliophysics community, including the European ground-based community and data archiving activities.
This paper will provide a brief introduction to the newly formed ESA Heliophysics Working Group and to some of its planned activities (including work on the LOP-Gateway), highlighting the benefits by using the continuing successful collaboration between Swarm and Cluster as a leitmotif.
The ionosphere is a highly complex plasma containing electron density structures with a wide range of spatial scale sizes. Large-scale structures with horizontal extents of tens to hundreds of km exhibit variation with time of day, season, solar cycle, geomagnetic activity, solar wind conditions, and location. Whilst the processes driving these structures are well understood, the relative importance of these driving processes is a fundamental, unanswered question. These large-scale structures can also cause smaller-scale irregularities that arise due to instability processes and which can disrupt trans-ionospheric radio signals, including those used by Global Navigation Satellite Systems (GNSS). Ionospheric effects pose a substantial threat to the integrity, availability and accuracy of GNSS services. Strategies to predict the occurrence of plasma structures are therefore urgently needed.
Swarm is ESA's first constellation mission for Earth Observation (EO). It initially consisted of three identical satellites (Swarm A, Swarm B, and Swarm C), which were launched into Low Earth Orbit (LEO) in 2013. The configuration of the Swarm satellites, their near-polar orbits and the data products developed, enable studies of the spatial variability of the ionosphere at multiple scale sizes. The technique of Generalised Linear Modelling is used to identify the dominant driving processes of large-scale structures in the ionosphere at low, middle, auroral and polar latitudes. The statistical relationships between the ionospheric structures and the driving processes are determined in each region and the variations between regions are discussed, with a particular focus on the European sector.
This work is within the framework of the Swarm Variability of Ionospheric Plasma (Swarm-VIP) project, funded by ESA in the “Swarm+4D-Ionosphere” framework (ESA Contract No. 4000130562/20/I-DT).
The aurora can be used as a direct way to observe particles precipitating into the ionosphere.
The main drivers behind this particle precipitation are geomagnetic substorms, which can be divided into three phases: growth, expansion and recovery.
Energy is stored by coupling between the solar wind, interplanetary magnetic field and magnetosphere.
This energy is subsequently released in the Dungey cycle after which the magnetosphere returns to normal conditions.
Two easily observable characteristics of the aurora are its shape and latitude; in addition, a measurable disturbance of the Earth's magnetic field occurs during a substorm.
For several decades, all-sky imagers have been operated at sites in Scandinavia, North America and Antarctica, taking images of the night sky every few seconds.
At present, several million images are taken each year and, due to this large volume, only a fraction can be analysed manually.
Using transfer learning, we build a classifier based on a two-step process in which a pretrained neural network feature extractor transforms the images into machine-readable numerical feature vectors.
These features are later used for classification and have been shown to contain essential physical information embedded in the images.
Classification and clustering allow us to perform a large-scale statistical analysis of the development of the aurora over several years.
Combining the images with the corresponding measurements of the interplanetary magnetic field and of the locally measured disturbance of the Earth's magnetic field, we are able to query for certain conditions in a dataset spanning several hundred thousand images taken in the last decade.
We present a statistical analysis of how the aurora behaves under certain space weather conditions and, with this knowledge, open up new possibilities for the research and prediction of space weather.
The electron density controls all ionospheric effects on propagating radio signals. Ionospheric imaging is a helpful technique for radio systems applications and for understanding ionospheric electron density distributions. Total electron content (TEC) estimates from global navigation satellite systems (GNSS) have been extensively used to study the characteristics of equatorial plasma bubbles (EPBs). As they propagate across different altitudes of the ionosphere, GNSS signals allow a three-dimensional representation of the ionosphere using tomographic reconstruction techniques. Despite the progress made in recent years by, for instance, improving the ionospheric tomographic techniques or applying constrained methods, the incomplete geometrical coverage and the limited viewing angle of GNSS signals are still relevant challenges in the tomographic reconstruction of ionospheric electron density irregularities. In this study, we propose a method to identify the geolocation of scintillation-inducing irregularities by producing quasi-tomographic images of the ionosphere obtained from various ionospheric GPS indices. The high-sample-rate GAP-O (GPS Attitude, Positioning, and Profiling - Occultation) receiver onboard the CASSIOPE/Swarm E satellite is mainly used for radio occultation measurements, and its antenna is normally pointed in the horizontal direction. During several campaigns, when flying over the equatorial region during post-sunset hours, we re-oriented Swarm E for short periods to point the GAP-O receiver antenna in the vertical direction. Using a new quasi-tomographic technique, we reconstructed maps of EPBs and equatorial ionospheric irregularities. The elliptical orbit of the satellite enables sampling at different altitudes. Our TEC maps detect the Appleton anomaly, validating the technique, which we then generalize to map regions of intense, small-scale irregularities.
In addition, our horizontal reconstructed maps show that in-situ irregularities detected by the IRM (Ion Mass Spectrometer) instrument onboard CASSIOPE/Swarm E occurred primarily when the satellite was passing close to the edge of large-scale plasma depletions, which were also associated with a large standard deviation of the rate of change of TEC (ROTI) extending to both sides in the zonal (east-west) direction.
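For readers unfamiliar with the ROTI diagnostic mentioned above, a minimal sketch of its computation is given below. The sampling interval, window length, and synthetic TEC series are illustrative assumptions, not the mission's actual processing parameters.

```python
import numpy as np

def roti(tec, dt=30.0, window=10):
    """Rate-of-TEC index: standard deviation of the TEC time derivative
    (ROT, in TECU/min) over a sliding window. Sketch for a generic 1-D
    slant-TEC series sampled every `dt` seconds."""
    rot = np.diff(tec) / (dt / 60.0)          # TECU per minute
    return np.array([rot[i:i + window].std()
                     for i in range(len(rot) - window + 1)])

# A smooth TEC arc (quiet conditions) yields near-zero ROTI, while
# small-scale structure (an irregularity-perturbed arc) elevates it.
quiet = np.linspace(10.0, 12.0, 61)                   # synthetic smooth ramp
rng = np.random.default_rng(0)
disturbed = quiet + rng.normal(0.0, 0.5, quiet.size)  # added structure
```

High ROTI values flanking a depletion edge are what the abstract refers to as a large ROTI standard deviation on both sides of the bubble.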
In this study we investigate the variations of the hourly observations at the Ionospheric Observatory of Rome (41.82° N, 12.51° E) during the minima of activity of the last solar cycles. In particular, we consider the values of the critical frequency foF2 manually scaled from the ionograms recorded by the AIS-INGV ionosonde during the years 2007-2009 (between solar cycles 23 and 24) and 2018-2020 (between cycles 24 and 25). Each hourly deviation of foF2 greater than ±15% with respect to a background level, defined by 27-day running median values, is considered anomalous, with positive and negative anomalies defined according to the sign of the corresponding variation. The dependence of these strong variations on geomagnetic activity has been investigated on the basis of the ap geomagnetic index values within the previous 24 hours, according to the NOAA scales (from G0 to G5), with an additional class defined for ap ≤ 7, considered representative of genuinely quiet conditions. In addition, the occurrence time of the anomalies has been investigated to discriminate between those originating during daytime and those during nighttime hours. The top level of geomagnetic activity reached in all the years was G2, except for 2018, when G3 was reached. A comparable number of negative and positive ionospheric foF2 anomalies was found during the two solar minima, with fewer negative anomalies than positive ones overall, as expected under low solar activity conditions. Other main findings of this work are the small number of daytime negative foF2 anomalies and the confirmation of the existence of two types of positive F2-layer disturbances, characterised by different morphologies and different underlying physical processes. A detailed analysis of some specific cases allows the definition of possible scenarios explaining the mechanisms behind the generation of the foF2 anomalies.
Solar, auroral, and radiation belt electrons enter the atmosphere at polar regions leading to ionization and affecting its chemistry. Climate models with interactive chemistry in the upper atmosphere, such as WACCM-X or EDITh, usually parametrize this ionization and calculate the related changes in chemistry based on satellite particle measurements. Precise measurements of the particle and energy influx into the upper atmosphere are difficult because they vary substantially in location and time. Widely used particle data are derived from the POES and GOES satellite measurements which provide electron and proton spectra. These satellites provide in-situ measurements of the particle populations at the satellite altitude, but require interpolation and modelling to infer the actual input into the upper atmosphere.
Here we use the electron energy and flux data products from the Special Sensor Ultraviolet Spectrographic Imager (SSUSI) instruments on board the Defense Meteorological Satellite Program (DMSP) satellites. This formation of currently three operating satellites observes both auroral zones in the far UV (115-180 nm) with a 3000 km wide swath and 10 x 10 km (nadir) pixel resolution during each orbit. From the N2 LBH emissions, the precipitating electron energies and fluxes are inferred in the range from 2 keV to 20 keV. We use these observed electron energies and fluxes to calculate auroral ionization rates in the lower thermosphere (≈ 90-150 km), which have been validated against ground-based electron density measurements from EISCAT. We present an empirical model of these ionization rates, derived for the entire satellite operating time and sorted according to magnetic local time and geomagnetic latitude and longitude. The model is based on geomagnetic and solar flux indices, and a sophisticated noise model is used to account for residual noise correlations. The model will be particularly targeted for use in climate models that include the upper atmosphere, such as the aforementioned WACCM-X or EDITh models. Further applications include the derived conductances in the auroral region, as well as modelling and forecasting E-region disturbances related to Space Weather.
The use of the “G” descriptive letter in ionogram interpretation is reserved for the condition in which the ionospheric F1-layer critical frequency foF1 exceeds that of the F2 layer (foF2), the latter typically being the layer with maximum electron concentration.
The ionospheric G-condition events observed with the Millstone Hill Incoherent Scatter Radar (ISR) on September 11, 12, and 13, 2005; June 13, 2005; and July 15, 2012 are studied. The main aeronomic parameters responsible for the formation of the daytime mid-latitude F layer are retrieved from ionospheric observations using the earlier developed method of Perrone and Mikhailov (JGR, 2018, DOI: 10.1029/2018JA025762).
The method retrieves the following thermospheric parameters from ionospheric observations: atomic oxygen concentration ([O]), molecular nitrogen ([N2]), molecular oxygen ([O2]), exospheric temperature (Tex), total solar EUV flux, and vertical plasma drift (W).
To retrieve thermospheric parameters from ionospheric observations, the observed noontime foF2 and the plasma frequencies at 180 km height (f180) are required for 10, 11, 12, 13, and 14 LT; both may be taken from Millstone Hill Digisonde observations. The method is designed to work with routine ground-based ionosonde observations, and it cannot be applied during G-conditions, when the F2-layer maximum is not seen. Therefore, the method was modified to work with the whole Ne(h) profiles available from ISR observations. In addition to the five f180 values, we now use the observed Ne at the upper boundary (normally 450-500 km) and a couple of points on the Ne(h) profile controlling its shape.
CHAllenging Minisatellite Payload (CHAMP)/STAR and Gravity field and steady-state Ocean Circulation Explorer (GOCE) neutral gas density observations were included in the retrieval process. It was found that G-condition days were distinguished by an enhanced exospheric temperature and a roughly two-fold decrease of the column atomic oxygen abundance in comparison to quiet reference days, the molecular nitrogen column abundance being practically unchanged. The inferred upward plasma drift corresponds to a strong ~90 m/s equatorward thermospheric wind, presumably related to strong auroral heating on G-condition days (Perrone et al., Remote Sens., 2021, https://doi.org/10.3390/rs13173440).
The European community is increasingly involved in building a common database to share knowledge in the space weather domain. The Space Weather network (SWESNET) develops, manages and distributes high-quality scientific observations, results and models of interest for space weather applications. In the frame of this project, we developed a local heliospheric data centre to host two scientific tools and to generate related scientific data products. The Coronal Mass Ejection (CME) propagation prediction tool makes use of coronagraph and in-situ data from L1 to forecast the CME evolution. The magnetic effectiveness tool makes use of in-situ data (both from L1 and planetary missions) to compute magnetic helicity and forecast how the probes are magnetically connected to the solar corona. These data products will then be made available to the SWESNET community. The local heliospheric data centre, developed at ALTEC, provides extensive datasets and the possibility of designing, implementing, and validating algorithms dedicated to space weather forecasting purposes. The data management applied at the data centre is designed to deal with different data products, data formats and availability, taking into account the real-time constraints that are essential for providing forecasting services.
The Krishna basin is the fifth largest river basin in India, shared between four states, the largest of which is Karnataka. The states have full authority over water resources within their boundaries, so good cooperation between these four states over the water resources in the Krishna river basin is essential for good governance. For the basin, the southwest monsoon provides most of the rainfall in the period June to October (90% of the yearly rainfall). Agricultural areas cover about 76% of the total surface of the basin. With a growing population (currently more than 66 million), growing demand for food production and intense water resources development, the basin is under severe environmental pressure. The Water Accounting Plus (WA+) framework, developed by IHE Delft and its partners FAO and IWMI, is applied to analyse the water resources conditions of three sub-basins of the Krishna located in Karnataka state: Middle Krishna (K2), Ghatprabha (K3), and Malprabha (K4). The irrigated areas in the three sub-basins cover about 41% of the geographical area. The analysed period covers the hydrological years from 2010-2011 to 2017-2018, and results are provided as spatial monthly and yearly maps, water accounting sheets and indicators (at monthly and yearly scale). Inputs for the study are Remote Sensing (RS) global open-access datasets and in-situ measurements provided by the Advanced Centre for Integrated Water Resources Management, Government of Karnataka, for validation purposes. This paper describes the Remote Sensing data analysis and data selection and the methodology used for the study, presents the results, and provides recommendations for water resources management in the basin, including irrigation water use. Several RS datasets are available which estimate precipitation (P) and evapotranspiration (ET).
In this study the best datasets are selected based on: (a) inter-comparison of products, (b) validation using in-situ measurements, (c) yearly water balance assessment and comparison with in-situ discharge measurements, and (d) availability of data in recent years. The CHIRPS dataset was selected for precipitation and SSEBop for actual ET estimates. RS-ET data show a less pronounced month-to-month and seasonal variability than precipitation, with higher ET values in the monsoon season, when water and energy are abundant, and lower values in the winter months. Reservoirs have the highest total ET (up to 1,500 mm/yr), followed by other water bodies and irrigated areas. A large portion of the three sub-basins is covered by fallow land, which shows extremely low ET values (100-200 mm/yr). These low values seem unrealistic for this climatic zone, where rainfall reaches up to 600 mm/yr. The upstream mountainous areas of the three sub-basins generate most of the runoff (up to about 1,000 mm/yr), while the agricultural areas and the water bodies are net consumers (up to 1,000 mm/yr).
The Krishna basin is an interstate river system that flows through the states of Maharashtra (26% of the area), Karnataka (44%), and Andhra Pradesh (30%). Most of the Krishna basin, about 76%, is covered by agricultural area. Irrigated areas have expanded rapidly in the past 50 years, causing a significant decrease in discharge to the sea. The Krishna basin is facing growing challenges in satisfying the growing water demands, and conflicts are arising because of competing demands. A detailed water productivity assessment was carried out in one of the irrigation schemes located in the sub-basins, namely the Narayanpur Left Bank Canal (NLBC) command area. The NLBC network comprises an irrigated area of 451,703 ha as per the official statistics. The study was carried out for the Kharif seasons (July till December/January) in three years from 2017 to 2019. The main crops cultivated include sugarcane, cotton, paddy, sorghum, beans, and maize, among others. Analyses were also done with the aim of identifying the underlying causes behind specific spatial trends of yield and water productivity and reporting the findings. Specifically, the following steps were undertaken: 1. Analysis of the variability of biomass, yield, and WP in the NLBC, exploring possible correlations with rainfall and land use. 2. Analysis of irrigation performance in terms of water availability to distributary canal service areas, by computing uniformity, water deficit and its impact on crop yield. 3. Analysis of the variability of ETa, Relative Water Deficit (RWD) and biomass in the NLBC, exploring possible correlations with distance from canals. Ground data were collected using the Open Data Kit (ODK). The field data collection was carried out in the NLBC scheme in December 2019 and January 2020. All the Landsat 8 data acquired between 1 January 2017 and 31 December 2019 were processed to estimate ETa and AGBP. The entire NLBC is covered by 3 Landsat tiles. A total of 156 Landsat 8 scenes were processed.
Although the target season of the study is Kharif, which runs from July to December, we processed all the images from 1 January 2017 to 31 December 2019 to provide continuity in the temporal moving window over months/seasons for the gap-filling step that comes at a later stage. All the spectral bands, including the thermal bands, were processed in preparation for applying the SEBAL algorithm. The Landsat data preprocessing was performed at a spatial resolution of 30 m, resulting in a total of 7.5 million pixels for each map covering the entire NLBC. A total of 312 maps, each with 7.5 million pixels, were processed (2.3 billion pixel-dates) to develop seasonal ETa and biomass maps. We used the Landsat Collection 1 Level-1 data belonging to the Tier 1 (T1) inventory. For topography and elevation, 30 m data from NASA's Shuttle Radar Topography Mission (SRTM), acquired from the USGS EROS Data Center, were used. All the Landsat data were downloaded from Google cloud public storage. The data acquisition was automated using the gsutil open-source python library. The latest available land use maps for the two crop years 2017-18 and 2018-19 were obtained from the National Remote Sensing Centre (NRSC) India. The NRSC map has a 60 m spatial resolution and classifies 17 land use types for the 2017-18 cropping year. This map was used to inform the crop type mapping process in the project. For crop type mapping for the year 2019, the Sentinel 2A/B multi-spectral and Sentinel 1 Synthetic Aperture Radar (SAR) data available from July 2019 to January 2020 were used. Due to the extensive cloud coverage during the period, we could only use atmospherically corrected Sentinel 2 data from November 2020 and January 2021 for the crop type mapping. Further, for the Sentinel 1 SAR data, a median filter was applied per month to create a monthly time series from July 2019 to January 2020. The entire crop type mapping was implemented in Google Earth Engine (GEE).
The crop type mapping was implemented using Machine Learning (ML). The Random Forest (RF) algorithm was applied to the time series of Sentinel 1 and cloud-free Sentinel 2A/B scenes available in the GEE platform. Fieldwork was carried out in December 2019 and January 2020 to collect sample points representing different crop types in the study area. In this study, remote sensing techniques were used to map the extent of different crop types in Kharif 2019, to estimate and analyse the yield and water productivity of these crops in Kharif 2019, and to understand the water use dynamics in the NLBC for the three Kharif seasons from 2017 to 2019. Based on the newly developed and validated high-resolution crop type map of Kharif 2019, cotton was the major crop in the NLBC, with an estimated area of 196,278 ha (43% of the total NLBC scheme area). Paddy and red gram were also extensively cultivated in the NLBC, occupying 173,264 ha (38%) and 82,600 ha (18%), respectively, in the study season. In the 2018 Kharif season, there was an increase of around 74% in fallow area compared to the 2017 Kharif season (2017: 98,907 ha / 2018: 171,796 ha). Around 55% of the total NLBC command area was cropped in multiple seasons in 2017, while this dropped to 27% in 2018.
Several studies have demonstrated that the interferometric phase, when properly calibrated, is related to the soil moisture change in the time span between the two SAR acquisitions. The phase calibration requires the mitigation of the atmospheric phase screen and the removal of topographic effects, and assumes no deformation. Recently, a new approach to estimate the soil moisture change, based on the concept of bi-coherence and phase triplets, was proposed (de Zan et al., 2014). The closure phase is the phase that results from the cyclic product of three interferograms. The main advantage of using the closure phase instead of the interferometric phase is that closure phases are immune to all simple propagative effects such as target displacement, delays in atmospheric propagation and topographic effects.
In this work we investigate the relation between the closure phases of interferometric C-band SAR images and the time-varying soil moisture. In particular, we combine three interferograms, obtained from three SAR images of the same area acquired at different times, to derive maps of bi-coherence and phase triplets. A scattering model is used to estimate the time series of soil moisture from the sequence of phase triplet images.
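The immunity of the closure phase to simple propagative effects can be sketched numerically. The demonstration below forms three multilooked interferograms from synthetic single-look complex (SLC) pixel vectors and shows that multiplying each acquisition by an arbitrary phase screen (the form taken by displacement, atmospheric delay, or topographic residuals) leaves the closure phase unchanged; the data are synthetic, not from the Lisbon test site.

```python
import numpy as np

def closure_phase(s1, s2, s3):
    """Closure (triplet) phase of three multilooked interferograms built
    from co-registered SLC pixel vectors; the mean over pixels implements
    a single multilook window."""
    i12 = np.mean(s1 * np.conj(s2))
    i23 = np.mean(s2 * np.conj(s3))
    i31 = np.mean(s3 * np.conj(s1))
    return np.angle(i12 * i23 * i31)

# Synthetic SLC pixel vectors for one multilook window.
rng = np.random.default_rng(1)
s1, s2, s3 = [rng.normal(size=200) + 1j * rng.normal(size=200)
              for _ in range(3)]
base = closure_phase(s1, s2, s3)

# Per-acquisition phase screens cancel in the cyclic product.
w1, w2, w3 = np.exp(1j * np.array([0.7, -1.2, 2.1]))
shifted = closure_phase(s1 * w1, s2 * w2, s3 * w3)
```

Because the screens cancel exactly, any residual closure phase must come from decorrelation effects such as soil moisture change, which is what makes the triplet a usable soil moisture observable.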
The study area is located on a farm approximately 20 km east of Lisbon, Portugal, close to the Tagus River estuary. A set of five soil moisture sensors was deployed and set to record soil moisture on an hourly basis, providing a 5-month-long time series of in-situ measurements between 3 January 2019 and 15 May 2019. The in-situ measurements were transformed into phases using the scattering model. To simplify the experimental conditions, we selected a flat area, to avoid artefacts due to topography, and a bare soil parcel in its agricultural fallow period, to avoid volumetric surface scattering and changes in roughness, given the well-known high sensitivity of C-band to vegetation growth and to surface roughness changes due to plowing, tilling or harrowing.
Two sets of interferograms were computed from 19 C-band Sentinel-1 A/B images acquired between 8 January 2019 and 5 May 2019 in Interferometric Wide swath (IW) Single Look Complex (SLC) mode, in vertical-vertical (VV) polarization only, using the ascending (track 45) and descending (track 52) passes. Each time series of SAR images was interferometrically processed, combining six images for each reference image with temporal baselines from a minimum of six days to a maximum of 30 days. The interferograms were processed according to the following processing chain: a) interferometric stack generation with all possible combinations of the images; b) coregistration using precise orbits and an external DEM; c) interferogram computation; d) earth curvature and topographic effect removal; e) coherence estimation using a window of 3 pixels in azimuth and 3 in range; f) interferogram and coherence terrain correction and geocoding to the WGS84 UTM coordinate reference system. The interferograms were multi-looked with a 16x16 pixel window. For each pass, a system of equations with all computed interferograms was solved to estimate the closure phases (or decorrelation phases). For each pixel, the system has 100 equations and 70 decorrelation phases. The decorrelation phases of the six-day temporal baseline interferograms were compared with the model phases derived from the in-situ soil moisture changes.
The results show that there is a linear correlation between the in-situ derived model phases and the closure phases. The correlation coefficients were R2=0.76 and R2=0.86 for the ascending and descending passes, respectively. However, a scale effect of the closure phases was found when compared with the model phases: the scale is about 10% for both passes, meaning that the estimated closure phases underestimate the soil moisture changes. The high correlation between the closure phases and the model phases indicates that it is possible to derive soil moisture variations using C-band Sentinel-1 A/B data, as long as a scattering model is provided. The most promising result is the clear linear correlation found between modelled (derived from in-situ measurements) and observed phase triplets, although a saturation effect was found that hinders its use in the case of very dry soils.
Since the second half of the nineteenth century, robust growth in the world economy, including both the industrial and agricultural sectors, has led to aggressive production and utilization of agriculture-based chemicals, which has often induced calamitous effects on the environment. Injudicious use of weedkillers/herbicides adds organic pollutants to agricultural soils, with cascading future repercussions.
Weeds, which are defined as undesirable plant species, grow alongside crops and consume the nutrients intended for crop consumption. This inhibits crop growth and harbours threats in the form of viral crop diseases and problematic insects. Weeds are most damaging to crop yield when they have an advantage over the crop in some way. Mapping and removal of weeds in the early stages of weed emergence is not only relatively less complicated but also less time- and cost-consuming than in later crop stages, when there is high spectral overlap and heterogeneity. Therefore, there is a need for a mechanism that facilitates the identification of crops and weeds in early stages, so that, irrespective of temporal change, crops and/or weeds can be segmented and classified across all phenological stages.
There are various publications dealing with the discrimination of crops and weeds in digital images based on spectral, geometric, textural and height parameters. Although some commercial solutions target the early stages of weed growth, there is a need for less computationally expensive solutions which can facilitate and augment segmentation. This produces a cascading requirement for tools capable of periodically monitoring crops for precise weed-removal interventions.
If the spraying of weedicides could be limited to just the weed-affected areas, the ecological harm in terms of soil contamination and over-spraying onto crop regions could be reduced. This can be achieved when the weed information in aerial images is supported by geolocation information at millimetre-level accuracy. Published assessments showed that crop and weed segmentation using machine learning and deep learning algorithms such as ResNet, DenseNet, U-Net, DeepCluster, DeepLab and VGGNet had limitations in handling spectral overlap and in performance in heterogeneous environments.
The current study is focused on facilitating crop segmentation across the early stages of cauliflower in a heterogeneous environment with varying lighting conditions, different soil moisture levels, occlusion and infestation by weeds. The sensors in use are a DJI Phantom IV Pro for capturing true-colour images and a Micasense RedEdge-M for capturing multi-spectral images of the cauliflower crop field at different time intervals. The study area is a cauliflower field located at the Department of Agriculture, University of Naples Federico II, Portici, Italy. The classes of interest in the imagery are weeds and cauliflower leaves. The analysis of these images was done using several band ratios, such as the Normalized Difference Vegetation Index (NDVI) and the Modified Soil Adjusted Vegetation Index (MSAVI2), some image super-resolution techniques, and morphological operations for optimization of the segmentation outputs. The preliminary results have crop segmentation masks as the end output. The subsequent step would be automatic annotation: current methods require annotating images manually, which is time-consuming and tedious. The aim is therefore to provide a tool that improves the quality of the image in terms of level of detail and crop characteristics in the presence of weeds.
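The index-based masking step described above can be sketched as follows. The index formulas are standard; the 0.3 threshold and the reflectance values in the usage example are illustrative assumptions, not the study's calibrated values, and the study additionally applies super-resolution and morphological post-processing not shown here.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def msavi2(nir, red):
    """Modified Soil Adjusted Vegetation Index (closed-form MSAVI2)."""
    return (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2

def vegetation_mask(nir, red, thresh=0.3):
    """Binary green-vegetation mask combining both indices; pixels passing
    the mask would then be separated into crop vs weed classes."""
    return (ndvi(nir, red) > thresh) & (msavi2(nir, red) > thresh)
```

MSAVI2 suppresses the soil-brightness sensitivity of NDVI, which matters here because varying soil moisture changes the soil background signal between acquisitions.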
There is also a requirement to understand the spectral overlap between weeds and cauliflower leaves in order to quantify the correlation between the two. The final goal is to improve the recognition of weeds and ameliorate the classification of the underlying weed species. Once the crops are segmented in the early stages of weed growth, the results can serve as a prerequisite for training and annotation, and for mapping weed-affected areas for mechanized spraying of herbicides by drones and/or rovers. Not only will this provide huge relief from an ecological perspective, but it will also be beneficial for any project from an economic point of view.
Agriculture represents a cornerstone of alpine economies that is endangered in a warming and drier climate. In order to support a more resilient agricultural sector in the future, various systems exist to better manage water resources by quantifying irrigation water requirements. These systems usually estimate irrigation water requirements either as a difference between effective rainfall and evapotranspiration, or using water-balance hydrologic models; efficiency parameters are used to take into account water losses in distributing water to the crops. An open issue in such methods, which often rely on static crop coefficients, is how to dynamically keep track of crop evolution through time, e.g., in the form of crop characteristics like mowing and wetting-drying cycles.
In this work, we discuss a method developed for the quantification of irrigation water requirements in the Valle d'Aosta region with EO data. The method is based on a first-guess estimate of evapotranspiration from the Penman-Monteith equation and an EO-based crop coefficient to convert the first guess into actual, crop-specific evapotranspiration. The EO-based crop coefficient is derived with a Random Forest method combining site elevation, day of the year, and Sentinel 2 NDVI. Through EO data, this approach can better track dynamic crop characteristics and so provide more reactive estimates of evapotranspiration compared to static crop coefficients.
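The Random Forest crop-coefficient idea can be sketched as below. The training data, the Kc-NDVI relation, and all numeric values are synthetic placeholders standing in for the project's Sentinel-2 and station data; only the feature set (elevation, day of year, NDVI) follows the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500
elevation = rng.uniform(500, 2500, n)   # site elevation [m], synthetic
doy = rng.integers(1, 366, n)           # day of year, synthetic
ndvi = rng.uniform(0.1, 0.9, n)         # Sentinel-2 NDVI, synthetic

# Assumed toy relation: Kc grows with NDVI, modulated by season.
kc = 0.2 + 0.9 * ndvi + 0.1 * np.sin(2 * np.pi * doy / 365) \
     + rng.normal(0, 0.02, n)

X = np.column_stack([elevation, doy, ndvi])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, kc)

# Actual ET = Penman-Monteith first guess times the predicted Kc.
et0 = 5.0                                        # mm/day, illustrative
kc_hat = model.predict([[1500.0, 180, 0.7]])[0]  # mid-season alpine site
eta = et0 * kc_hat
```

Because NDVI enters as a feature, a mowing event (sudden NDVI drop) immediately lowers the predicted Kc, which is the "reactive" behaviour the abstract contrasts with static literature coefficients.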
This method was compared with a traditional approach to the estimation of the crop coefficient based on literature data, and showed significant deviations in the evaluation of the annual irrigation water requirement. Results from this EO-based approach will also be compared with estimates of irrigation water requirements based on a spatially distributed hydrologic model including irrigation (Irri-Continuum). The method is currently being converted into a prototype for potential large-scale deployment as a real-time, operational tool.
Detecting irrigation areas, volumes and timing is a crucial issue for efficiently managing water resources and controlling agricultural practices. To that aim, remote sensing of soil moisture is undoubtedly a relevant tool. Currently, most studies focus on the scale of the agricultural plot to optimise agricultural yields and use high-resolution satellite measurements such as Sentinel 1 and 2. At the same time, a continental- or global-scale estimate of the volumes of water used for irrigation is important for monitoring the evolution of groundwater resources and forecasting their evolution over several years. In this case, satellites such as SMOS or SMAP, characterised by a lower spatial resolution, are particularly effective tools for characterising irrigated surfaces, volumes and dates.
In this study, SMOS and SMAP soil moisture measurements are used in a methodology derived from an adaptation of the PrISM methodology developed in Pellarin et al. (2009, 2020) and Román-Cascón et al. (2017). The PrISM (Precipitation Inferred from Soil Moisture) methodology uses a simple precipitation/soil moisture model to derive the surface soil moisture temporal evolution from a satellite precipitation product. The assimilation of SMOS/SMAP soil moisture measurements is used to generate soil moisture maps (every 3 h) over Africa, the Arabian Peninsula and the Middle East. The resulting maps are, by construction, not able to represent flooding or irrigation events. Thus, a comparison of SMOS/SMAP satellite measurements with the simulated soil moisture maps allows the automatic detection of areas and time periods of flooding or irrigation.
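The detection principle described above can be sketched with a minimal antecedent-precipitation soil moisture model: the simulation sees only rainfall, so any persistent wetting in the satellite observations that the simulation cannot explain is flagged. The drydown time constant, gain, bounds and threshold below are illustrative, not the calibrated PrISM parameters.

```python
import numpy as np

def simulate_sm(precip, tau=5.0, gain=0.01, sm0=0.1,
                sm_min=0.05, sm_max=0.45):
    """Precipitation-only soil moisture model in the spirit of PrISM:
    exponential drydown toward a residual level plus rainfall recharge.
    precip: daily rainfall series (mm); returns soil moisture (m3/m3)."""
    sm = np.empty(len(precip))
    prev = sm0
    for t, p in enumerate(precip):
        prev = sm_min + (prev - sm_min) * np.exp(-1.0 / tau) + gain * p
        prev = min(prev, sm_max)
        sm[t] = prev
    return sm

def detect_irrigation(sm_obs, sm_sim, thresh=0.05):
    """Flag time steps where observed (SMOS/SMAP-like) soil moisture
    exceeds the precipitation-only simulation by more than `thresh`."""
    return sm_obs - sm_sim > thresh

precip = np.zeros(30)            # a rain-free month
sm_sim = simulate_sm(precip)
sm_obs = sm_sim.copy()
sm_obs[10:15] += 0.15            # wetting with no rainfall: irrigation
flags = detect_irrigation(sm_obs, sm_sim)
```

In the full methodology, water is then added to the model on the flagged dates until it reproduces the SMOS/SMAP series, which yields the irrigation volume estimate discussed below.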
This methodology has recently been tested over the whole of Africa, the Arabian Peninsula and the Middle East at spatial resolutions of 0.25° and 0.10° using SMOS and SMAP measurements. The methodology allows unambiguous identification of areas and periods of high irrigation (Morocco, the Algerian and Tunisian coasts, the Khartoum region, Iraq, South Africa), but also of very small areas located in the centre of Algeria, Egypt, Saudi Arabia, and in the southern part of South Africa (see attached figure). Of course, the method also highlights areas of natural flooding such as the Niger River Delta in Mali, Lake Chad, the Okavango Delta (Botswana) and Etosha (Namibia).
For the detected areas and periods, it is then possible to estimate the volumes and dates of irrigation by introducing water into the model so as to reproduce the temporal variations of the SMOS and SMAP measurements. In this presentation, we will show the evolution of irrigated areas and volumes over the period 2010-2021 for Africa, the Arabian Peninsula and the Middle East.
In cotton, an optimal balance between vegetative and reproductive growth will lead to high yields and water-use efficiency. An abundance of water and nutrients will result in heavy vegetative growth that promotes boll rot and fruit abscission, making a cotton crop difficult to harvest. Estimating vegetation variables such as the crop coefficient (Kc), Leaf Area Index (LAI), and crop height using satellite remote sensing can improve irrigation management and growth inhibitor application to regulate the vegetative growth and optimize the yield. Optical and Synthetic Aperture Radar (SAR) satellite imagery can be a useful data source since they provide synoptic cover at fixed time intervals. Furthermore, they can better capture the spatial variability in the field compared to point measurements. Since clouds limit optical observations at times, the combination with SAR can provide information during cloudy periods. This study utilized optical imagery acquired by Sentinel-2 and SAR imagery acquired by Sentinel-1 over cotton fields in Israel. The Sentinel-2-based vegetation indices that are best suited for cotton monitoring were identified, and the most robust Sentinel-2 models for Kc, LAI, and height estimation achieved R2=0.879, RMSE=0.0645 (MERIS Terrestrial Chlorophyll Index, MTCI); R2=0.9535, RMSE=0.8 (MTCI); and R2=0.8883, RMSE=10 cm (Enhanced Vegetation Index, EVI), respectively. Additionally, a model based on the output of the SNAP biophysical processor LAI estimation algorithm was superior to the empirical models of the best-performing vegetation indices (R2=0.9717, RMSE=0.6). The most robust Sentinel-1 models were obtained by applying an innovative local incidence angle normalization method, with R2=0.7913, RMSE=0.0925; R2=0.6699, RMSE=2.3; and R2=0.6586, RMSE=18 cm for the Kc, LAI, and height estimation, respectively.
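For reference, the two best-performing indices named above can be computed from surface reflectances as sketched below. The MTCI band transfer to Sentinel-2 (B6, B5, B4) and the EVI coefficients are standard; the linear index-to-Kc calibration at the end is a hypothetical placeholder, not the fitted model from this study.

```python
def mtci(b6, b5, b4):
    """MERIS Terrestrial Chlorophyll Index on Sentinel-2 bands:
    MTCI = (B6 - B5) / (B5 - B4), using red-edge (B6, B5) and red (B4)
    surface reflectances."""
    return (b6 - b5) / (b5 - b4)

def evi(nir, red, blue):
    """Enhanced Vegetation Index with the standard coefficients."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def kc_from_mtci(m, slope=0.15, intercept=0.3):
    """Hypothetical linear index-to-Kc calibration; slope and intercept
    are placeholders, not the study's fitted coefficients."""
    return slope * m + intercept
```

An empirical model of this kind, trained per field campaign, is what the reported R2/RMSE figures evaluate.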
This work paves the way for future studies to design decision support systems for better irrigation management in cotton, even at the sub-plot level, by monitoring the heterogeneous development of the crop from space and adapting the irrigation accordingly to reach the target development at different stages of the season.
The Mediterranean region (MR) includes the largest semi-enclosed sea on Earth and is an area of both exceptional biodiversity value and intense and increasing human activity. The MR has a unique character, as it lies in a transition zone between the temperate, cold mid-latitudes and the tropics, with several large-scale atmospheric oscillations/teleconnection patterns. This determines a high temporal variability of climate, which causes periods of excess water with widespread floods followed by long drought episodes and heat waves, making the region highly vulnerable to hydrological extremes. Therefore, resolving the water cycle over the MR is central to protecting people and guaranteeing water and food security.
Previous efforts to resolve the water cycle in the MR have mainly used model outputs or reanalysis and in situ data networks. In this context, the European Space Agency (ESA) has supported significant scientific efforts to advance the way we can observe and characterise the Mediterranean water cycle from satellites through the Watchful, Irrigation+, and WACMOS-Med projects. For instance, WACMOS-Med considered several novel techniques to estimate the different components of the Mediterranean water cycle from satellite observations while minimising the residual errors. WACMOS-Med provided a rational assessment of the different limitations of current satellite technology for characterising the different components of the water cycle in a consistent and accurate manner. However, limitations associated with spatial and temporal resolution, accuracy, uncertainty definition, and inter-product consistency hinder the practical use of the products for operational applications in several domains (e.g., agriculture, water resource management, hydro-climatic extremes and geo-hazards) over the MR.
Here we present a new ESA project, “4DMED-Hydrology”, which aims at developing an advanced, high-resolution, and consistent reconstruction of the Mediterranean terrestrial water cycle by using the latest developments of Earth Observation (EO) data, such as those derived from the ESA-Copernicus missions. In particular, by exploiting previous ESA initiatives, 4DMED-Hydrology intends 1) to show how this EO capacity can help to describe the interactions between complex hydrological processes and anthropogenic pressure (often difficult to model) in synergy with model-based approaches; 2) to exploit synergies among EO data to maximize the retrieval of information from the different water cycle components (i.e., precipitation, soil moisture, evaporation, runoff, river discharge) to provide an accurate representation of our environment and advanced fit-for-purpose decision support systems in a changing climate for a more resilient society.
We organize the project in four consecutive steps: 1) developing high-resolution (1 km, daily, 2015–2021) EO-based datasets of the different components of the water cycle by capitalizing on Sentinel missions’ capabilities and previous ESA projects; 2) merging these datasets to obtain land water budget closure and providing a consistent high-quality merged dataset; 3) addressing major knowledge gaps in water cycle science, enhancing our fundamental scientific understanding of the complex processes governing the role of the MR in the Earth and climate system; 4) transferring novel science results into solutions for society via four user-oriented case studies focusing on flood and landslide hazard, drought and water resources management, involving operational agencies, public institutions and economic operators in the MR. 4DMED-Hydrology will focus on four test areas, namely the Po River basin in Italy, the Ebro River basin in Spain, the Hérault River basin in France and the Medjerda River basin in Tunisia, which are representative of the climates, topographic complexity, land use, human activities, and hydrometeorological hazards of the MR. The developed products will then be extended to the entire region. The resulting EO-based products (i.e., experimental datasets, EO products) will be distributed in an Open Science catalogue hosted and operated by ESA.
Soil moisture (SM) is a critical variable in the understanding of the climate-soil-vegetation system. Typically, SM data are applied to different disciplines depending on the available spatial scales: while climatological and meteorological studies employ SM data at a global coarse scale and hydrological studies employ SM data at catchment level, administrative and agricultural applications need SM data at field and sub-field scale (tens to hundreds of meters).
We propose a new approach to obtain a very high spatio-temporal resolution Surface Soil Moisture (SSM) product (20 m, every 3 days) from remotely sensed satellite data only. The method employs a modified version of the DISPATCH (Disaggregation based on Physical And Theoretical scale Change) algorithm to disaggregate the SMAP SSM product to a 20 m spatial resolution through the use of sharpened Sentinel-3 Land Surface Temperature (LST) data. Reaching the 20 m spatial resolution was possible thanks to high-resolution LST maps obtained by sharpening Sentinel-3 1 km daily LST with Sentinel-2 20 m reflectance bands, which overcame the limitation that the Sentinel constellation lacks a fine-resolution thermal sensor.
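A minimal sketch of the disaggregation idea, assuming a simple first-order DISPATCH-like scheme (the operational algorithm is more elaborate): a coarse SMAP value is redistributed within its footprint according to a soil evaporative efficiency (SEE) proxy derived from high-resolution LST. All numbers and the `gain` parameter are illustrative assumptions:

```python
import numpy as np

# Illustrative DISPATCH-style downscaling sketch, not the operational algorithm.
# A coarse soil-moisture value is redistributed within its footprint using a
# high-resolution soil evaporative efficiency (SEE) proxy from LST.

def soil_evaporative_efficiency(lst, lst_dry, lst_wet):
    """SEE = (Tdry - T) / (Tdry - Twet), clipped to [0, 1]."""
    return np.clip((lst_dry - lst) / (lst_dry - lst_wet), 0.0, 1.0)

def disaggregate(sm_coarse, lst_hr, lst_dry, lst_wet, gain):
    """First-order downscaling: SM_hr = SM_lr + gain * (SEE_hr - mean(SEE))."""
    see = soil_evaporative_efficiency(lst_hr, lst_dry, lst_wet)
    return sm_coarse + gain * (see - see.mean())

# Hypothetical 20 m LST pixels (K) inside one coarse SMAP cell
lst_hr = np.array([[305.0, 310.0], [315.0, 320.0]])
sm_hr = disaggregate(sm_coarse=0.20, lst_hr=lst_hr,
                     lst_dry=325.0, lst_wet=300.0, gain=0.3)
```

Note that the scheme is mean-preserving by construction: the average of the downscaled pixels equals the coarse observation.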
First, the proposed high-resolution SSM product was validated against available in-situ data from two different fields; second, it was compared with two coarser SSM products at 1 km developed with the same disaggregation technique but using LST from Sentinel-3 and MODIS. The correlation between in-situ data and all the disaggregated SSM products showed a general improvement in terms of Pearson's correlation coefficient (R) for the proposed high-resolution product with respect to the two 1 km products.
The improvement was especially noticeable during the summer season, when field-specific irrigation practices were better captured at high resolution: consistently higher values of SSM could be observed during the warmest months for irrigated fields compared to rainfed fields. The capability of the product to recognize irrigated fields was studied further by comparing the distribution of SSM in this new high-resolution product against the Catalan Geographic Information System for Agricultural Parcels (SIGPAC) data, which record the presence of irrigation for each field.
Additionally, a sub-field scale analysis was performed using all the in-situ sensors installed at the two available locations. The validation of SSM at sub-field scale showed an improvement in the correlation with in-situ data with respect to the lower-resolution products.
The lack of consistent observations on the effective use of water resources in agriculture hinders the full implementation of the Water Framework Directive (WFD).
Although statistical data sources can give a picture at the national scale, they are often not exhaustive at regional and local scales. The Italian Ministry of Agriculture has adopted specific actions (Decree 31/07/2015) for monitoring irrigated areas and volumes on a regular basis to improve compliance with the WFD. For this purpose, Earth Observation data from the ESA Sentinel-2 satellites represent a very valuable source of information to fill the gap between research and application in the assessment of water use in agriculture.
This study illustrates the procedures developed in the context of the INCIPIT project for assessing irrigated areas and the corresponding volumes. The objective of the INCIPIT project (PRIN MIUR 2017) is to develop a methodological framework for supporting and planning irrigation water use at different spatial scales and under different hydraulic and meteorological conditions in six Italian regions (Apulia, Campania, Emilia Romagna, Lombardia, Sardinia and Sicily).
These procedures exploit the full spectral range of Sentinel-2 data, from visible to shortwave infrared, and the temporal domain, thanks to the high revisit frequency of the two satellites, to monitor the development status of crops.
In detail, different machine learning algorithms have been tested (Support Vector Machines, single decision trees (DTs), Random Forests, Boosted DTs, Artificial Neural Networks, and k-Nearest Neighbours) for mapping irrigated and non-irrigated areas from dense temporal series of vegetation indices [1] and from the surface water status derived from SWIR bands with the OPTRAM model [2].
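As a sketch of one of the listed classifiers, a k-Nearest Neighbours rule can separate irrigated from non-irrigated pixels from their seasonal vegetation-index trajectories; the NDVI series below are synthetic stand-ins for the dense Sentinel-2 time series, not data from the project:

```python
import numpy as np

# Minimal k-nearest-neighbour classifier (one of the algorithms tested in the
# abstract) applied to synthetic NDVI time series. Irrigated pixels keep a high
# NDVI through the dry summer; rainfed pixels senesce. Values are illustrative.

def knn_predict(train_X, train_y, X, k=3):
    preds = []
    for x in X:
        d = np.linalg.norm(train_X - x, axis=1)   # Euclidean distance to all series
        votes = train_y[np.argsort(d)[:k]]        # labels of the k closest series
        preds.append(np.bincount(votes).argmax()) # majority vote
    return np.array(preds)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 12)                         # 12 monthly composites
irrigated = 0.7 - 0.1 * t + rng.normal(0, 0.02, (20, 12))
rainfed   = 0.6 - 0.45 * t + rng.normal(0, 0.02, (20, 12))
X = np.vstack([irrigated, rainfed])
y = np.array([1] * 20 + [0] * 20)                 # 1 = irrigated, 0 = rainfed

test_pixel = 0.68 - 0.12 * t                      # trajectory resembling irrigation
label = knn_predict(X, y, test_pixel[None, :], k=5)[0]
```

The other listed algorithms (SVM, RF, ANN, etc.) would consume the same time-series feature vectors; only the decision rule changes.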
The assessment of irrigation volumes has been carried out using the IRRISAT© methodology, based on the Penman-Monteith equation, properly adapted with canopy parameters, namely crop height, Leaf Area Index, and surface albedo [3], also derived from Sentinel-2 data. Results will be presented from case studies for the 2019 and 2020 irrigation seasons in two Irrigation and Land Reclamation Consortia located in the Campania and Sardinia regions, where the accuracy of the proposed procedures has been assessed with ground-truth data on effectively irrigated areas and measured irrigation volumes at field and district scales.
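The IRRISAT© methodology adapts the Penman-Monteith equation with Sentinel-2-derived canopy parameters; as a simplified reference point (without those adaptations), the standard daily FAO-56 reference evapotranspiration and a crop-coefficient scaling can be sketched as:

```python
import math

# Minimal daily FAO-56 Penman-Monteith reference evapotranspiration (mm/day).
# The IRRISAT methodology adapts this with Sentinel-2-derived canopy parameters
# (crop height, LAI, albedo); this sketch uses standard reference-crop values.

def et0_fao56(t_mean, rn, g, u2, rh_mean, gamma=0.0665):
    """t_mean [degC], rn/g [MJ m-2 day-1], u2 [m/s], rh_mean [%]."""
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))  # sat. vapour pressure [kPa]
    ea = es * rh_mean / 100.0                                   # actual vapour pressure [kPa]
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                 # slope of es curve [kPa/degC]
    num = 0.408 * delta * (rn - g) + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea)
    return num / (delta + gamma * (1.0 + 0.34 * u2))

# Crop evapotranspiration via a crop coefficient, as in irrigation advisory use;
# the mid-season Kc of 1.1 is a hypothetical value, not from the study.
et0 = et0_fao56(t_mean=25.0, rn=20.0, g=0.0, u2=2.0, rh_mean=50.0)
etc = 1.1 * et0
```

Summing ETc over an irrigation interval, minus effective rainfall, gives a first-order estimate of the volume to supply per unit area.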
[1] Falanga Bolognesi, S., Pasolli, E., Belfiore, O. R., De Michele, C., & D’Urso, G. (2020). Harmonized Landsat 8 and Sentinel-2 time series data to detect irrigated areas: An application in Southern Italy. Remote Sensing, 12(8), 1275.
[2] Sadeghi, M., Babaeian, E., Tuller, M., & Jones, S. B. (2017). The optical trapezoid model: A novel approach to remote sensing of soil moisture applied to Sentinel-2 and Landsat-8 observations. Remote Sensing of Environment, 198, 52-68.
[3] Vuolo, F., D’Urso, G., De Michele, C., Bianchi, B., & Cutting, M. (2015). Satellite-based irrigation advisory services: A common tool for different experiences from Europe to Australia. Agricultural Water Management, 147, 82–95.
Under expected population growth and climate projections, a large part of the world’s population will face increasing water scarcity and food insecurity. The higher food demand resulting from a growing global population requires an increase in food production, which can be met either by expanding the area under cultivation or by intensifying the use of existing agricultural land. At the global scale, these options are assumed to have the potential to fulfil growing food needs. However, regions with unsuitable food production conditions (e.g. unsuited climate, soil, and relief) might have to increase their food production beyond a sustainable level or rely on food imports to ensure food security for their population.
Iran is a prominent example of such conditions: the country has faced rapid population growth under unfavorable political conditions, which promoted the paradigm of self-sufficiency in the production of the main staples, such as wheat and rice. This development has been accompanied by decades-long embargo policies against the country, preventing extensive food imports. Thus, the country has significantly increased local food production during the past 30+ years, although large parts of Iran are unsuited or of limited suitability for agriculture. This development has led to a high and ever-increasing water demand and a very unsustainable use of renewable water sources. At present, Iran uses more than 80% of its total renewable freshwater resources, while 40% is considered to be the limit for ensuring environmental sustainability.
Major parts of Iran experience very limited water availability. More than 80% of the country is under arid (65%) or semi-arid (25%) climate conditions, and 75% of the total precipitation falls during the winter season, when it is not needed by the agricultural sector. Mesgaran et al. (2017) rate almost 80% of Iran’s land as (very) poorly suited or unsuited for cropping. For thousands of years people have had to cope with this situation, and the Persians were once known for their advanced and sustainable water management adapted to local conditions, e.g. building subsurface qanats to efficiently transfer water from the mountains to the adjacent plains and valleys.
However, in the last few decades, rapid socioeconomic development and climatic change towards drier conditions have changed this situation completely. Iran’s population has grown rapidly from approx. 20 million in 1960 to more than 80 million people today, with 70% living in urban areas (27% in the 1950s), which creates high pressure on regionally available water resources. From the 1960s on, Iran started a big modernization project, replacing traditional sustainable irrigation techniques (e.g. qanats) with electric pumps for groundwater exploitation. At the same time, hundreds of dams were built, with more to come, and large water transfer projects across major drainage divides have been put in place to meet the growing water demand of a steadily increasing population.
As a result, the agricultural sector is responsible for approx. 90% of today’s annual water consumption in Iran and thus drives the country’s unsustainable water use. Approximately 50% of the water used for agriculture comes from tapping underground aquifers, making Iran one of the top groundwater miners in the world and resulting in a severe decline of groundwater levels throughout the country.
In our research we analyze the interrelations between vegetation growth, land cover dynamics, and natural water availability in Iran with the goal of evaluating and quantifying vegetation growth related to agricultural land use at different scales from country-wide to regional levels. For this purpose, we use globally available EO products (Sentinel-2, MODIS, GRACE(-FO), ESA annual CCI land cover, Copernicus Global Land Service Land Cover 100m) and global scale reanalysis climate model data (ERA5-Land). This work has been carried out in the frame of the SaWaM project (Seasonal water resources management in semi-arid regions: Transfer of regionalized global information to practice) within the GRoW initiative (Global Resource Water, bmbf-grow.de/en) funded by the German Ministry for Education and Research (BMBF).
Vegetation growth, its temporal dynamics and trends are analyzed by satellite remote sensing data of different scales and periods using MODIS time series data for long-term analyses at national scale and Sentinel-2 time series data for regional analyses of higher spatiotemporal detail covering the last 5 years. In this context we have explored and developed multiple approaches aiming at the differentiation between irrigated and rainfed agriculture based on vegetation growth dynamics and meteorological water availability derived from ERA5-Land reanalysis data (i.e. total precipitation, potential evaporation, temperature, and derived aridity index).
Methodological developments have been accompanied by field work in collaboration with our local Iranian partner – the Khuzestan Water and Power Authority (KWPA) – conducted within the Karun Basin, which hosts Iran’s longest river and its largest by discharge. So far, five dams have been built on the main Karun river alone to generate hydroelectric power and provide flood control. Thanks to this collaboration with local Iranian partners, we have been able to validate our remote sensing based results and evaluate the applicability of existing global data products at a regional scale, focusing on selected areas within the Karun Basin.
At national scale we analyzed the spatiotemporal development of agricultural areas based on remotely-sensed vegetation growth dynamics and dynamics of the hydrological water storage (GRACE(-FO)) and meteorological conditions (ERA5-Land). Despite increasing hydrometeorological water scarcity, Iran has experienced an agricultural expansion of approx. 27,000 km² (9%) between 1992 and 2019 and an intensification of cultivation within existing agricultural areas, indicated by significant positive vegetation trends within 28% of the existing croplands (i.e. approx. 48,000 km²).
This agricultural intensification is particularly evident in the largely cultivated relatively wetter northwestern basins of Iran under mainly semi-arid conditions, where more than 95% of the observed significant agricultural vegetation trends are positive. Besides these wetter and thus more suitable areas for agricultural use, positive vegetation trends are also evident in the central and southeastern parts of Iran under (hyper-)arid conditions, where limits in natural surface water availability and high evapotranspiration rates hinder or prevent natural vegetation growth unless intense irrigation is put into place. Overall, the results show a substantial agricultural expansion and intensification during the last two decades despite decreasing hydrometeorological water availability and a cultivation of (hyper-)arid land despite its natural unsuitability for vegetation growth.
Besides this main tendency towards intensified agriculture, degrading agriculture (i.e. cropland with a negative vegetation trend) could also be observed. In total, 6% of all agricultural areas in Iran are characterized by a significant negative vegetation trend. Moreover, we analyzed the vegetation trends against aridity and irrigation intensity, where the latter is represented by a proxy defined as the probability that the observed vegetation growth requires additional non-meteorological water supply. These results have revealed an increasing share of negative agricultural vegetation trends towards more arid conditions and higher irrigation intensities.
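The notion of a "significant vegetation trend" used here can be sketched as a per-pixel least-squares slope screened by its t-statistic; the annual NDVI series below is synthetic, and the ~2.0 threshold is only an approximation of the 95% confidence level for a series of this length:

```python
import numpy as np

# Sketch of a per-pixel vegetation trend test: OLS slope of an annual NDVI
# series with its t-statistic screened against ~2.0 (roughly p < 0.05 for a
# two-decade series). Illustrative values, not the MODIS data of the study.

def ndvi_trend(years, ndvi):
    """Return (slope per year, t-statistic of the slope)."""
    x = years - years.mean()
    slope = np.sum(x * (ndvi - ndvi.mean())) / np.sum(x ** 2)
    resid = ndvi - (ndvi.mean() + slope * x)
    dof = len(years) - 2
    se = np.sqrt(np.sum(resid ** 2) / dof / np.sum(x ** 2))
    return slope, slope / se

years = np.arange(2000, 2020)
rng = np.random.default_rng(1)
# Hypothetical intensifying cropland pixel: slow NDVI increase plus noise
intensifying = 0.30 + 0.004 * (years - 2000) + rng.normal(0, 0.01, 20)
slope, tstat = ndvi_trend(years, intensifying)
significant = abs(tstat) > 2.0   # ~95% confidence for 18 degrees of freedom
```

A degrading pixel would show the mirrored behaviour: a negative slope with |t| above the threshold.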
Among intensively irrigated regions, the share of areas characterized by significant negative vegetation trends amounts to approx. 50% and 70% under arid and hyper-arid conditions, respectively. These findings suggest that in these dry regions unsustainable water use has by now reached a level at which cultivation itself becomes unsustainable, eventually resulting in reduced agricultural intensity or even uncultivated, abandoned fields (e.g. in the region around the city of Isfahan). Our results have also shown that in the central basins of Iran, for up to 30% of the agricultural area, the vegetation growth dynamic is highly positively correlated with the decreasing total water storage (TWS), indicating that the reduced water availability (i.e. reduced groundwater storage due to irrigation) results in decreasing cultivation intensity as a long-term consequence.
The obtained results demonstrate the potential of satellite-based time series analysis over large areas for analyzing vegetation growth in combination with meteorological parameters, in order to assess the sustainability of agricultural land use relative to available water resources under (semi-)arid climatic conditions for the whole of Iran. The results are of so far unprecedented spatiotemporal detail, allowing subsequent analyses at different spatial and temporal scales as well as their continuation into the future, increasingly relying on higher-resolution Sentinel-2 data, whose global coverage enables transferring the developed approach to other (semi-)arid regions worldwide.
Soil moisture is one of the most relevant geophysical variables and plays an important role in a wide range of applications such as climatology, agricultural practices and drought monitoring. In 2010, it was recognised as an Essential Climate Variable (ECV) by the Global Climate Observing System (GCOS), and its importance has been stressed in several scientific projects and missions. Over the years, a major effort has been dedicated to the estimation of the water content in the soil from airborne and satellite SAR (Synthetic Aperture Radar) data, which ensure high spatial resolution in all illumination conditions and almost all weather conditions. A contribution to this effort also derives from the increasing availability of data collected at different frequency bands and polarizations (i.e., dual-pol and full-pol data). The objective of this work is to compare the performance offered by data acquired at C-band and L-band in order to evaluate their sensitivity to soil moisture in the context of a retrieval process. The analysis is carried out considering a time series of dual-pol Sentinel-1A (C-band) and full-pol SAOCOM-1A (L-band) acquisitions over an agricultural area located near the city of Monte Buey in the Córdoba Province, Argentina. In particular, this area belongs to the core validation sites of the SMAP (Soil Moisture Active Passive) mission and is a validation site for the SAOCOM soil moisture products generated by the Argentinian Space Agency (CONAE). During the 2019-2020 season, a field campaign was conducted by CONAE in this region simultaneously with the SAOCOM-1A acquisitions, allowing important in situ measurements to be obtained, such as soil moisture content and other variables related to vegetation (plant height, growth stage), for several regions of interest (i.e., crop fields).
In this work, SLC (Single Look Complex) quad-pol data collected between October 2019 and February 2020 during the descending SAOCOM-1A overpasses were considered, with a revisit time of sixteen days and a resolution of 10 m x 6 m in ground range and azimuth, respectively. At the same time, a time series of Sentinel-1A data was also considered, with a revisit time of twelve days. In the case of Sentinel-1A, the GRD (Ground Range Detected) data, with a resolution of 10 m x 10 m in ground range and azimuth, were used. Since the Sentinel-1A acquisition days differ from those of the SAOCOM mission, the study also considered soil moisture data recorded at stations belonging to a permanent network in addition to the in-situ data collected during the 2019-2020 field campaign. The temporal evolution of the backscattering coefficient sigma-nought at different polarizations was extracted from the calibrated and geocoded data for the different agricultural fields and compared to soil moisture variations. The sigma-nought trend acquired over corn fields has also been interpreted with the aid of the electromagnetic model developed at Tor Vergata University. The latter is a fully polarimetric model that allows separation of the contributions coming from different scattering sources in the vegetation canopy. A comparison of simulated and measured polarimetric signatures will be presented. In addition, the evolution of the backscattering coefficient was also related to other parameters such as the Radar Vegetation Index and NDVI variations from the Sentinel-2 satellite. The study provides insight into the possible synergy of a long-term stack of C-band and L-band radar data for soil moisture monitoring. It is made possible by the systematic acquisition of SAOCOM-1A and Sentinel-1 over a site well equipped with ground truth data.
This is of extreme importance in view of the future possibilities offered by the launch of the NASA-ISRO NISAR mission (L-band) and especially the ESA ROSE-L mission that will operate in a synchronous manner with Sentinel-1.
Water abstractions for irrigation are estimated to account for 70% of total water abstractions on the global scale and the sector has a significant impact on the water cycle. These impacts are rarely taken into account in hydrological modeling studies and the models which do incorporate irrigation processes typically require maps of irrigated areas as input. However, the spatial and temporal mapping of irrigated areas in many places is still incomplete. A number of global maps exist, but these products have a coarse resolution and represent a snapshot in time while irrigated area extent is dynamic across space and time depending on multiple factors such as water availability and weather. For many applications in water and agriculture such as basin-scale modeling, dynamic irrigation maps covering the required area are needed for better estimation of abstracted water.
The main objective of this study is to develop an approach to estimate dynamic irrigated area over a hydrological year using a remote sensing-driven pixel-based soil water balance model (PixSWaB, Seyoum et al., in preparation). The model has low data input requirements, which are satisfied by remote sensing and global datasets, has a limited number of calibration parameters, and can be run at any chosen spatial resolution. This newly developed model computes green evapotranspiration (ETgreen) by tracking the amount of water available for ET in the soil from precipitation; blue evapotranspiration (ETblue) is then obtained as the difference between remotely sensed actual ET and ETgreen. While irrigated areas can be provided as input to the model to adjust a consumed fraction and estimate water supply, the model can be run without this information and produce maps of ETblue. The irrigated area is then derived for any region of interest by applying an unsupervised Dynamic Time Warping (DTW) approach to the time series of monthly ETblue over a hydrological year. As remotely sensed ET is the main driver in the computation of the irrigated areas, the model is run here at 3 different resolutions: at 100 m in the Litani River Basin in Lebanon with WaPOR Level 2 ET data, at 250 m in the Urmia Lake Basin in Iran with WaPOR Level 1 ET data, and at 1 km in the Krishna River Basin in India with SSEBop ET data. The advantage of this method over traditional irrigation mapping using multispectral data is that it does not require local training samples and it takes into consideration the varying levels of water depletion between irrigated and non-irrigated areas, which may not appear when considering only spectral signatures.
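The two model steps above can be sketched as follows, with illustrative monthly values in place of the WaPOR/SSEBop inputs and a classical DTW distance standing in for the unsupervised DTW clustering:

```python
import numpy as np

# Sketch of the two steps described above: (1) blue ET as the positive residual
# between actual ET and precipitation-supported (green) ET, (2) a classical
# dynamic-time-warping distance to compare monthly ETblue series. All numbers
# are illustrative; the study uses WaPOR/SSEBop ET and the PixSWaB model.

def dtw_distance(a, b):
    """Classical O(n*m) dynamic time warping with absolute-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

et_actual = np.array([30, 35, 60, 95, 120, 130, 125, 90, 50, 35, 30, 28.0])
et_green  = np.array([28, 33, 45, 55, 50, 40, 35, 30, 30, 32, 29, 27.0])
et_blue   = np.maximum(et_actual - et_green, 0.0)   # mm/month, never negative

# Hypothetical reference patterns: a summer irrigation peak vs. no blue ET
irrigated_pattern = np.array([0, 0, 10, 40, 70, 90, 90, 60, 20, 0, 0, 0.0])
rainfed_pattern   = np.zeros(12)
is_irrigated = (dtw_distance(et_blue, irrigated_pattern)
                < dtw_distance(et_blue, rainfed_pattern))
```

In the unsupervised setting, DTW distances between all pixel series feed a clustering step instead of a comparison against fixed reference patterns.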
Our results are validated using high-resolution irrigated area maps derived from Sentinel-1 data. These maps were derived by applying supervised DTW to a time series of backscatter signals, comparing the pixel-wise temporal signals to a set of known temporal patterns obtained from irrigated-area pixels (training samples) in the region. In addition, the outputs are compared to available maps, including the Global Irrigated Area Map (GIAM) released by the International Water Management Institute (IWMI), and to national statistical estimates and maps where available.
While irrigation on the world’s roughly 1,560 million hectares of agricultural cropland, which consumes ~70% of total freshwater, is increasingly subject to optimized irrigation management, it is largely overlooked that pasture and grass areas cover almost twice that area worldwide, around 3,200 million hectares. Although only small parts of these are irrigated relative to arable land, there are grass areas, especially in urban settings, that consume large amounts of fresh water and whose water consumption has so far been largely ignored.
This is particularly glaring on the extensive turf areas of commercial airports, where sufficient irrigation plays a vital role. In addition to dry spells induced by climate change, these areas are exposed to high thermal stress from hot engine exhaust. If the surfaces become too dry, whirled-up dust poses a high erosion risk that can cause engine damage to aircraft. On the other hand, wet areas attract birds, with the associated risk of bird strikes, which always pose considerable safety risks to air traffic.
It is therefore necessary to strike the balance between a sufficient but not excessive water supply in irrigation management and to develop suitable methods for this. This typical optimization problem for increasing efficiency also targets resource conservation with regard to drinking-water consumption, as well as minimizing the replacement of turf plots, made of an expensive special turf developed for airports, that are destroyed by drought.
In the ESA-funded TIMM (Turf Irrigation Moisture Monitoring) project, Spatial Business Integration, Germany, has developed an irrigation map that is currently being tested at one of Europe’s largest commercial airports, Frankfurt Airport. Soil moisture maps are derived from SAR data, calibrated with permanently installed terrestrial soil moisture instruments, and combined with short-term weather forecast data as the basis for creating an irrigation map valid until the next satellite overpass. In addition to the roll-out to other commercial airports, the methods developed here can be transferred to urban green areas such as parks or golf courses.
Irrigated agriculture accounts for more than 80% of Saudi Arabia's water demand, consuming more than 20 billion cubic meters of non-renewable groundwater resources each year. The depletion of aquifers threatens water security in many other regions of the world, yet abstraction is often not managed or even monitored.
Earth observation from satellite platforms coupled with models enables mapping of irrigated areas and their associated water use at large scales. For example, object-based image analysis of the maximum annual NDVI allows thousands of center-pivot fields (as well as orchard and plantation fields) to be delineated on a semi-automated basis. Combining this information with weather data and crop water-use models then enables the estimation of groundwater use from individual fields up to regional scales.
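The object-based delineation idea can be sketched as thresholding an annual-maximum NDVI composite and labelling connected components as candidate fields; a production workflow would add shape criteria (e.g. circularity for centre pivots). The tiny NDVI grid and the 0.5 threshold are illustrative assumptions:

```python
import numpy as np

# Sketch: threshold the annual-maximum NDVI composite and label connected
# components as candidate irrigated fields. Illustrative grid, not real data.

def label_components(mask):
    """4-connected component labelling via iterative flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue                      # already assigned to a component
        current += 1
        stack = [start]
        while stack:
            i, j = stack.pop()
            if labels[i, j] or not mask[i, j]:
                continue
            labels[i, j] = current
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]:
                    stack.append((ni, nj))
    return labels, current

max_ndvi = np.array([[0.2, 0.2, 0.2, 0.2, 0.2],
                     [0.2, 0.7, 0.7, 0.2, 0.2],
                     [0.2, 0.7, 0.7, 0.2, 0.6],
                     [0.2, 0.2, 0.2, 0.2, 0.6]])
labels, n_fields = label_components(max_ndvi > 0.5)
```

Each labelled object then becomes the spatial unit for which per-field water use is aggregated downstream.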
While these efforts have demonstrated application potential at regional to national scales, adapting existing frameworks into an operational and fully automated product is a substantial technical challenge. It requires acquiring and processing terabytes of satellite imagery and numerical weather prediction data, and managing all intermediate and final outputs as well as computing and storage resources. Water management agencies often lack the technical capability and/or expertise to manage such research-level processing frameworks.
Fortunately, interest from the private sector in planetary-scale geospatial data analysis has resulted in the creation of cloud-based platforms dedicated to Earth observation research. One such platform is the Google Earth Engine (GEE), which hosts petabytes of geospatial data, including the collections of images from the Landsat (USGS/NASA) and Sentinel (ESA/Copernicus) missions. More notably, GEE also offers a dedicated massive computing infrastructure to explore and analyze these datasets directly with free access to both the data and computing resources for academic research.
To take full advantage of these resources, it is important to understand the available processing paradigms and their best fit for different mapping applications. For example, some machine learning methods (including classification and clustering) can be processed directly on GEE, while others such as deep-learning still require some “on-premises” resources and therefore data retrieval as well.
In this work, we explored the potential of adapting two important components of an irrigated water use monitoring framework for direct use in GEE. First, we explored several machine learning approaches for automated mapping of center-pivot fields at national scale. Second, we adapted two crop water-use models (PT-JPL and TSEB) to GEE with direct use of Landsat-8 and Sentinel-2 imagery and meteorological data from the European Centre for Medium-Range Weather Forecasts (ECMWF). Importantly, model development was structured so that the same software can be used both with on-premises data and resources and directly within GEE. Preliminary tests at national scale, fully automated and run directly on GEE, show the potential of mapping tens of thousands of center-pivot fields, including previously unmapped remote and/or isolated irrigated fields. Retrieving water use estimates for individual fields is possible entirely within GEE, without the need to download a single satellite image.
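Of the two water-use models mentioned, PT-JPL builds on the Priestley-Taylor formulation; the sketch below shows only that potential-ET core (the full model multiplies in eco-physiological constraint functions, which are omitted here), with illustrative forcing values:

```python
import math

# Minimal sketch of the Priestley-Taylor potential evaporation at the core of
# PT-JPL. The full PT-JPL model scales this by soil/canopy constraint
# functions; those are omitted here. Forcing values are illustrative.

def priestley_taylor_et(t_mean, rn, g, alpha=1.26, gamma=0.0665, lam=2.45):
    """Potential ET [mm/day]; t_mean [degC], rn/g [MJ m-2 day-1]."""
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))  # sat. vapour pressure [kPa]
    delta = 4098.0 * es / (t_mean + 237.3) ** 2                 # slope of es curve [kPa/degC]
    return alpha * delta / (delta + gamma) * (rn - g) / lam     # lam: latent heat [MJ/kg]

pet = priestley_taylor_et(t_mean=30.0, rn=22.0, g=0.0)
```

On GEE, the same arithmetic is expressed as per-pixel image-band operations rather than scalar Python, which is what makes the no-download workflow possible.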
These components will form part of a cloud-based operational product with the goal of providing water management agencies with a tool to map irrigated agriculture and estimate its water use.
Here, maps of primary production in the coastal waters of the East Sea were generated using sea surface chlorophyll-a concentration (CHL), photosynthetically available radiation (PAR), and euphotic depth derived from GOCI, along with sea surface temperature (SST) from foreign satellites, as input parameters, and a sensitivity analysis was carried out for each parameter. On 25 July 2013, when extensive cold waters appeared, and on 13 August 2013, when a large harmful algal bloom was present in the study area, high productivities were obtained, with averages of 1,012 and 1,945 mg C m-2 d-1, respectively. On 25 August 2013, when the cold waters and red tide had retreated, the average was 778 mg C m-2 d-1, similar to the results of previous analyses. The sensitivity analysis showed that PAR did not significantly affect the primary production results, whereas euphotic depth and CHL showed above-average sensitivity. In particular, SST had a large influence on the results, implying that an error in SST could lead to a large error in the estimated primary production. This study showed that GOCI data are suitable for primary production studies, and the accuracy of the input parameters may be improved by using GOCI, which can acquire images 8 times a day, making it more accurate than foreign polar-orbiting satellites and consequently enabling highly accurate estimates of primary production.
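The one-at-a-time sensitivity analysis described above can be illustrated with a depth-integrated model of the VGPM family (Behrenfeld and Falkowski, 1997); the abstract does not name its exact algorithm, so both the model choice and the input values below are assumptions. Note that SST enters such models through the optimal photosynthetic rate (PBopt here), which is consistent with the strong SST sensitivity reported:

```python
# Illustrative depth-integrated primary production model of the VGPM family.
# The abstract's exact algorithm is not stated; this is only a sketch, and the
# input values are assumptions, not GOCI retrievals.

def vgpm_pp(chl, pb_opt, par, zeu, day_len):
    """PP [mg C m-2 d-1]; chl [mg m-3], pb_opt [mg C (mg Chl)-1 h-1],
    par [mol photons m-2 d-1], zeu [m], day_len [h]. In VGPM, pb_opt is a
    function of SST, which is how SST errors propagate into PP."""
    return 0.66125 * pb_opt * par / (par + 4.1) * chl * zeu * day_len

base = vgpm_pp(chl=2.0, pb_opt=4.0, par=40.0, zeu=30.0, day_len=13.0)

# One-at-a-time sensitivity: perturb each input by +10%, record relative response
inputs = dict(chl=2.0, pb_opt=4.0, par=40.0, zeu=30.0, day_len=13.0)
sensitivity = {}
for name, value in inputs.items():
    perturbed = dict(inputs, **{name: value * 1.1})
    sensitivity[name] = vgpm_pp(**perturbed) / base - 1.0
```

With these (assumed) inputs the model responds linearly to CHL, euphotic depth and PBopt, while the saturating light term damps the PAR response, mirroring the abstract's finding that PAR was the least influential input.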
The increase of atmospheric CO₂ levels by about 10% since the beginning of the 21st century, and its impact on the Earth's climate and the biosphere, represent a major concern. A compilation of in-situ data over the global coastal ocean indicates that the world's coastal shelves absorb about 17% of the oceanic CO₂ influx, although these areas represent only 7% of the oceanic surface area (Borges, 2005; Cai, 2011; Laruelle et al., 2010). However, large uncertainties in carbon fluxes in the coastal margins exist due to the undersampling of the coastal ocean in both space and time. Satellite remote sensing, in conjunction with in-situ data, allows the collection of various physical and biological parameters at regional and global scales, at temporal resolutions not accessible from other in-situ observation methods.
The main objective of the CO2COAST project (ANR funding) is to estimate the surface-ocean CO₂ partial pressure, pCO₂w, the CO₂ flux, and associated uncertainties from satellite remote sensing over the global coastal waters at high spatial resolution (1 km × 1 km). Based on these estimates, the respective contributions of estuaries and continental shelves to the CO₂ fluxes will be evaluated over the global coastal waters. The global coastal database used to accomplish this aim consists of 11.36 × 10⁶ in situ data points of pCO₂w (SOCAT database), for which a satellite match-up database of 580 × 10³ points (1997 to 2020) has been built (first at 4 km, and later at 1 km). This match-up database gathers in-situ pCO₂w and satellite measurements of remote-sensing reflectance, Rrs; chlorophyll concentration, Chl; absorption by colored dissolved organic matter, acdom; sea surface salinity, SSS; and temperature, SST. Such a multidimensional dataset requires “intelligent investigation” using machine learning (ML) methods to exploit complex spatial and temporal structures, find patterns, and fuse heterogeneous sources of information efficiently.
In a preliminary study, an ML algorithm was applied to test two approaches, global and class-based regression, for estimating pCO₂w over global coastal waters. The results favored the class-based approach, highlighting that the relationships between the physical and biogeochemical factors explaining the variability of pCO₂w differ from one class to another, indicating regional dependencies. In that study, the input parameters were Rrs, SST, SSS, and the coordinates (Lat, Lon). In a second stage, however, testing other configurations involving different input parameters (time, Chl, acdom, etc.) is essential to evaluate the contribution of these parameters to accurately estimating pCO₂w. For that, another ML algorithm is applied: 2S-SOM (Yala et al., 2019; El Hourany et al., 2021), a variant of the self-organizing map algorithm (SOM; Kohonen, 1993). Through unsupervised learning and neural-network classification, the dataset is finely clustered while the weights of each parameter in each cluster are evaluated; these weights are assigned automatically by minimizing the intra-class variance of each cluster. The application of such an ML algorithm will allow a better understanding of the drivers of pCO₂w variability and its estimation from biotic and abiotic parameters such as Rrs, SST, SSS, acdom and Chl.
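For illustration, the clustering step underlying SOM-type methods can be sketched in a few lines. The snippet below is a plain Kohonen SOM on synthetic data, not the 2S-SOM variant used in the study (which additionally learns per-cluster variable weights); the map size, learning schedule and synthetic features are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "match-up" samples: three clusters whose columns could stand
# for normalized SST, SSS and an Rrs-like feature
X = np.vstack([rng.normal(m, 0.1, size=(100, 3)) for m in (0.2, 0.5, 0.8)])

# 4x4 map of 3-D prototype vectors
rows, cols, dim = 4, 4, X.shape[1]
W = rng.random((rows * cols, dim))
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)

n_iter = 2000
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    lr = 0.5 * (1 - t / n_iter)               # decaying learning rate
    sigma = 2.0 * (1 - t / n_iter) + 0.3      # decaying neighbourhood radius
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))    # best-matching unit
    d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)     # grid distance to the BMU
    h = np.exp(-d2 / (2 * sigma ** 2))             # neighbourhood function
    W += lr * h[:, None] * (x - W)                 # pull prototypes toward x

# quantization error: mean distance from each sample to its BMU
qe = np.mean([np.min(np.linalg.norm(W - x, axis=1)) for x in X])
print(f"quantization error: {qe:.3f}")
```

After training, nearby map nodes hold similar prototypes, so samples assigned to the same node form the fine clusters within which regressions (or parameter weights, in 2S-SOM) can then be evaluated.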
The methodology under development is instrumental in building a coherent and robust satellite database of coastal pCO₂w at high spatio-temporal resolution (daily, 1-4 km product, 1997-2021).
The Particulate Organic Carbon (POC) pool plays a fundamental role in exporting carbon from the surface to the deep ocean through a series of biogeochemical processes known as the ocean biological carbon pump. However, to capture the dynamics of the POC pool and its role in the biologically-mediated part of the ocean carbon cycle, it is essential to capture consistent long-term time series data with adequate spatial resolution suitable for climate studies. The Ocean Colour Climate Change Initiative (OC-CCI) version 5 products (chlorophyll-a concentration, remote-sensing reflectances, and inherent optical properties) provide over two decades of consistent, error characterized, multi-sensor merged satellite data (1997-2020). Here we evaluate eight candidate POC algorithms applied to the OC-CCI data. The tested algorithms included those that had shown relatively good performance in earlier inter-comparison studies, as well as new algorithms that have emerged since then. All candidate algorithms were carefully validated using statistical metrics suggested by the original developers and the largest collection of in situ and satellite match-up data that we have been able to assemble. The algorithm that performed best was then tuned using our compiled match-up dataset to estimate the global POC pool in the mixed layer with high confidence from satellite observations. The relationship between POC and phytoplankton carbon and chlorophyll-a was analysed to further assess the performance of the POC algorithms. The new satellite-derived POC products were then used for time series analysis to investigate trends in the global POC pool.
Phytoplankton in the sunlit layer of the ocean form the base of the marine food web, fueling fisheries, and regulate key biogeochemical processes such as the export of carbon to the deep ocean. Phytoplankton community structure varies across ocean biomes, and different phytoplankton groups drive the marine ecosystem and biogeochemical processes differently. Variations in phytoplankton composition therefore influence the entire ocean environment, specifically ocean energy transfer, deep-ocean carbon export and water quality (and thereby also human health, e.g., when certain species cause harmful algal blooms). As one of the algorithms deriving phytoplankton composition from spaceborne data within the framework of the EU Copernicus Marine Service (CMEMS), the OLCI-PFT algorithm was developed using multi-spectral satellite data collocated with an extensive in-situ PFT data set based on HPLC pigments, together with sea surface temperature data (Xi et al. 2020, 2021; https://marine.copernicus.eu/). Using multi-sensor merged products and Sentinel-3 OLCI data, the algorithm provides global chlorophyll a with per-pixel uncertainty for diatoms, haptophytes, dinoflagellates, chlorophytes and prokaryotic phytoplankton, spanning the period from 2002 until today. Because the ocean color sensors differ in lifespan and radiometric characteristics, it is crucial to evaluate the CMEMS PFT products to provide quality-assured data for consistent long-term monitoring of the phytoplankton community structure. In this study, using in-situ phytoplankton data (HPLC pigments) and hyperspectral optical data collected during expeditions in the trans-Atlantic region, we aim to 1) validate the CMEMS PFT products and investigate the continuity of the PFT data derived from different satellites, and 2) deliver two decades of consistent PFT products for time series analysis with PFT uncertainty accounted for.
For the latter, we expect to determine the variation of the surface phytoplankton community structure in different biogeochemical provinces.
References
Xi, H., Losa, S.N., Mangin, A., Garnesson, P., Bretagnon, M., Demaria, J., Soppa, M.A., d’Andon, O.H.F., Bracher, A., 2021. Global chlorophyll a concentrations of phytoplankton functional types with detailed uncertainty assessment using multi-sensor ocean color and sea surface temperature satellite products. Journal of Geophysical Research-Oceans, doi: 10.1029/2020JC017127
Xi, H., Losa, S.N., Mangin, A., Soppa, M.A., Garnesson, P., Demaria, J., Liu, Y., d’Andon, O.H.F., Bracher, A., 2020. A global retrieval algorithm of phytoplankton functional types: Towards the applications to CMEMS GlobColour merged products and OLCI data, Remote Sensing of Environment, doi:10.1016/j.rse.2020.111704
Primary production by marine phytoplankton is one of the largest fluxes of carbon on our planet. In the past few decades, considerable progress has been made in estimating global primary production at high spatial and temporal scales by combining in situ measurements of photosynthesis-irradiance (P-I) parameters with remote-sensing observations of phytoplankton biomass. One of the major challenges in this approach lies in the assignment of the appropriate values for these model parameters that define the photosynthetic response of phytoplankton to the light field. In the present study, a global database of in situ measurements of P-I parameters and a 23-year record of climate-quality satellite observations were used to assess global primary production and its variability with seasons and locations as well as between years. In addition, the sensitivity of the computed primary production to potential changes in the photosynthetic response of phytoplankton cells under changing environmental conditions was investigated. Global annual primary production varied from 48.7 to 52.5 Gt C/yr over the period of 1998-2020. Inter-annual changes in global primary production did not follow a linear trend and regional differences in the magnitude and direction of change in primary production were observed. Trends in primary production followed directly from changes in chlorophyll-a and were related to changes in the physico-chemical conditions of the water column due to inter-annual and multi-decadal climate oscillations. Moreover, the sensitivity analysis in which P-I parameters were adjusted by ±1 standard deviation showed the importance of accurately assigning photosynthetic parameters in global and regional calculations of primary production. The light-saturation parameters of the P-I curve showed strong relationships with environmental variables such as temperature and had a practically one-to-one relationship with the magnitude of change in primary production. In the future, such empirical relationships could potentially be used for a more dynamic assignment of photosynthetic rates in the estimation of global primary production. Relationships between the initial slope of the P-I curve and environmental co-variables were more elusive.
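As a minimal illustration of why the assignment of P-I parameters matters, the sketch below perturbs the light-saturation parameter of a Platt-type P-I curve by a hypothetical ±1 standard deviation and reports the change in light-integrated production; all parameter values are illustrative, not taken from the database described above.

```python
import numpy as np

def pb(irr, alpha, pmb):
    """Photosynthesis-irradiance (P-I) response without photoinhibition,
    Platt et al. (1980) form: P^B = P_m^B * (1 - exp(-alpha * I / P_m^B))."""
    return pmb * (1.0 - np.exp(-alpha * irr / pmb))

irr = np.linspace(0.0, 1500.0, 301)   # irradiance, umol photons m-2 s-1
alpha, pmb = 0.03, 5.0                # illustrative parameter values
sd_pmb = 1.0                          # hypothetical +-1 SD on P_m^B

# average the P-I response over the light range for the baseline and the
# perturbed assimilation number, mimicking the sensitivity experiment
base = pb(irr, alpha, pmb).mean()
hi   = pb(irr, alpha, pmb + sd_pmb).mean()
lo   = pb(irr, alpha, pmb - sd_pmb).mean()
print(f"+1 SD in P_m^B changes mean P by {100*(hi/base - 1):+.1f}%")
print(f"-1 SD in P_m^B changes mean P by {100*(lo/base - 1):+.1f}%")
```

At light saturation the response scales almost one-to-one with P_m^B, while at low light it is controlled by the initial slope, which is why the perturbation propagates nearly proportionally into integrated production.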
Phytoplankton are responsible for releasing half of the world's oxygen and for removing large amounts of carbon dioxide from surface waters. Despite many studies on the topic conducted in the past decade, we are still far from a good understanding of the ongoing rapid changes in the Arctic Ocean and of how they will affect phytoplankton and the whole ecosystem, mainly because the scientific community cannot keep up with the pace of these changes. An example is the spread in Net Primary Production (NPP) modeling estimates, which differ by a factor of two globally and by a factor of fifty when only the Arctic is considered. Net Primary Production (NPP) and Net Community Production (NCP) are distinct quantities: NPP reflects the growth rates of phytoplankton, while NCP is NPP minus heterotrophic respiration, thereby also accounting for the heterotrophs. We are studying the relationship between NCP, NPP and various environmental factors. Our hypotheses are: 1) more accurate NCP and NPP estimates can be obtained using regionally developed algorithms based on optical in-situ data, and 2) the variability of phytoplankton is closely linked to water stratification in Atlantic Waters and to dissolved organic matter influenced by river runoff in the East Greenland Current. We use the in-situ data to validate the Greenland Sea parameterization of a global satellite primary production model, modernise the input empirical parametrisations, and analyse the factors influencing primary production patterns in the region. The in-situ data come from the Institute of Oceanology of the Polish Academy of Sciences expeditions to the Greenland Sea, conducted every year since 2013, together with the Norwegian Polar Institute Fram Strait expedition in 2021, representing a large bio-optical dataset that is still partly unpublished. In addition, we use satellite GlobColour chlorophyll, photosynthetically active radiation and AVHRR sea surface temperature.
The resulting regional NCP and NPP models can be further used for system modelling of the dynamics of the Arctic Ocean ecosystem and can form one of the components in the ecosystem-based management of the region.
The dramatically changing climate in the Arctic is altering the hydrology and biogeochemistry of rivers. Conversely, river water can be a powerful indicator of the impact of climate change, since river biogeochemistry and discharge integrate upstream terrestrial and aquatic environmental processes over a defined watershed. In Arctic catchments, permafrost is warming and thawing, releasing much organic carbon that was previously frozen and thus inactive in the carbon cycle. Long-term global Climate Change Initiative (CCI) remote-sensing datasets are a powerful tool to observe changes across terrestrial, coastal and marine environments at high frequency and over long time series.
In this study, we show the potential of CCI Ocean Color data to retrieve the optical properties of the dissolved organic matter (CDOM) and relate them to riverine organic carbon fluxes on a pan-Arctic scale for the last 23 years. Further, we relate riverine discharge and terrestrial CCI and ERA5 Essential Climate Variables (ECVs) to environmental processes that drive the seasonal and interannual variability as well as long-term trends.
Arctic river water is optically dominated by coloured dissolved organic matter, which allows rather simple band-ratio retrievals to outperform more complex retrieval algorithms (e.g. semi-analytical and neural-network approaches). Here, we use the ratio between 665 nm and 512 nm and calibrate it with an extensive in situ dataset.
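Schematically, such a retrieval chains two empirical relationships: band ratio to CDOM absorption, then CDOM absorption to DOC. The coefficients below are placeholders for illustration only, not the calibration fitted in this study.

```python
import numpy as np

# Hypothetical calibration constants (placeholders, not the study's values):
A_CDOM, B_CDOM = 0.8, 1.4    # a_CDOM(440) = A * ratio**B          (m-1)
S_DOC, I_DOC = 55.0, 20.0    # DOC = S * a_CDOM(440) + I           (umol L-1)

def doc_from_rrs(rrs_665, rrs_512):
    """Two-step band-ratio retrieval: Rrs(665)/Rrs(512) -> CDOM -> DOC."""
    ratio = np.asarray(rrs_665) / np.asarray(rrs_512)
    a_cdom = A_CDOM * ratio ** B_CDOM   # power-law ratio-to-absorption step
    return S_DOC * a_cdom + I_DOC       # linear CDOM-to-DOC step

# example: a river-influenced (high-ratio) and a marine (low-ratio) pixel
print(doc_from_rrs([0.004, 0.001], [0.003, 0.004]))
```

The CDOM-rich river pixel yields a much higher DOC estimate than the marine pixel, which is the contrast the fluvial-marine transition-zone extraction exploits.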
The results show that extraction of the CCI Ocean Colour band ratio in the fluvial-marine transition zones, in combination with known relationships between the optical properties of organic matter and its concentration, provides excellent estimates of dissolved organic carbon (DOC) when compared to in situ data. The seasonal and interannual variability in DOC export by Arctic rivers is dominantly driven by large-scale precipitation anomalies within the river catchments. CCI Permafrost ECVs such as permafrost extent, active-layer thickness and ground temperature show alarming trends of thaw for 1997 to 2018, whose influence on the long-term export of organic carbon from land to the Arctic Ocean is so far unexplored.
Continuing degradation of permafrost in the catchments in conjunction with projected increases in river discharge will have an impact on the global carbon cycle, aquatic ecosystems and the global climate that is currently difficult to assess due to the lack of time series and process studies. Long-term global ECVs constitute a cornerstone of such assessments and will help quantifying impacts of thawing terrestrial permafrost on aquatic ecosystems and the Arctic Ocean in particular.
Culture studies have repeatedly demonstrated that the parameters describing the photosynthetic response of marine phytoplankton can vary widely under different growth conditions (light, nutrients and temperature) and between species. Yet several remote-sensing estimates of marine primary production either assign a single set of parameters for a given region/season or use global empirical relationships (e.g. the maximum photosynthetic rate as a function of sea-surface temperature). Our inability to develop a more mechanistic approach to parameter assignment is due to an uneven distribution of experimental observations in space and time, and to a lack of information on phytoplankton community structure and/or environmental conditions at the time the experiments were made.
One of the aims of the ESA BICEP project is to expand existing global datasets of the photosynthesis-irradiance (PE) parameters. This data-mining effort has dramatically improved both the spatial and the temporal coverage of these parameters, which are critical to convert maps of surface chlorophyll into estimates of water-column primary production. We have used the >10,000 experimental measurements and metadata assembled as part of the BICEP project to explore how changes in environmental forcing and the taxonomic structure of phytoplankton communities are related to variability in the PE parameters. Here we focus on 'regions of interest' that cover the four ocean biomes defined by Longhurst. These ocean biomes (Coastal, Polar, Trades and Westerlies) represent the primary unit of biogeographic division of the global ocean and provide a useful way of examining differences in variability caused by large-scale changes in environmental forcing. Our dataset reveals biome-specific differences in the relationship between taxonomic composition and phytoplankton photophysiology. By combining flow-cytometric counts and HPLC pigment data in the Trades biome, we show how variation in photoacclimatory status (intracellular pigment concentration and relative concentration of photoprotective pigments) is strongly related to photosynthetic performance. The patterns of variability observed in this study can be used to improve the assignment of PE parameters for satellite-based studies of ocean primary production.
The relevance of the ocean in the global uptake of carbon dioxide is well established; despite its importance, the carbon cycle is not fully understood because of its complexity, and it is not clear how climate change may affect the cycle itself and its efficiency in absorbing atmospheric carbon.
In the Mediterranean Sea few studies have been carried out, despite its importance as a "laboratory basin" and climate change indicator.
The aim of this study is to present an observational platform in the central Mediterranean, the Lampedusa Oceanographic Observatory, which has started to provide a large dataset of parameters relevant to the investigation of the carbon exchange between atmosphere and ocean. The available dataset will be used to verify and constrain satellite estimates of the carbon dioxide partial pressure (pCO2) and of CO2 fluxes in the central Mediterranean.
The Lampedusa Oceanographic Observatory (OO) is located in the open sea in the southern sector of the central Mediterranean. The buoy, set up in 2015, is moored at 35.49°N, 12.47°E, about 3.3 mi southwest of the island of Lampedusa, in an oligotrophic area of the Mediterranean.
The closest continental region is Africa, with the Tunisian coast more than 100 km west of the buoy. The buoy is equipped with numerous above-water and submerged sensors for characterization of the radiation regime, meteorology, and oceanic properties.
Starting from October 2021, the following measurements are operational at 5 m depth: CO2 partial pressure; pH; chlorophyll, CDOM and backscatter; temperature, salinity and dissolved oxygen; and downwelling photosynthetic radiation. These measurements complement additional observations of multi-band downwelling and upwelling radiation, and of temperature and salinity, at various depths down to 43 m. Further measurements (meteorological parameters; downwelling solar, infrared, and photosynthetic radiation) are carried out above water (see, e.g., di Sarra et al., 2019; Marullo et al., 2021).
The Oceanographic Observatory complements the Atmospheric Observatory (AO; 35.52°N, 12.63°E; Ciardini et al., 2016), set up on the island of Lampedusa in 1997 and dedicated to climate studies. The distance between the two observatories is about 15 km. A wide set of additional climate-related parameters (including aerosol optical depth and chemical composition, cloud properties, deposition, meteorology, greenhouse gases, and radiation) is monitored at the AO.
The first step of the analysis is dedicated to the comparison between in situ and available satellite datasets (mainly temperature, salinity, photosynthetic radiation, and chlorophyll). Data from other Mediterranean sites belonging to the Integrated Carbon Observation System (ICOS) research infrastructure will also be used, with the aim of characterizing the spatial and temporal variation of the in situ-satellite correlations.
Active tectonics in the Tell Atlas of Algeria, with thrust earthquakes of Mw ≥ 6, produces significant surface deformation resulting from the oblique convergence of the African and Eurasian plates. The aim of our study is to highlight the deformation visible at the surface, in terms of coastal uplift and shortening, using the InSAR technique and time-series analysis.
The active Zemmouri zone extends over more than 70 km and was affected by the earthquake of 21 May 2003 (Mw 6.8), whose mainshock epicenter was located at latitude 36.83°N and longitude 3.65°E at 10 km depth; coseismic uplift reached 0.75 m in some places (Bagdi et al., 2021). The coseismic uplift of the 2003 Zemmouri earthquake has been well described in previous studies, which measured an average of 0.55 m along the coastal zone from combined geodetic (Meghraoui et al., 2004) and InSAR measurements (Belabbes et al., 2009), while the postseismic deformation following the earthquake was documented from 2003 to 2010 with Envisat images (Çetin et al., 2015), reaching around 3.5 mm/yr of LOS displacement using the SBAS technique.
Our postseismic study focuses on measuring surface displacement with the PSInSAR technique using the Sentinel-1 (A/B) satellites from 2016 to 2021. Both horizontal and vertical (subsidence) displacements associated with the postseismic deformation are presented, showing surface velocities ranging from -5 to +5 mm/yr, which can be correlated with the postseismic tectonic activity affecting the Tell Atlas.
Aquatic plastic litter is a global problem with several dimensions. The ESA Eyes on Plastic activity therefore provides a service solution that combines multiple technical components into a joint mapping and monitoring solution for plastic in the aquatic environment. By flying high with satellites and diving deep with Remotely Operated underwater Vehicles (ROVs), plastic litter can be monitored at all angles and scales. Based on a thorough requirements analysis with key players in the field, the best methods for the satellite- and camera-based operational analytics were set up and their feasibility tested. Users consistently mention several key challenges, such as effective and continuous mapping and monitoring of plastic hotspots and the need for a globally applicable solution that provides detailed information. The focus is on rivers, as they play a major role in the input of marine litter. Quantitative measures are required that can be compared over time and across sensors.
The proposed service is a holistic mapping approach that includes different technologies, making it possible to respond to these challenges and to map plastic in different aquatic environments at varying levels of detail and frequency around the globe. It includes the creation of baseline assessments at discrete times, the identification of plastic accumulation hotspots, and the continuous monitoring of places of interest, such as river estuaries, using Earth observation methods.
We make use of satellite Earth observation, namely Sentinel-2 data, to monitor floating aquatic plastic litter, which is automatically classified according to its spectral characteristics. The supervised classifier Classification and Regression Trees (CART) was trained with freely available ground-truth data. Its performance was then assessed and the outcomes were compared with areas where debris is frequently present. This is the case for Guanabara Bay near Rio de Janeiro in Brazil, where eco-barriers are placed in river estuaries. Furthermore, the Marine Litter Project 2021 placed plastic and wooden targets into the water in the Aegean Sea that served as a reference. The classifier predicted plastic litter with a 73% probability and works particularly well in areas with low turbidity and in clear-water conditions. Some problems arise from whitewash, which can be falsely detected as plastic litter. Further studies in Guanabara Bay will provide more ground-truth data and thus improved classification results, among others through images taken from boats at the same time as the Sentinel-2 overpass.
For the camera-based analytics, AI has proven to work well. Different camera systems can be used depending on the application and local setting. In Indonesia, a fixed CCTV camera mounted on a pole continuously monitors one of the main tributaries to Jakarta Bay. In other cases, such as in Brazil or at local lakes in Bavaria, data from ordinary mobile phones or sport cameras are analysed. We use a state-of-the-art convolutional neural network and have created our own dataset of 1,000 images of floating debris taken at the Pilsensee, Bavaria. As the plastic litter problem is not limited to the water surface, the Eyes on Plastic approach also includes underwater camera data analytics. Similarly to our camera systems above water, we deploy an embedded optical object-detection system onboard an ROV.
This way, we can survey the water column to monitor plastic litter concentration at the decimetre scale. The low power consumption opens up the prospect of replacing the ROV with autonomous underwater vehicles (AUVs). The analytics results are available via an automatic online web application in a standardized reporting output to support local and international monitoring obligations. This allows users, third parties and even the broader public to access, visualize and analyse, in real time, all the data measured by the different techniques. Eyes on Plastic combines space assets, on-site sensors and platforms, as well as novel IT and algorithms, to make a difference in understanding and quantifying aquatic plastic litter.
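As a conceptual illustration of the CART approach used for the Sentinel-2 classification, the sketch below implements CART's core operation, an exhaustive Gini-impurity split search, on synthetic two-band "pixels". The band values and class separation are invented for the example and do not represent real Sentinel-2 reflectances or the trained operational classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "pixels": two spectral features (e.g. NIR-like and red-like);
# floating debris is given a distinctly higher first-band reflectance
water  = rng.normal([0.02, 0.03], 0.005, size=(200, 2))
debris = rng.normal([0.08, 0.05], 0.005, size=(200, 2))
X = np.vstack([water, debris])
y = np.array([0] * 200 + [1] * 200)   # 0 = water, 1 = debris

def gini(labels):
    """Gini impurity of a label array (0 for a pure node)."""
    if len(labels) == 0:
        return 0.0
    p = np.bincount(labels, minlength=2) / len(labels)
    return 1.0 - (p ** 2).sum()

def best_split(X, y):
    """CART-style exhaustive search for the (feature, threshold) pair that
    minimises the weighted Gini impurity of the two child nodes."""
    best = (None, None, np.inf)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            left, right = y[X[:, f] <= t], y[X[:, f] > t]
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if score < best[2]:
                best = (f, t, score)
    return best

feat, thr, score = best_split(X, y)
pred = (X[:, feat] > thr).astype(int)
print(f"split on band {feat} at {thr:.3f}; accuracy {(pred == y).mean():.2%}")
```

A full CART classifier recursively applies this split search to each child node; with well-separated classes, as here, a single split already classifies the synthetic pixels almost perfectly.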
Microplastic pollution is a widely acknowledged threat to ocean ecosystems. However, the global extent and dynamics of this problem are not well monitored or known. Net trawling methods are invaluable for in situ microplastic concentration data collection, but limitations of cost, spatial coverage, and time resolution leave a gap in global microplastic monitoring. Spaceborne imaging of microplastics could address the spatial and temporal sampling limitations, but reliable microplastic detection from space is problematic. A new approach to the detection and imaging of microplastics from space is presented here. Spaceborne radar measurements of ocean surface roughness are used to infer the reduction in responsiveness to wind-driven roughening caused by the presence of surfactant tracers of the microplastics. The physical relationship between the presence of surfactants and the suppression of ocean surface roughening caused by winds has been investigated via a series of controlled wave tank experiments. Varying concentrations of surfactants are introduced onto the water surface, near-surface winds are generated in a controlled manner with variable speeds, and the surface roughness is measured directly using precision ultrasonic surface height detectors. The changes in surface roughness statistics are then used to derive corresponding variations in the radar scattering cross section that would be detected by a spaceborne radar. The results are found to be consistent with the empirical relationship found from the satellite measurements.
Using the satellite observations of roughness suppression on a global scale averaged over a full year, the reduction in roughening is found to correlate strongly with the number density of microplastics near the surface as predicted by several well-regarded ocean circulation models. On a global scale over shorter (monthly) time scales, time-lapse images derived from the satellite radar observations reveal seasonal changes in the microplastic mass density within the major ocean basin gyres, which appear to be related to seasonal ocean circulation patterns. Other dynamic variations in the concentration are also evident and appear to be linked to monsoonal precipitation and ocean circulation patterns. On smaller spatial and temporal scales, weekly time-lapse images near the mouths of major rivers reveal episodic bursts of microplastic outflow from the river into the sea.
An overview and the current status of our work using spaceborne radar to detect and image ocean microplastic dynamics, and of our attendant wave tank experiments, will be presented.
Accumulations of garbage (e.g. macroplastics) floating on the water surface in coastal and inland waters are an acute problem in many parts of the world. However, many different natural phenomena can also produce surface accumulations, including cyanobacterial scum, pollen, foam, seagrass leaves, and fragments of plants and macroalgae (e.g. Sargassum). There are also accumulations of material that can be classified as garbage, e.g. timber, plastic or other floating material. Our ability to map and recognise floating material with remote sensing depends on the spatial and spectral resolution of the sensor used and on its revisit time. It was demonstrated nearly two decades ago with Hyperion imagery that 30 m spatial resolution is not sufficient for detecting surface accumulations (in that case cyanobacterial scum) if most of the pixel is not covered with the floating material. Large areas within cyanobacterial blooms can be detected with such medium-resolution sensors. In many cases, however, cyanobacterial scum forms narrow filaments shaped by currents and wind, and the spectral signature then resembles a subsurface bloom rather than floating material. The same happens with all floating material if the spatial resolution of the sensor used is not finer than the width of the filaments. Hyperspectral sensors with very high spatial resolution on aircraft and drones allow better detection and recognition of floating material. However, the area that can be covered with airborne or drone measurements is very small, the cost per unit of area is very high, and the revisit time, if any, does not permit real monitoring of surface accumulations. Therefore, Sentinel-2, with its 10 m spatial resolution and revisit time of 2-5 days, is essentially the only sensor that provides frequent and free data for monitoring different surface accumulations.
The extent and duration of blooms, especially potentially harmful blooms of cyanobacteria, are very important information for monitoring and managing coastal and inland environments. Chlorophyll-a (Chl-a) is usually used as a proxy of phytoplankton biomass. However, many of the standard Chl-a products, like the one provided by the Copernicus Marine Environment Monitoring Service, do not provide sufficient accuracy in optically complex waters (like the Baltic Sea) to allow such analysis. Moreover, Chl-a algorithms allow the mapping of biomass in the water column, but not when the biomass is floating on the water surface. However, it is important to know where the surface accumulations of cyanobacteria are as they may be up to several centimetres thick and contain high amount of biomass. Therefore, monitoring the presence/absence of material floating on the water surface is an important task in coastal water monitoring. Cyanobacterial blooms usually last from the end of June to September and may cover most of the Baltic Sea. However, there are also other materials that can create a “blanket” on the water surface or form narrow filamentous features. During certain periods water may be covered by pollen of different trees, e.g. scots pine (Pinus sylvestris), and it is critical to understand whether the material floating on the water surface is cyanobacteria or pollen or something else as they may occur in the same time frame. We aimed to test to what extent we can separate the pollen from cyanobacterial accumulations on the water surface by using Sentinel-2 atmospherically corrected (C2RCC, C2X, C2X-Complex, Polymer, IDA) and top of atmosphere data. Some band ratio algorithms were also tested. Unfortunately, atmospheric corrections often fail or give insignificant results in the case of strong surface accumulations. The reason is that the neural net type processors work only in the conditions they are trained for. 
This means that they work within the trained ranges of Chl-a, coloured dissolved organic matter and total suspended matter in water, but cannot cope with any material floating on the water surface. It is therefore not recommended to use atmospheric corrections that remove (or mask) the floating-material signal, which is mostly in the NIR part of the spectrum.
We have some in situ data from both pollen- and cyanobacteria-dominated waters and show that it is possible to separate harmful accumulations of cyanobacteria from harmless accumulations of pollen based on top-of-atmosphere spectral data using some band ratio algorithms. Further testing is needed to evaluate whether other types of surface accumulations (foam, seagrass, macroalgae, timber, plastic, etc.) can be recognised on the water surface using Sentinel-2 imagery. At present we do not have in situ data from other types of surface filaments, but we are planning further sampling campaigns and experiments to assess the potential of recognising them.
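The band-ratio separation described above can be sketched generically. The bands, threshold, and spectra below are hypothetical placeholders: the abstract does not disclose which bands or thresholds were actually used, so this only illustrates the shape of such an algorithm.

```python
# Generic band-ratio classifier sketch. The band choice (B8/B4), the
# threshold and the spectra are hypothetical placeholders, not the
# actual algorithm or measured values from the study.
def band_ratio(spectrum, band_a, band_b):
    """Ratio of two top-of-atmosphere band reflectances."""
    return spectrum[band_a] / spectrum[band_b]

def classify_accumulation(spectrum, threshold=1.5):
    """Label a surface-accumulation pixel with a hypothetical ratio rule."""
    r = band_ratio(spectrum, "B8", "B4")  # NIR / red, placeholder choice
    return "cyanobacteria" if r > threshold else "pollen"

# Illustrative synthetic spectra (not measured values)
cyano = {"B4": 0.04, "B8": 0.12}   # strong NIR peak from floating scum
pollen = {"B4": 0.06, "B8": 0.07}  # flatter visible-to-NIR shape
print(classify_accumulation(cyano))   # cyanobacteria
print(classify_accumulation(pollen))  # pollen
```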
Monitoring of Large Plastic Accumulation Near Dams Using Sentinel-1 Polarimetric SAR Data
Morgan Simpson1, Armando Marino1, Peter de Maagt2, Erio Gandini2, Peter Hunter1, Evangelos Spyrakos1, Andrew Tyler1
1The University of Stirling, United Kingdom; 2ESA ESTEC, The Netherlands
1. Introduction: Plastics in the river environment are of major concern due to their potential transport into the oceans, their persistence in aquatic environments and their impacts on human and marine health. Plastic concentrations in riparian environments have also been observed to be higher following major rain events, when plastic can be moved through surface runoff. Dams are known to trap sediments as well as pollutants such as metals and PCBs [1]. Recently, plastic islands accumulating by dams following heavy rainfall have been reported in Balkan countries. Both optical data and Synthetic Aperture Radar (SAR) have been utilized in the monitoring of chlorophyll-a, Total Suspended Matter (TSM), landslides and water volumes in reservoir contexts. This study presents results on the ability to detect and monitor these accumulated plastic islands using dual-polarimetric SAR.
2. Methodology: This study focuses on two river systems in Serbia and Bosnia where we have validation photographs for the presence and extent of plastic near two dams. The patches are mostly composed of plastic; however, we also find a sparse presence of wood and other floating materials. In this study we used dual-polarimetric SLC Sentinel-1 SAR data, provided by the European Space Agency (ESA) through the Copernicus Programme. Optical images from Sentinel-2 were also acquired; however, cloud cover was above 90% in all images near the date of plastic build-up, and therefore they could not be used. Inspecting Sentinel-1 images over the two dams for multiple dates, we observed a clear and significant backscatter difference near the dams before and after the date of plastic accumulation. To test the detectability of such patches, we initially performed a data analysis, visualizing histograms and extracting synthetic statistics for pixels belonging to the plastic patch and to clean water. Following this, we exploited a range of single-pol and dual-pol detectors. Specifically, we tested a) simple thresholds on VV and HV intensities, and change detection using b) single intensities, c) optimisation of the power ratio [2], d) optimisation of the power difference [3-4] and e) the Hotelling-Lawley trace [5]. We used Receiver Operating Characteristic (ROC) curves to assess the performance of each detector.
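The ROC-based assessment of the detectors can be sketched as follows. The detector scores here are synthetic stand-ins for per-pixel detector output (not real Sentinel-1 statistics), and the figure of merit is the detection probability at a fixed false-alarm rate, as reported in the study.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Synthetic detector scores standing in for per-pixel detector output:
# higher score = more likely plastic. Distributions are illustrative.
rng = np.random.default_rng(0)
water_scores = rng.normal(0.0, 1.0, 5000)    # clean-water pixels
plastic_scores = rng.normal(3.0, 1.0, 500)   # plastic-patch pixels

scores = np.concatenate([water_scores, plastic_scores])
labels = np.concatenate([np.zeros(5000), np.ones(500)])

fpr, tpr, thresholds = roc_curve(labels, scores)

def pd_at_pfa(fpr, tpr, pfa):
    """Probability of detection at (or just below) a given false-alarm rate."""
    return tpr[np.searchsorted(fpr, pfa, side="right") - 1]

print(round(pd_at_pfa(fpr, tpr, 0.01), 2))
```

Sweeping the decision threshold traces the whole curve, so detectors with different score scales can be compared fairly at the same false-alarm rate.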
Finally, a time-series analysis was conducted to analyse the occurrence rates of plastic accumulations at the locations mentioned above. From this, a heat map of the plastic accumulations was created to highlight the locations near the dam where floating debris most commonly accumulated.
3. Results: Histograms of pixel intensity values from dates with clean and with polluted water showed clear differences, with the expected separability. The ROC curves show that the optimisation of the power difference provides the best performance, capable of achieving an 85% detection rate at a 0.1% false alarm rate. The improvement brought by this polarimetric optimisation is very significant, considering that detectors using only VV achieve on average below 50% detection at 0.1% false alarms. Temporal maps show the areas where plastic commonly accumulated and the size of the patches.
4. Conclusions: This study shows the feasibility of detecting large accumulations of plastic near dams. Additionally, it is evident that the use of a single VV polarization is inadequate for this task and that PolSAR data are needed. It has to be kept in mind that the accumulations also contain smaller amounts of other floating materials such as wood. Heat maps of the areas where plastic accumulates the most are useful for planning future interventions. Further studies should be carried out to evaluate the quad-pol behaviour of these patches, to understand whether some estimation of density is also possible.
Acknowledgement: This work was supported by the Discovery Element of the European Space Agency’s Basic Activities (ESA Contract No. 4000132548/20/NL/MH/hm). Sentinel-1 data were provided courtesy of ESA.
References: [1] Kondolf, G.M., Gao, Y., Annandale, G.W., Morris, G.L., Jiang, E., Zhang, J. et al. (2014). Sustainable Sediment Management in Reservoirs and Regulated Rivers: Experiences from Five Countries. Earth’s Future, Vol 2 (5), pp. 256 – 280.
[2] Marino, A. & Hajnsek, I. (2014). A Change Detector Based on an Optimization with Polarimetric SAR Imagery. IEEE Transactions on Geoscience and Remote Sensing, 52(8), 4781-4798.
[3] Marino, A. & Alonso-Gonzalez, A. (2018). Optimisations for Different Change Models with Polarimetric SAR. EUSAR 2018, 12th European Conference on Synthetic Aperture Radar, Aachen, Germany.
[4] Ferrentino, E., Marino, A., Nunziata, F., & Migliaccio, M. (2019). A Dual Polarimetric Approach to Earthquake Damage Assessment. International Journal of Remote Sensing.
[5] Akbari, V., Anfinsen, S.N., Doulgeris, A.P., & Eltoft, T. (2015). A Change Detector for Polarimetric SAR Data Based on the Relaxed Wishart Distribution. 2015 IEEE International Geoscience and Remote Sensing Symposium.
Ocean monitoring is a vast scientific and commercial subject, linked to essential anthropological aspects. First of all, the ocean is a reservoir of biodiversity that remains fragile, and, despite years of studies and numerous missions, we still lack a global and precise understanding of it. It is also a reservoir of resources and a place of economic activities that must be protected from direct pollution (degassing, oil leaks) and indirect pollution (algal blooms). The detection of these pollutions could help to prevent socioeconomic and health issues.
In this context, needs for ocean monitoring from satellite imaging platforms are emerging. This work aims at proposing a processing chain to perform this monitoring task. Since oceans cover immense areas, searching for litter, debris or any object of interest can be seen as an anomaly detection task, where the aim is to detect outliers over a vast water area. The proposed approach is divided into several stages, starting from basic and fast processing and moving to more complex methods. The underlying idea is to gradually refine the anomaly detection. In an operational context, the first stages could then be performed on board a satellite, so that only suspicious images are sent to the ground.
The first step uses basic radiometric and textural indices to eliminate large areas of water that are easily identified as void of any anomalies. At this point of the processing chain, the slightest ambiguity must be preserved and dealt with in the downstream stages.
In the case of optical images, a second step eliminates cloudy areas using a segmentation deep neural network (DNN). Since it is not necessary to have VHR images to identify clouds, this step can be performed at a degraded spatial resolution, allowing a faster processing of the data.
The final step performs the actual anomaly detection on the remaining areas. Since anomalies are by nature scarce, it is very difficult to gather a database of annotated anomalies large enough to train a DNN. For this reason, the last stage relies on an unsupervised deep learning approach. An autoencoder is trained to compress and reconstruct images of water areas without anomalies. Then, to assess the presence of anomalies, an image is compressed and uncompressed, and the reconstruction error (the difference between the input and the output of the autoencoder) is computed. Since the autoencoder is trained to perform well only on normal water areas, a high error indicates the presence of an unusual pattern, potentially an anomaly. After this process, a major part of the data is eliminated, and a last step can be performed to sort the potential anomalies. This step could be done using a clustering method or a semi-supervised method such as supervised contrastive learning.
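The reconstruction-error principle can be sketched with a linear autoencoder (PCA) standing in for the deep autoencoder described above; the "water" patches are synthetic 8x8 tiles, an assumption made purely for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch of reconstruction-error anomaly scoring. PCA acts as a linear
# autoencoder: transform() compresses, inverse_transform() reconstructs.
# The data are synthetic flattened 8x8 "water" patches, not real imagery.
rng = np.random.default_rng(42)
water = 0.1 + 0.01 * rng.standard_normal((1000, 64))  # anomaly-free patches

model = PCA(n_components=8).fit(water)  # trained only on normal water

def reconstruction_error(patches):
    recon = model.inverse_transform(model.transform(patches))
    return np.mean((patches - recon) ** 2, axis=1)

# An anomalous patch: a bright object covering part of the tile.
anomaly = water[:1].copy()
anomaly[0, :16] = 0.9

normal_err = reconstruction_error(water).mean()
anomaly_err = reconstruction_error(anomaly)[0]
print(anomaly_err > 10 * normal_err)  # high error flags the unusual pattern
```

Because the compressor was never trained on bright objects, it cannot reconstruct them, so their reconstruction error stands out by orders of magnitude in this toy setting.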
The proposed method is tested on two different modalities: very high-resolution optical images from the Pleiades satellite at 70 cm ground resolution and radar images from Sentinel-1 at 10 m resolution. The radar data allow the detection of suspicious ships, floating objects and some effluent pollution such as oil. In the optical domain, metric or submetric resolutions are desirable in order to detect vessels (for example to fight against illegal fishing or degassing) or unidentified floating objects (trees, containers, drifting beacons, algae, icebergs). To assess the quality of the results, a subpart of the dataset was analyzed by expert photo-interpreters and the output of the approach is compared to their annotations.
Year by year, rivers transport a significant amount of macro litter and plastic debris towards the sea, which poses a socioeconomic and health risk. However, the sources, pathways, and sinks of macroplastic debris are not fully understood. To date, a commonly used method to gather such knowledge is the visual counting method, which lacks automation.
Within the project “Quantification of plastic load in the Rhine river”, a method is being developed and tested that can contribute to a more effective and automated, as well as more human-resource- and time-saving, approach. The main aim is the assessment of the quantities and the exploration of the pathways of macro litter occurring in the Rhine river. The developed sensor system consists of two synchronized sensors: a hyperspectral sensor operating from 350 to 1000 nm and a high-resolution RGB camera. With this sensor system, imagery data will be collected from bridges in the German part of the Rhine river under natural daylight conditions, with an adaptable image capture rate of up to one image per second. In principle, this system could be deployed on bridges or other elevated locations over water (e.g. lakes, rivers) across Europe and gather data in a similar way to visual counting today, but with a generalized and highly automated computer vision and deep learning approach.
The recorded imagery data form the basis for training a convolutional neural network (CNN), which aims at predicting several macro litter item categories, as well as potential false positives such as floating vegetation or foam. The categories are mainly selected from the “Joint list of litter categories for marine macrolitter monitoring” by the Publications Office of the European Union (2021), which is frequently used in macro plastic and litter monitoring across Europe. The deep-learning-based analysis is supported by in-situ data collection at the Rhine river, which will serve as validation data for the CNN. In the future, this approach will allow insights into temporal changes in the abundance of various categories of plastics and foster better understanding of the amount and composition of macro plastic litter floating on the surface of the Rhine river – or other waterways.
Monitoring areas closer to plastic marine litter sources, such as rivers and estuarine systems, is crucial for increasing our understanding of the transportation dynamics and has the potential to improve pollution mitigation strategies. Currently, scientific knowledge about the sources, amount and spatial variability of macro- and microplastic debris in aquatic ecosystems is still limited. In-situ litter point data is an important source of information; however, its collection is costly, labor-intensive and only feasible on a small scale. Our central concept is therefore to upscale in-situ data with earth observation (EO) and hydrodynamic models in the coastal wetlands and coastal waters under influence from the Po River, Italy.
The goal of the first project phase is to set up a data baseline. Multi-type in-situ data was therefore collected at various points along the pollution pathway. High-resolution monitoring via drone imagery taken along the shoreline is established for accumulation analyses. Water samples collected with manta trawls from the river, its estuaries and coastal areas are used to quantify plastic litter abundances. Imagery taken from different types of camera systems installed on bridges or other infrastructure is analyzed using deep learning approaches in order to automatically detect floating plastic in rivers for continuous long-term observation of river surfaces. This provides improved inputs to transport models.
Spectral investigation of water surface reflection characteristics by both spectroradiometers and satellites is necessary to scale single-point in-situ data to large-area detection of essential water quality variables. High spatial resolution Copernicus satellite data (Sentinel-2 and -3) provide an excellent option to cover both river and coastal systems under influence from the river plume. We therefore processed Sentinel-2 and -3 data to extract total suspended matter and sea surface temperature products in order to enable the detection of the river plume shape. The satellite data is further used to explore the detection of floating macroplastic, which can be identified in spectral measurements in the SWIR through the chemical composition of plastic polymers. This allows large-area estimation of plastic exposure along a coastline via proxy relationships to more easily detected water parameters.
In the continuation of the project, numerical models aided with in-situ and remote observations will be implemented. They present a powerful tool to study the dispersion pathways, identify potential sources and highlight areas potentially at risk of impacts due to floating marine litter.
While we will give an outlook on the model development phase, we would like to focus our presentation on the results of the in-situ survey and the analysis of the collected data. The drone imagery recorded along coastal beaches and within river branches is used to directly detect and quantify macroplastic at the survey locations and thereby identify areas prone to plastic accumulation. This will allow us to present up-to-date maps of macroplastic abundances. Using a manta trawl we were able to collect water samples from the Adriatic, the coastal areas and estuaries of the Po River as well as, for the first time, from within the river. The microplastic abundances currently being measured in the laboratory will not only enable us to give a current update on the state of the waterbody but also help to complete our understanding of the distribution dynamics. The in-situ survey showed a high spatial and temporal variability of plastic abundances in the Po River, which highlights the importance of a continuous monitoring system. The numbers of floating plastic pieces per hour obtained by visual counting ranged from around 100 to more than 600 across three river branches and the main river. Using different camera systems, we built a training database of more than 2,000 images that is heterogeneous in terms of resolution, illumination, level of disturbance, and the types of river plastic. Deep learning models such as Faster R-CNN and YOLO are currently being optimized to evaluate their detection capabilities in general and across the different types of imagery. The results will contribute to the advancement of automated methods for monitoring plastic pollution.
The integration of these types of in-situ data with multi-scale EO and hydrodynamic modelling serves as the development basis for a spatio-temporal monitoring system of plastic debris in aquatic ecosystems, allowing an end-to-end depiction of real-world debris transport pathways for the first time. Our goal is to contribute to the construction and advancement of such source-to-sink monitoring systems. These would be able to provide precise up-to-date maps of actual and future plastic debris in riverine and coastal areas, aid the identification of environmental, economic, human health and safety-related impacts of plastic litter, and support targeted efforts of both off- and onshore-based clean-up projects by focusing on smaller areas with higher plastic abundances.
Annually, an estimated 4.8 to 12.7 million tonnes of plastic debris end up in the sea. As it degrades only very slowly, the amount of plastic in the sea and on beaches worldwide is gradually increasing to ever more alarming levels. Plastic litter has a major negative impact on marine life and can lead to global economic losses. There is a direct impact on the fishery, aquaculture, agriculture, energy and shipping sectors through blockage or damage of infrastructure, such as drains, pipes, cages, gear and ships.
Given the seriousness of the problem, its extent is surprisingly poorly known and quantified. Apart from some very broad numbers and the knowledge of some extremely vast garbage patches identified in the open ocean, it is generally believed that there is ‘a lot of’ plastic in rivers and oceans, but nobody knows exactly where it is or how much there is.
Despite many recent efforts, current methods are not sufficient to provide a good overall view of the distribution of marine plastics. Much more extensive monitoring is needed, with systematic sampling throughout the year. Macroplastics (with sizes >5 cm, thus visible to the naked eye) are believed to be a main source of marine plastic pollution and of secondary microplastics. Furthermore, most of the volume is associated with these macroplastics, and they decay into microplastics, which are much harder to trace and almost impossible to remove. It is therefore crucial to detect and remove macroplastics before they are broken down into smaller and smaller pieces, and in this way mitigate the further generation of secondary microplastics for the decades to come. Current methods largely underestimate macroplastics because they often measure only a limited range of sizes. Larger pieces of plastic are much rarer and often remain undetected. To quantify the abundance and size distribution of larger debris, larger areas need to be monitored.
Unmanned aerial vehicles are widely adopted as tools for surveying water surfaces and coastal regions, and image analysis is being used to estimate debris concentrations from the resulting imagery. At a larger scale, detection based on satellite images would be the most efficient way of covering larger areas. However, current satellite missions are designed either for low-resolution ocean colour applications or for land applications, so their capabilities are less than ideal for the purpose of marine litter detection.
Detecting and monitoring marine litter in a systematic way is a challenging ambition as vast areas need to be covered with high spatial resolution. Previous research has indicated that for discriminating marine plastics from other surface features, a set of spectral bands combining the visual, NIR and SWIR spectral range is very useful.
Existing and upcoming hyperspectral missions capture all the necessary spectral information, but at much lower spatial resolutions. Most high-resolution satellite missions offer spectral bands that are not optimal for marine plastic monitoring. With the notable exception of WorldView-3, they do not offer any high-resolution SWIR bands at all. Furthermore, the swath of high-resolution missions is typically very limited (~13 to 20 km at nadir), so they cannot frequently cover very large areas (e.g. 1000 km x 1000 km).
We have started a short-term ESA-BELSPO PRODEX study in which we assess the feasibility of detecting marine macroplastics from space. In the study, we first aim to understand in detail the needs of the main stakeholders acting on marine plastics. From this, we will define minimal requirements on remote sensing data for performing useful macroplastics detection. Next, we will propose a possible satellite mission concept in line with the requirements and match it with available technology. We foresee proposing a mission concept that combines an innovative imaging payload design with a smart acquisition scheme. We will show the results of the feasibility study, including the stakeholder analysis and a first conceptual mission design.
Remote sensing has the potential to better quantify and identify the sinks and sources of plastic pollution, thereby working towards solutions that provide better understanding and enable better policies and novel ways to tackle the problem. However, the use of Earth Observation for plastic detection still presents a series of challenges. Arguably, the main challenges for plastic detection can be summarised in two issues: firstly, the constraints of satellite pixel sizes for detecting macroplastics, and secondly, how to exploit the spectra to differentiate plastic polymers from algae or other debris. Pixel resolution, spectral bands and signal-to-noise ratio play a crucial part in detecting larger plastics. Only large aggregations of plastics can be detected using satellite technologies; alternatively, lower-altitude methodologies can be used by fitting sensors on aircraft or Remotely Piloted Aircraft Systems (also known as drones) to quantify plastic debris.
This work will present results from the ESA SIMPLER project (SensIng Marine Plastic Litter using Earth observation) and the ESA OSIP HyperDrone project, which aim to further our capabilities to detect plastics in riverine and shoreline environments, respectively. Both projects undertook field and controlled experimental campaigns and created spectral libraries that will be introduced in the talk.
A field campaign was undertaken on the shoreline at Oban Airport (UK), acquiring reflectance observations from different dry plastic targets over its mixed sandy and rocky beach using both in-situ instruments (SVC) and the co-aligned hyperspectral Headwall imager covering the VNIR and SWIR regions (400 – 2500 nm). The reflectance from 15 targets of different composition, including polystyrene, polypropylene, nylon, PVC, HDPE..., was collected at altitudes of 30, 60, 90 and 120 metres using the Headwall sensor mounted on a drone platform (DJI M600). We assess how spectra in the SWIR can be exploited for plastic detection along the shoreline via algorithms based on different spectral indices, aiming to establish a threshold for subpixel detection at different altitudes.
In addition, a series of controlled laboratory experiments using the SVC hyperspectral spectrometer were undertaken to collect data from those plastic types. The reflectance of each plastic target was collected with the plastics floating on water as well as partially submerged.
Using the measurements collected and the 6S radiative transfer code, the modelled plastic remote sensing reflectance at satellite altitudes will be presented. Based on the spectral features, the algorithms for plastic detection and the modelled results, precise spaceborne requirements will be assessed for satellite plastic missions. These will include characteristics of the spectral bands and spatial resolution to determine the minimum size of plastics for subpixel detection methods as well as estimates of the corresponding signal-to-noise ratio.
The presence of plastic litter in the coastal zone has been recognized as a significant problem. It can dramatically affect flora and fauna and lead to severe economic impacts on coastal communities, tourism and fishing industries. The traditional reporting protocol is organized through individual transects on the beach, recording the presence of litter. In the new era of drone usage, a new integrated Coastal Marine Litter Observatory (CMLO) is proposed. The CMLO automatically produces marine litter accumulation maps of the coastal area, using drone imagery and deep learning algorithms. The aerial images can be collected through a dedicated protocol for acquiring drone imagery from non-experienced citizens using commercial drones. Once the datasets are collected, the user can upload them to a web platform where all the preprocessing and analysis occurs. As a first step, the aerial images are automatically checked for their quality, georeferencing and usefulness. Once the dataset is checked, a deep learning algorithm runs to detect the marine litter. Litter items are classified into seven categories according to the OSPAR identification and categorisation of litter on beaches, and their exact position on the beach is recorded. The last step is the creation of marine litter density maps. The resulting density maps are produced by counting the number of individual litter items in areas of one hundred square metres on the beach. The entire process takes some minutes to run once the aerial data is uploaded online. The density maps are automatically reported to a spatial data infrastructure, ideal for time series analysis. The system depicts all the automatically extracted ML as geospatial information related to i) concentrations (densities) of ML on various beaches, ii) spatiotemporal visualizations of ML accumulation for every beach uploaded by the user, and iii) statistical results of marine litter concentrations for every monitored area.
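The density-map step can be sketched as a 2-D histogram of detected item positions over 10 m x 10 m (100 m²) grid cells. The coordinates below are illustrative local metric coordinates, not the georeferenced output of the real system.

```python
import numpy as np

# Sketch of the density-map step: counting detected litter items in
# 10 m x 10 m (100 m^2) grid cells over a 100 m x 30 m stretch of beach.
# Item positions are randomly generated for illustration.
rng = np.random.default_rng(1)
x = rng.uniform(0, 100, 250)  # easting of each detected item (m)
y = rng.uniform(0, 30, 250)   # northing of each detected item (m)

cell = 10.0  # 10 m cells -> 100 m^2 per cell
density, xedges, yedges = np.histogram2d(
    x, y, bins=[np.arange(0, 100 + cell, cell), np.arange(0, 30 + cell, cell)]
)
# density[i, j] = number of items per 100 m^2 in cell (i, j)
print(density.shape)       # (10, 3)
print(int(density.sum()))  # 250: every item falls in exactly one cell
```

The resulting grid maps directly onto the reported density maps, and stacking such grids over time gives the spatiotemporal accumulation visualizations mentioned above.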
Classification accuracy, calculated against manual identification, is 85%. In contrast with most recent studies, we used a significantly larger training and validation dataset to train the deep learning algorithms, and the generalization ability of the deep learning models was evaluated on a completely new beach environment. Thus, CMLO deep learning detection and classification can be geographically transferred to new and unknown beaches. The Coastal Marine Litter Observatory presents several benefits over traditional reporting methods, i.e. improved measurement of the policies against plastic pollution, validation of marine litter transportation models, monitoring of SDG Indicator 14.1.1 and the EU MSFD Indicator D10C1 and, most importantly, guiding cleaning efforts towards areas with a significant amount of litter. The proposed marine litter mapping approach can be used towards the need for marine litter data standardization and harmonization. The CMLO platform allows interoperability and provides a solution for automatic reporting and time series analysis.
Marine plastic litter has become a global problem, affecting the health of marine ecosystems as well as damaging the economy and activities of coastal communities. Due to an increase in the concentrations of plastics in the marine environment and uncertainty about plastic sources, pathways and sinks, there is a need to develop cost-effective, reliable, repeatable, and large-scale monitoring of plastic litter in coastal waters. Past studies have used machine learning algorithms on high-resolution remote sensing data from aircraft and unmanned aerial vehicles (UAVs) to successfully detect single plastic items. These methods, however, do not offer the large-scale monitoring needed for national or international monitoring programs. Recently, Sentinel-2 optical satellite data has become a primary focus in floating litter research thanks to its global coastal coverage, 5-day revisit time and spectral wavelengths suitable for floating litter detection. Although Sentinel-2 offers a solution for cost-effective and large-scale monitoring of floating litter, there are research gaps in terms of identifying naturally occurring floating litter events and developing generalised models that account for variation in the spectral signature of compositionally varied litter across time and space. This study assesses the feasibility of two machine learning approaches for creating a model to detect floating plastic litter. More specifically, we first employ validated global litter events, which originate either from in-situ measurements or field expert validation, to create training and testing datasets. Secondly, we apply Random Forest classification using eCognition and the Python Scikit-learn machine learning library to train the model for both pixel-based and object-based image analysis (OBIA) to detect floating plastic litter. 
In addition, the same procedure is applied on WorldView-3 (WV3) high resolution satellite data and conclusions are drawn about the role spatial resolution plays in the accuracy of floating litter detection. Finally, we compare the Random Forest results with the Deep Learning pixel and OBIA approaches using Python Keras API and Tensorflow. The preliminary results show that Random Forest is a successful way of predicting floating plastic if 1) the plastic accumulations are large enough to create objects for OBIA, 2) floating plastic is dense and compact, 3) the quality of Sentinel-2 data is sufficient. The initial findings from Deep Learning show very high model accuracy, however, further testing is required. The study relates the findings to a possible application of machine learning in near-real-time mapping of floating plastic litter. In conclusion, using satellites to detect plastic patches greater than 10 m at a global level has been agreed as an indicator for marine plastic litter under Sustainable Development Goal (SDG) target 14.1 under the United Nations Environment Programme. As such, this work directly supports SDG 14.1, and contributes towards development of harmonised methods to identify and reduce plastic pollution.
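The pixel-based Random Forest step with Scikit-learn can be sketched as follows. The six "bands" are synthetic stand-ins for Sentinel-2 reflectances; real training data would come from the validated litter events described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Minimal sketch of pixel-based Random Forest classification on spectral
# features. The six-band reflectances are synthetic stand-ins, with an
# (assumed) elevated NIR/SWIR response for floating plastic.
rng = np.random.default_rng(7)
n = 400
water = rng.normal([0.06, 0.05, 0.04, 0.03, 0.02, 0.01], 0.01, (n, 6))
plastic = rng.normal([0.10, 0.11, 0.12, 0.18, 0.15, 0.12], 0.02, (n, 6))

X = np.vstack([water, plastic])
y = np.array([0] * n + [1] * n)  # 0 = water, 1 = floating plastic

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify two unseen pixels drawn from each distribution
test_water = rng.normal([0.06, 0.05, 0.04, 0.03, 0.02, 0.01], 0.01, (1, 6))
test_plastic = rng.normal([0.10, 0.11, 0.12, 0.18, 0.15, 0.12], 0.02, (1, 6))
print(clf.predict(test_water)[0], clf.predict(test_plastic)[0])
```

For the OBIA variant, the same classifier would be fed per-segment statistics (mean, variance of band values) instead of raw per-pixel reflectances.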
Rivers are the main pathways of land-based plastic waste into the world’s oceans. Accurate estimates of river plastic emissions are therefore crucial to better understand the sources, mass balance, and fate of marine plastic pollution (Meijer et al., 2021). In contrast to what is often assumed, most plastics that leak into the terrestrial and riverine environment do not end up in the ocean. A growing amount of observational evidence suggests that plastics in fact accumulate within river systems, and can be retained for years, decades, and potentially even longer. The majority of macroplastics (>0.5 cm) in freshwater systems are hypothesized to accumulate on riverbanks and floodplains, in riparian and floating vegetation, within sediments, or in estuaries. Rivers can therefore be considered plastic reservoirs (van Emmerik et al., in review). Due to the long retention time scales, plastics may degrade and fragment into micro- and nanoplastics, which are in turn more likely to be flushed out of the system. Extreme events, such as coastal or fluvial floods, may also lead to emptying of the plastic reservoir. To better understand and quantify plastic accumulation, (re)mobilization, and fragmentation, large-scale and long-term observations are crucial. Multispectral sensors may provide a new avenue for accurate upscaling of plastic observations over time and space. Recent research shows that macroplastics have a clear spectral reflectance signal, which offers new opportunities for detection and monitoring of plastics in riverine and marine environments using close-range and spaceborne remote sensing techniques (Biermann et al., 2020; Tasseron et al., 2021). In this presentation, we discuss how plastics can be discriminated from water, organic material, and other debris based on their unique reflectance spectra and derived indices. We give examples of how these findings can be used to detect and quantify plastic pollution across spatial scales. 
These examples range from experiments under controlled conditions and field applications in river systems to riverine plastic monitoring using multispectral satellite imagery (Schreyers et al., 2021). Here, we specifically focus on applications in river basins in the Netherlands and Vietnam. Finally, we provide an outlook for future work, including using available satellite imagery for historical long-term assessments, and suggestions for future multispectral remote sensing systems for plastic monitoring. With our presentation, we aim to emphasize the importance of harmonized large-scale and long-term plastic monitoring tools to better understand and quantify plastic accumulation within rivers, and emissions into the ocean.
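The derived indices mentioned above include the Floating Debris Index (FDI) of Biermann et al. (2020), which compares NIR reflectance against a baseline interpolated between the red-edge 2 and SWIR1 bands. A minimal sketch follows; the wavelengths are nominal Sentinel-2 band centres and the reflectance values are purely illustrative, not real measurements:

```python
# Floating Debris Index (FDI) after Biermann et al. (2020): compare the
# NIR reflectance with a baseline interpolated between the red-edge 2
# and SWIR1 bands. Wavelengths are nominal Sentinel-2 band centres (nm).
L_RED, L_NIR, L_SWIR1 = 665.0, 833.0, 1610.0

def fdi(r_red_edge2, r_nir, r_swir1):
    """FDI = R_NIR - R'_NIR, with R'_NIR a linear baseline between the
    red-edge 2 (B6) and SWIR1 (B11) reflectances."""
    r_nir_prime = r_red_edge2 + (r_swir1 - r_red_edge2) * \
        (L_NIR - L_RED) / (L_SWIR1 - L_RED) * 10.0
    return r_nir - r_nir_prime

# Illustrative pixels (not real measurements): floating debris stands
# out with a higher FDI than the surrounding water.
water = fdi(0.020, 0.015, 0.010)
debris = fdi(0.030, 0.120, 0.020)
```

In practice the index is computed per pixel over atmospherically corrected scenes, and candidate debris pixels are then passed to a classifier.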
References
Biermann, L., Clewley, D., Martinez-Vicente, V., & Topouzelis, K. (2020). Finding plastic patches in coastal waters using optical satellite data. Scientific reports, 10(1), 1-10.
Meijer, L. J., van Emmerik, T., van der Ent, R., Schmidt, C., & Lebreton, L. (2021). More than 1000 rivers account for 80% of global riverine plastic emissions into the ocean. Science Advances, 7(18), eaaz5803.
Schreyers, L., van Emmerik, T., Nguyen, T. L., Phung, N.-A., Kieu-Le, T.-C., Castrop, E., Bui, T.-K. L., Strady, E., Kosten, S., Biermann, L., van den Berg, S. J., & van der Ploeg, M. (2021). A field guide for monitoring riverine macroplastic entrapment in water hyacinths. Frontiers in Environmental Science, 9, 716516. doi: 10.3389/fenvs.2021.716516
Satellite remote sensing has great potential to become a breakthrough in mapping marine litter. One limiting factor for its full development is access to reliable, extensive, and consistent ground-truth observations of debris. Today, some of the best performing technologies for image analysis were built using open labelled databases. Ocean Scan integrates an inclusive labelled global ocean plastic database, a web platform and a mobile application. With observations significantly more extensive and geographically more diverse than any single research campaign, it enables the scientific community to work globally and tackle the problem collaboratively.
During the past two decades, the amount of in-situ data and information about marine litter has dramatically increased, especially in the last five years. News about the different garbage patches and growing citizen awareness have also led to the publication of online surveying campaigns and mobile apps to collect data. An increasing number of projects and initiatives addressing the issue of marine litter worldwide include local in-situ campaigns for litter collection and for the identification of litter signatures in aquatic environments through the pairing of remote sensing technologies and Artificial Intelligence (AI) techniques. However, despite the growing effort in this direction, remote sensing research is experiencing a scarcity of relevant validation data. Although in-situ information exists, the datasets available come with several limitations that ultimately reduce their usefulness in the remote sensing application field.
First, information is distributed sparsely across different databases, often not updated, inaccessible, or focused only on a specific area. Second, in-situ data collection of plastics is usually tackled with a project-specific approach, and often by non-scientific organizations or teams not working with remote sensing technologies. Consequently, the methodologies used to collect the data are not standardized. The design of the sampling methodologies rarely considers the requirements for obtaining a reliable and robust ground truth that would allow proper AI and Earth Observation research. For example, existing in-situ databases of marine litter often lack accurate geolocation and temporal (date and time) stamps, essential metadata for remote sensing studies.
The increasing capabilities of remote sensing technologies in marine litter research can potentially address the identification of debris, the type of litter, pollution sources, and distribution patterns, generating a growing demand for curated ground data to develop and validate the different approaches used. It is to address this urgent need that Ocean Scan was born. The Ocean Scan database brings together global in-situ observations and their matching Earth Observation data in one place. It provides a standardized approach with tools to collect and classify in-situ data. It aims to unlock and promote the potential of Earth Observation research in marine litter studies, implementing complete data collection methodologies and fostering collaboration between organizations and researchers across the world, relying on a clear code of conduct.
Ocean Scan offers additional features specifically designed to benefit remote sensing researchers. It provides access to a catalogue via a web platform and API. In-situ observations of marine litter are automatically linked with matching satellite images from Sentinel-1, 2 and 3, based on their location and time stamp. Users have access to selectable baseline options in time, space, sensor type and others, and can visualize all existing observations and campaigns on a global map. It is configured to streamline data ingestion from other systems through the dedicated API or directly ingest observational samples during field campaigns through a straightforward and intuitive mobile application.
Ocean Scan users retain full ownership of their data. A Zenodo DOI is provisioned to every dataset ingested, enabling and supporting credit recognition and data provenance and offering early accreditation for work before the long process of paper approval.
Ocean Scan aims to facilitate and promote global cooperation across marine litter research, offering a powerful tool not only for research but also for organizations working on marine litter and debris. From a downstream application point of view, Ocean Scan: (i) provides the grounds for extensive studies of remote sensing with AI for marine litter detection and tracking around sources, sinks and pathways; (ii) provides a hands-on and very practical platform to boost EO studies for marine litter detection, greatly facilitating collaboration between EO scientists and the biologists and oceanographers who monitor and study plastic pollution occurrence in situ; (iii) offers a unified hub and a harmonized data and metadata format, fulfilling the requirements for use in AI modelling in terms of standardization, structure, and size, streamlining data collection and data standardization processes; (iv) offers tools to facilitate contributions to a global database by migrating existing data from past campaigns and by easing data collection for future campaigns; (v) supports the launch of targeted data rescuing campaigns at sea; and (vi) by promoting and boosting remote sensing studies, accelerates understanding of the problem, also supporting the design of tailored solutions on the ground and the design of future remote sensing missions.
Floating aquatic vegetation plays an important role in accumulating and transporting macroplastic debris from rivers into the oceans. Up to 78% of floating macroplastic debris was found entrapped in hyacinth patches in the Saigon River, Vietnam (Schreyers et al., 2021). Water hyacinths are invasive, fast-growing and free-floating plants that typically form patches several meters in width and length. This makes them easily detectable in freely available imagery, such as that from European Space Agency (ESA) satellites. In rivers, hyacinth propagation is a highly dynamic phenomenon, mainly governed by hydrometeorological factors and nutrient availability. Recent studies have shown that hyacinth seasonal and annual coverage can vary severalfold (Janssens et al., 2021, in review), highlighting the need for long-term monitoring. Frequent, long-term and large-scale observations are best achieved using Sentinel-1 all-weather technology. In this study, we characterized the seasonality and long-term trends of the hyacinth invasion in the Saigon River, Vietnam, using Sentinel-1 data over seven years (2015-2021). This allowed us to estimate yearly and monthly variability in hyacinth coverage over the entire river system.
The 10 m spatial resolution of the Sentinel-1 sensor, however, is too coarse to detect macroplastic aggregated within hyacinth patches. We therefore coupled our large-scale mapping of the hyacinth invasion with close-range remote sensing of macroplastic-hyacinth aggregations. Unmanned Aerial Vehicle (UAV) images were collected over a one-year period (2021) at the Saigon River. We quantified the daily, weekly and monthly variability of hyacinth coverage and monitored macroplastic concentrations inside and outside hyacinth patches. This allowed for comparison with the Sentinel-1 monthly and yearly hyacinth estimates. This multiscale approach allows us to determine the temporal scales at which hyacinth propagation in tropical rivers is best monitored. In addition, this research is preparatory for estimating annual concentrations and fluxes of macroplastic in rivers using hyacinths as a proxy. These findings can be used to inform hyacinth control and macroplastic debris clean-up and mitigation strategies.
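As an illustration of the mapping step, hyacinth patches return markedly more backscatter than smooth open water in Sentinel-1 imagery, so a first-order coverage estimate can be obtained by thresholding calibrated backscatter inside a river mask. A simplified numpy sketch; the threshold and 10 m x 10 m pixel size are illustrative assumptions, not the values used in the study:

```python
import numpy as np

def hyacinth_area_m2(sigma0_db, river_mask, thresh_db=-15.0,
                     pixel_area_m2=100.0):
    """First-order hyacinth coverage from a calibrated Sentinel-1
    backscatter scene (dB): smooth open water returns little energy,
    while floating plant patches backscatter more strongly, so pixels
    inside the river mask above a threshold are counted as hyacinth.
    Threshold and pixel size are illustrative, not calibrated values."""
    patches = (sigma0_db > thresh_db) & river_mask
    return float(patches.sum()) * pixel_area_m2
```

Repeating this per acquisition over the 2015-2021 archive yields the monthly and yearly coverage time series described above.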
References
Schreyers, L., van Emmerik, T., Nguyen, T. L., Castrop, E., Phung, N.-A., Kieu-Le, T.-C., Strady, E., Biermann, L., & van der Ploeg, M. (2021). Plastic plants: Water hyacinths as driver of plastic transport in tropical rivers. Frontiers in Environmental Science. doi: 10.3389/fenvs.2021.686334
Janssens, N., Schreyers, L., Biermann, L., van der Ploeg, M., & van Emmerik, T. Rivers running green: Water hyacinth invasion monitored from space. [In review]
Plastic marine litter is becoming an increasing threat to our planet, with considerable impact on the oceans, which also receive mismanaged plastic waste from the land and rivers. In oceanic waters, plastic marine litter often accumulates in remote sub-tropical gyres, such as the Great Pacific Garbage Patch. In recent years, several studies have investigated and described the diversity and complexity of the plastic marine litter issue: among the investigated aspects, it has been highlighted that plastic marine litter is mainly distributed in the upper 5 m of the water column, depending on the specific properties of the object. This implies the need to also develop techniques able to acquire data in the water column, not only for microplastics (<5 mm) but also for the less abundant macroplastics (>5 mm) below the water surface. In this respect, the backscatter LIDAR technique can offer the advantage of good penetration into the water column and the possibility of detecting objects that are not floating on the water surface and, in principle, of defining their shape and volume. On the other hand, the working principle of the technique does not allow discrimination of plastic marine litter from other types of debris.
Despite several papers proposing the backscatter LIDAR technique as a tool with potential for the detection of marine litter, very few studies have addressed its feasibility, and only sporadic data acquired in real marine scenarios are available in the literature. As a consequence, the potential of the backscatter LIDAR technique for addressing the marine litter issue in real marine scenarios remains largely unexplored.
The present paper aims to contribute towards an understanding of the actual capabilities of the technique for marine litter remote sensing. The paper describes an airborne campaign carried out over the Great Pacific Garbage Patch and the data acquired with a circular-scanning backscatter LIDAR system along two transects, each 600 km long. The objective of the work is to present the results obtained by processing this extensive LIDAR dataset with respect to marine litter detection in oceanic waters, to discuss the main advantages and drawbacks, and to provide a way forward for future experiments.
This study was funded by the Discovery Element of ESA's Basic Activities, contract no. 4000132184/20/NL/GLC.
The Marlisat project was funded through a Remote Sensing of Plastic Marine Litter competition on the European Space Agency (ESA) Open Space Innovation Platform. The aim is for satellite Earth Observation (EO) to detect the source locations of marine plastics; trajectory modelling, supplemented by information from purpose-built floats (which act as proxies for large accumulations of plastic), is then used to predict where the plastic will end up. If the plastic returns to the coast as marine litter, the EO data can be further used to confirm the presence of accumulations.
This presentation focuses on the use of satellite EO to detect the sources and accumulations around the coastline of Indonesia. The baseline detection is performed using a combination of Copernicus Sentinel-1 and -2, utilising an approach developed by Page et al. (2020, https://www.mdpi.com/2072-4292/12/17/2824) for on-land detection of plastic and tyre waste sites. The Machine Learning (ML) approach has been extended to utilise a Neural Network (NNet) instead of or in addition to the original Random Forest approach. A training dataset has been generated from sites around the planet, including legal and illegal waste sites alongside known sites of marine plastic accumulation. This dataset is being continually grown as new sites are identified through web articles or personal communication.
The automated processing code runs the chosen trained ML model over a specified location and time range, creating a map of detected plastic locations. Before applying the ML model, pre-processing is optionally used to reduce false detections from known, often site-specific, sources of error. For example, Indonesia has high cloud coverage, with artefacts left behind even after conservative cloud masking, while detecting waste near the greenhouses in southern Spain is affected by radar shadow. Ongoing work focuses on achieving consistent temporal accuracy and on how detection certainty may be quantified, with the first version of the dataset generated in December 2021.
In addition to using Copernicus data, higher spatial resolution Planet and ICEYE data have been identified and will be used both to confirm the findings and to test fused inputs, where a higher spatial resolution ML input is achieved by sharpening the Sentinel products.
Abstract
Plastic pollution is one of the largest anthropogenic threats to the marine environment of this century, with plastics representing over 80% of human-made debris present in the oceans. Approximately 12 million tonnes of plastic waste enter our oceans annually, posing a significant threat to marine ecosystems. Although plastics enter the marine environment through riverine and coastal sources or direct disposal, it is widely acknowledged that rivers play a crucial role in the transportation of ocean plastic pollution, acting as the arteries that carry waste from land to ocean. The global spread of plastic pollution poses an issue for policymakers, as it is not constrained by national boundaries; instead, it is transported by water and air currents and congregates at river mouths and coastal cities.
At an international scale, motivation for addressing the issue of plastic pollution is mounting, and there is a plethora of agreements relating to maritime sources of plastic waste. However, at present there is no overarching legally binding agreement addressing the land-based sources of marine litter, particularly one with measurable reduction targets to limit future plastic emissions. One limitation on the implementation of such a policy is the absence of a consistent global monitoring capability to ensure compliance with, and monitor the effectiveness of, current regulations, as well as to supply critical information regarding the status of marine litter to support the creation of new strategies. Furthermore, at local and national scales, there has been much development and mobilisation over the last 5-10 years by both for-profit and non-governmental organizations (NGOs) focused on the clean-up of plastic waste. These work in a range of geographical regions and concentrate on the collection, removal, and management of marine and riverine litter. The ability to prevent and mitigate plastic pollution locally and nationally varies by nation and region and is heavily dependent on resource availability for waste management and behaviour change.
To date, few studies have focused on a reliable solution for identifying locally specific dense clusters of plastic waste to target operations. Consequently, a consistent monitoring service could enable these organisations to direct their cleaning efforts towards the areas with the greatest marine litter densities, enabling them to collect substantial volumes of marine litter while reducing running costs, improving efficiency and encouraging positive behaviour change. A robust, globally applicable monitoring system would therefore assist plastic policy at regional, national, and international levels, accelerating the pace and scale of the response to plastic emissions.
Over the last year, CGG Satellite Mapping, supported by the European Space Agency’s Space Solutions, and in collaboration with Mott Macdonald and Brunel University London, conducted a 12-month feasibility study to identify and monitor floating macro to mega marine litter in fluvial and coastal environments using Earth Observation (EO) data. Sustained observation via remote sensing offers distinct advantages for determining the marine plastic debris mass balance due to its extensive area coverage and frequent observation. The ability to detect large aggregations of floating plastics via EO data will support a better understanding of the sources, pathways, and trends of litter in the marine environment, before it becomes entangled, ingested, fragmented, or degraded. The study focused on identifying “hotspot” locations of large aggregations of floating marine litter, monitoring the source location and frequency of accumulations, and analysing the size and distribution of the material. These parameters provide input into local drift models to improve knowledge of the spatio-temporal distribution of floating debris.
The study evaluated the extent to which current and planned remote sensing technology matches the spatial, spectral, and temporal scales required for marine plastic debris observations in river and estuary environments. Three case study locations in Europe, SE Asia and the Caribbean were examined using a range of EO data and processing techniques. The expectation for marine litter (density, location, composition) in each of these settings is different, and therefore the resolution, monitoring frequency, spectral range, and platform for data acquisition need to be specifically targeted for each setting. The suitability of freely available, open-access EO data over each site was assessed, as well as high resolution commercial data as and when required to alleviate problems associated with cloud cover and weather conditions. In addition, in-situ ground truth (e.g. samples or photographs) was used when available to validate the EO data, with the support of local NGOs and waste management organisations. The study also analysed environmental data, such as precipitation and wind information, to support the understanding of the movement of marine litter within these environments and the transboundary migration of plastics. Synergy with other technologies, such as higher resolution drones or HAPS, can be helpful to initially locate and identify small marine litter accumulations that can subsequently be monitored using EO systems.
With close engagement from a broad group of end users, CGG plans to develop a marine litter monitoring system to support local waste management programs, increase awareness and provide feedback on the long-term effects of environmental waste management initiatives in river and estuary environments. The satellite-derived system has the potential to complement and work in tandem with international policy efforts to combat marine plastic pollution and provide a transboundary solution to a global problem. It is hoped the system will assist in monitoring the specific, measurable, and time-bound targets set by the international community to reduce plastic emissions into the marine environment.
Most progress in remote sensing of marine plastic litter has been made in ocean colour sensing (OCS) where optical physics applies. It has become clear that OCS, based on surface reflectance of sunlight, would benefit from complementary measurements using different technologies. Surface-leaving thermal infrared radiance (TIR) has significant potential for monitoring macroplastic floating on the water surface. For example, TIR sensing does not depend on sunlight and can look through light snow and rain. Plastic materials that are transparent or a dark colour are difficult to see in the optical spectrum but may appear opaque and easier to detect in the thermal spectrum.
We will show the results of drone surveys flying a FLIR (forward-looking infrared) camera over different plastic litter targets floating at sea. The FLIR camera senses in long-wave infrared (LWIR), 7.5 – 13.5 μm. We performed surveys during day and night, and in summer as well as winter, to cover a range of temperature and light conditions. During daylight surveys, visible and near-infrared cameras recorded concurrently for comparison. The resulting datasets show the potential for using thermal imaging to monitor floating plastic, especially for night-time when optical wavelengths are ineffective without a light source. Dependence on the different temperatures of the plastic targets, air and the sea, and cloudiness of the sky, complicates interpretation of thermal images. Consequently, some locations, seasons and times of the day will be better suited to TIR sensing of floating plastic litter than others. These may well be under conditions where OCS is limited, for example during the Arctic winter.
The methodology applies to plastic litter floating on top of the water surface, as water absorbs LWIR within the first millimetre. It will therefore not be suitable for monitoring plastic debris below the water surface. The method was tested in coastal waters and is proposed as transferable to freshwater bodies such as rivers and lakes, and to snow and ice. Future investigations will test TIR sensing on beached plastic litter to assess performance over shorelines, and will further study the capability to differentiate between different kinds of floating matter. The presented techniques provide improved detection of floating plastic litter from aerial or drone-based measurements, and can inform potential future satellite-based TIR remote sensing as ground resolution improves.
Rivers function as major pathways for the transport of plastic litter from land-based sources into the ocean. Efforts to quantify riverine plastic inputs and fluxes are increasing but are currently hindered by limited observations. As floating plastic travels downstream, it accumulates where the flow is locally reduced, forming larger patches. In this study we investigate such accumulations of floating debris using images from different satellite sensors. Applying this monitoring method to rivers enables us to detect plastic litter before it reaches the oceans.
Although current satellite mission concepts were not specifically designed for the detection of plastic debris, there is potential for some sensors to be utilized in the detection of plastics. With support from the ESA Discovery Campaign, we have access to the range of different satellite sensors needed to develop a multi-sensor monitoring method for detecting floating plastic litter. These satellite datasets, together with the data that we are collecting in an onsite clean-up initiative, help to fill the gap in observational data needed to advance data science techniques for plastic detection. Time-lapse images and photographs, in combination with waste sampling at the accumulation hotspot site, enable characterisation of the percentage areal coverage of floating debris on the water surface and of the waste composition.
First results comprise a prototype software tool for extracting statistics on floating debris from time-lapse camera images. The underlying algorithm uses the difference in brightness between the background water surface and the floating debris objects to be detected. The software produces a video highlighting debris passing through a selected zone within the frame, and generates statistics on the total number of frames used for the analysis, the total number of debris items, the total area (in pixels or, if geo-referenced, in areal units), the maximum flux of debris items and the maximum areal flux of debris items.
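The brightness-difference step described above can be sketched as follows: threshold the difference between a frame and a debris-free background image, then count connected bright regions as items and bright pixels as area. A simplified pure-numpy version; the prototype's actual parameters, zone selection and geo-referencing are not reproduced here:

```python
import numpy as np
from collections import deque

def debris_stats(frame, background, thresh=30):
    """Detect floating debris in one time-lapse frame by thresholding
    the brightness difference against a debris-free background image
    (both 2-D greyscale arrays). Returns (item count, area in pixels).
    The threshold value is an illustrative assumption."""
    mask = np.abs(frame.astype(int) - background.astype(int)) > thresh
    seen = np.zeros_like(mask, dtype=bool)
    items, area = 0, int(mask.sum())
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        items += 1                              # new connected item
        queue = deque([(y, x)])
        seen[y, x] = True
        while queue:                            # 4-connected flood fill
            cy, cx = queue.popleft()
            for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                           (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
    return items, area
```

Accumulating these per-frame counts over the image sequence gives the item and areal flux statistics the prototype reports.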
The in situ results support the development and improvement of the debris detection from satellite data and the validation of the resulting plastic maps. Ultimately, these techniques will enable us to validate model estimates of riverine fluxes, to improve our understanding of global riverine plastic fluxes from source to sea, and to contribute to integrated assessments of the state of pollution. Another outcome will be recommendations for informing ESA’s future satellite missions by utilizing our results of the plastic detection capacity of existing sensors to inspire future mission design.
Water hyacinths play an important role in gathering and transporting macroplastic litter in riverine ecosystems. These fast-growing, free-floating, and invasive freshwater plants tend to form large patches at the water surface, which makes it possible to detect and map them in freely available imagery collected by European Space Agency (ESA) satellites. In polluted rivers, hyacinth patches may thus serve as a viable proxy for macroplastics. However, at the ∼10 m spatial resolution offered by the Sentinel-1 and Sentinel-2 satellites, it is not possible to discriminate smaller items of plastic caught up within large plant patches.
For the first time, we demonstrate that river plastics are detectable in higher spatial resolution optical satellite data. In the Saigon River around Ho Chi Minh City, plastic debris was successfully detected within hyacinth patches using MAXAR’s WorldView-3 multispectral (∼1.24 m) and panchromatic imagery (0.31 m). For the multispectral data, we selected the ACOLITE atmospheric correction algorithm and applied a novel detection index that leverages WorldView's near-infrared and red bands to highlight differences between vegetation, debris, and river water. Applying a local normalization (moving average) over the scene also served to reduce contributions from the highly turbid background water. This approach allowed for the detection of river plastics within hyacinth patches floating downstream from populated areas towards coastal waters.
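For illustration, the combination of a NIR/red band index with a moving-average local normalization can be sketched in numpy as below; the normalized-difference form, window size and reflectance values are assumptions for the example, not necessarily the exact index used in the study:

```python
import numpy as np

def box_mean(a, k):
    """Local mean over a (2k+1) x (2k+1) window, using padded cumulative
    sums (edge padding keeps the output the same shape as the input)."""
    p = np.pad(a, k, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    n = 2 * k + 1
    return (c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]) / n**2

# Toy scene: turbid water with one bright-NIR pixel (floating target).
nir = np.full((9, 9), 0.02)
red = np.full((9, 9), 0.05)
nir[4, 4] = 0.20

index = (nir - red) / (nir + red)        # normalized NIR/red difference
anomaly = index - box_mean(index, k=2)   # local normalization step
```

Subtracting the local mean suppresses the slowly varying turbid-water background, so floating material stands out as a positive anomaly.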
COLOR (CDOM-proxy retrieval from aeOLus ObseRvations) is an ongoing (kick-off: 10 March 2021) 18-month feasibility study approved by ESA within the Aeolus+ Innovation programme. The objective of COLOR is to evaluate and document the feasibility of deriving an in-water Aeolus prototype product from the analysis of the ocean sub-surface backscattered component of the ultraviolet (UV) signal at 355 nm. In particular, COLOR focuses on the potential Aeolus retrieval of the diffuse attenuation coefficient for downwelling irradiance at 355 nm (Kd(355)). As Kd(355) is highly sensitive to absorption by CDOM (Chromophoric Dissolved Organic Matter), it can be used as a proxy for this variable, which contributes to regulating the Earth’s climate.
To assess the quality of in-water Kd(355) coefficients retrieved from Aeolus, the largest currently available database of in situ UV radiometry distributed across the global ocean is that provided by the Biogeochemical (BGC)-Argo array. BGC-Argo floats provide autonomous measurements of downwelling irradiance (Ed) at 380 nm over the upper 250 m of the ocean, every 1 to 10 days. These profiles are quality-checked with specifically designed procedures, and Kd(380) coefficients are then derived. In COLOR, seven areas representative of a variety of trophic and optical conditions have been identified to carry out product validation: 1) North Atlantic subpolar gyre; 2) North Atlantic subtropical gyre; 3) South Atlantic subtropical gyre; 4) Black Sea; 5) North Western Mediterranean Sea; 6) Levantine Sea (Mediterranean Sea); 7) Southern Ocean – Indian sector. These areas are also representative of the global distribution of CDOM, and experience diverse meteorological conditions (e.g., cloudiness) that could affect Aeolus data availability and retrievals.
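The Kd derivation follows from the exponential decay of light with depth, Ed(z) = Ed(0) exp(-Kd z), so Kd is the negative slope of ln(Ed) against depth. A minimal sketch with a synthetic profile (illustrative values, not BGC-Argo data):

```python
import numpy as np

def kd_from_profile(z, ed):
    """Diffuse attenuation coefficient Kd (m^-1) from a downwelling
    irradiance profile: Ed(z) = Ed(0) * exp(-Kd * z), so Kd is the
    negative slope of a least-squares fit of ln(Ed) against depth z."""
    slope, _intercept = np.polyfit(z, np.log(ed), 1)
    return -slope

# Synthetic UV profile with Kd = 0.15 m^-1 (illustrative clear water).
z = np.linspace(0.0, 50.0, 26)     # depth (m)
ed = 1.2 * np.exp(-0.15 * z)       # irradiance (arbitrary units)
```

In practice the fit is restricted to the quality-controlled part of the profile and may be confined to a chosen depth layer.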
Since BGC-Argo radiometry data in the selected areas have been available since 2012, two validation strategies will be applied in COLOR:
a) match-up analysis between BGC-Argo and Aeolus, for the period beyond 2019;
b) climatological comparison of Aeolus and BGC-Argo Kd data, i.e., including a statistically significant number of observations for each area, encompassing the whole expected seasonal variability.
Preliminary results of the validation of Aeolus CDOM proxies will be presented here.
Marine Heat Waves (MHW) can impact marine organisms directly, by affecting their optimal thermal ranges, and indirectly, via changes in ocean biogeochemistry. Marine ecosystems may modify their functions as a result, and their resilience is not assured. Space-borne observations provide a unique tool to detect such extreme events and observe changes in sea surface biology. Developing synergies with robotic autonomous observations from Biogeochemical (BGC)-Argo floats will allow monitoring of marine ecosystems and ocean biology before, during and after a MHW, from the surface down to the ocean interior.
Under the ESA’s Ocean Health initiative, the “deteCtion and threAts of maRinE Heat waves – CAREHeat” project will evaluate changes and resilience after MHW events of marine biodiversity and biogeochemistry of pelagic ecosystems around the globe, including lower to higher trophic levels. To achieve this, CAREHeat will exploit BGC-Argo optical observations, Ocean Colour satellite measurements and biogeochemical models. In particular, CAREHeat will develop synergies between these multiple observational platforms to address the following scientific questions:
1. What is the effect of MHW on phytoplankton chlorophyll concentration at the ocean surface and along the water column?
2. Are phytoplankton chlorophyll changes related to modifications in phytoplankton biomass or physiology?
3. How is the phytoplankton community structure affected by MHW?
4. How do community structure changes impact ocean biogeochemistry and propagate over the water column, affecting nutrient profiles and oxygen levels?
5. How do changes at the lowest trophic levels impact carbon fluxes in support of higher trophic levels (micro-nekton, apex predators)?
6. Is there a biogeochemical signature in the pH and air-sea CO2 fluxes during all MHW events, and at what degree of MHW severity does such a signature become significant?
We will present preliminary results on detected MHW events at the global scale, and the observational strategies and datasets we will adopt to assess the impacts on marine ecosystems.
The profiles of light and its spectral distribution are linked to most of the physical, chemical, and biological processes prevailing in the water column. Here, we present vertically resolved light models of downwelling irradiance (ED) and photosynthetically available radiation (PAR) for the global ocean, built by merging light profiles with satellite ocean color radiometry products and the physical (temperature and salinity) properties prevailing at the location of the light profiles. The present work is inspired by the SOCA (Satellite Ocean-Color merged with Argo data to infer bio-optical properties to depth) methodology originally proposed by Sauzède et al. (2016). SOCA is based on an artificial neural network methodology, specifically a Multi-Layer Perceptron (MLP). The present light models rely on SOCA-type MLPs trained with light profiles (ED/PAR) acquired by Biogeochemical (BGC)-Argo floats as outputs. The inputs of the MLP consist of surface products derived from satellite ocean color radiometry extracted from GlobColour (Rrs, PAR and Kd490), temperature and salinity profiles from BGC-Argo, as well as temporal components (day of the year and local time, in cyclic transformation). The output of each model corresponds to ED profiles at the three wavelengths of the BGC-Argo measurements (380, 412, and 490 nm) and to PAR profiles.
The quality of the light profile retrieval by these models is assessed using two different and independent datasets: one based on independent BGC-Argo profiles not used for the training, the other originating from SeaBASS. These light models show satisfactory predictions when compared with real measurements. The estimated accuracy metrics for the two validation datasets are consistent and demonstrate the robustness of these light models for global ocean applications. Further details and prospects of this study will be discussed during the presentation.
Keywords: Global Ocean light models, ED380, ED412, ED490, PAR
The NASA-led EXPORTS (EXport Processes in the Ocean from RemoTe Sensing) project seeks to quantify the fate and export of carbon from the euphotic zone via the biological carbon pump. The strength of the biological carbon pump can be assessed in part by the rate of net community production (NCP), the sum of all photosynthetic productivity minus respiratory losses of carbon. In a net autotrophic system, this excess fixed carbon is available for export to the deep ocean, where it can be sequestered from the atmosphere on decadal to millennial time scales. Two field campaigns were conducted to capture the end members of a range of ecosystem/carbon cycling states: the productive North Atlantic spring bloom in May 2021, and the iron-limited subarctic North Pacific in August 2018. Ship-based operations were bolstered by both satellite observations and numerous autonomous assets, including two BGC-Argo floats at each site, supported by NSF, NOAA, and NASA. The floats carry biogeochemical sensor suites (e.g. CTD, O2, NO3, pH, bio-optics) to enhance the spatiotemporal sampling range and produce budgets of oxygen, nitrate, and particulate organic carbon. Here we present a comparison of NCP measured in situ by BGC-Argo floats to satellite- and hybrid float-satellite-based estimates of NCP during the EXPORTS field campaigns. Float-derived NCP employs a mass balance approach using high-resolution oxygen and nitrate data collected by autonomous floats to determine NCP in the euphotic zone. Satellite-based estimates of NCP are made using algorithms trained on the oxygen-argon ratio anomaly that utilize observed sea surface temperature and modeled net primary productivity. Net primary productivity rates were determined via the Vertically Generalized Production Model (VGPM), the Carbon-based Production Model (CbPM), and the Carbon, Absorption, and Fluorescence Euphotic-resolving Model (CAFE) algorithms. 
These model algorithms are implemented with both satellite-only and integrated float-satellite inputs to explore the potential of a synergistic approach between BGC-Argo and remote sensing capabilities. We discuss how our results compare across estimation methods, how they link to the ship-based measurements made during the field campaigns, and how they reflect the distinct carbon cycling regimes of the two study sites.
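The float-derived mass-balance approach described above can be illustrated with a deliberately simplified mixed-layer oxygen budget. This is a sketch only: advection, entrainment, diapycnal mixing and bubble injection are neglected, and all names, units and values are illustrative assumptions, not the EXPORTS implementation:

```python
def ncp_from_oxygen(o2_start, o2_end, dt_days, mld_m, airsea_flux):
    """Highly simplified mixed-layer oxygen mass balance:
    NCP ~ rate of change of the O2 inventory plus the air-sea loss.
    o2_* in mmol m^-3, mld_m in m, airsea_flux in mmol m^-2 d^-1
    (positive = outgassing to the atmosphere).
    Returns NCP in mmol O2 m^-2 d^-1."""
    inventory_change = (o2_end - o2_start) * mld_m / dt_days
    return inventory_change + airsea_flux

# Illustrative numbers: O2 rises by 2 mmol m^-3 over 10 days in a 50 m
# mixed layer while 2 mmol m^-2 d^-1 outgasses
ncp = ncp_from_oxygen(250.0, 252.0, 10.0, 50.0, 2.0)  # 12.0 mmol O2 m^-2 d^-1
```

In practice, the O2-based NCP is converted to carbon units with a photosynthetic quotient, and the neglected transport terms must be estimated or shown to be small.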
Oceanic mesoscale eddies cover more than 40% of the ocean surface, playing an important role in marine mass transport, energy exchange, and air-sea coupling. Because of these vital effects, mesoscale eddies have attracted considerable attention from oceanographers. As an increasingly studied topic, the biological role of mesoscale eddies at the sea surface has gradually been revealed. Benefiting from the emergence of Biogeochemical Argo (BGC Argo) floats, it has become possible to explore eddies' biological influences at the subsurface, which has contributed to a growing number of studies on eddy biology in recent years. A crucial result, that anticyclonic eddies (AEs) and cyclonic eddies (CEs) can induce contrasting sea surface chlorophyll (CHL) anomalies, has been revealed from a global point of view as well as in a few regional open oceans, such as the Pacific and the Southern Ocean. As eddies' biological effects in the North Atlantic are not sufficiently understood, our study focuses on eddies' influences on phytoplankton and zooplankton by comprehensively analyzing observations from satellites, BGC Argo, and cruises. The results derived from multi-satellite merged Ocean Colour CCI products show that eddy-induced sea surface CHL anomalies vary with latitude in the North Atlantic, related to eddy properties. At the surface, eddies' effects on CHL in the subtropical and mid-latitude regions are primarily driven by Ekman pumping and eddy pumping, respectively. Results derived from BGC Argo illustrate that both Ekman pumping and eddy pumping are evident in the mid-latitude subsurface water, whereas in the subtropical region eddy pumping dominates in the subsurface water. Statistics from BGC Argo also illustrate that AEs/CEs tend to decrease/increase the subsurface chlorophyll maximum (SCM) and to lower/raise its depth in both subtropical and mid-latitude regions.
Continuous Plankton Recorder (CPR) data reveal eddies' influences on zooplankton, with the abundance of copepods in CEs higher than in AEs during daytime, consistent with eddies' surface CHL concentrations. Besides, the diel vertical migration (DVM) of copepods is found to be more evident in AEs than in CEs. Particle backscattering observations from BGC Argo indicate that the pronounced DVM in AEs may not be related to the CHL concentrations at the subsurface, but is rather an active choice of the zooplankton. By comprehensively analyzing satellite observations, BGC Argo profiles, and CPR samples, this study reveals eddies' effects on plankton in the North Atlantic, deepening our understanding of the biological role of eddies.
Since the year 2000, 2 million temperature and salinity profiles have been collected by the Argo program with unprecedented spatial and temporal coverage. Since 2010, thanks to a new generation of profiling floats (e.g. BGC Argo floats equipped with chlorophyll-a, downwelling irradiance, backscattering, nitrate and optode sensors) and in particular to Iridium telemetry, the acquisition frequency has dramatically increased, providing a new understanding of the dynamics of float displacement in the water column.
The possibility of extracting sea-state information from the analysis of high-resolution pressure measurements linked to float motion is investigated here. Particular focus is put on the speed anomaly close to the surface, relative to the nominal speed expected for a calm sea state. The comparison between the speed anomalies of floats in the Mediterranean Sea and concurrent sea state measurements from a weather buoy in the same area suggests that float behaviour could be an indicator of sea state.
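The near-surface speed-anomaly diagnostic described above can be sketched in a few lines, approximating 1 dbar of pressure as 1 m of depth. The nominal calm-sea ascent speed used here is an assumed placeholder value, not a calibrated Argo figure:

```python
def ascent_speed_anomaly(pressures_dbar, times_s, nominal_speed_m_s=0.09):
    """Mean near-surface ascent speed minus a nominal calm-sea speed.
    pressures_dbar: decreasing pressure samples as the float rises;
    times_s: matching timestamps in seconds. 1 dbar ~ 1 m of depth."""
    speeds = []
    for i in range(1, len(pressures_dbar)):
        dz_m = pressures_dbar[i - 1] - pressures_dbar[i]  # metres risen
        dt_s = times_s[i] - times_s[i - 1]
        speeds.append(dz_m / dt_s)
    return sum(speeds) / len(speeds) - nominal_speed_m_s

# A non-zero anomaly (float slowed or accelerated near the surface,
# e.g. by waves) is the candidate sea-state proxy.
anomaly = ascent_speed_anomaly([10.0, 5.0, 0.0], [0.0, 50.0, 100.0])
```

Real processing would additionally restrict the computation to a near-surface pressure window and screen for outliers.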
This relationship, applied to the Argo database, offers the unique opportunity to obtain an in-situ estimation of the sea state with the spatial and temporal coverage of the Argo database, with possible applications for Earth observation satellites such as SENTINEL-3. The Significant Wave Height (SWH) and wind speed from SENTINEL-3 or from the different altimetry satellites (TOPEX, JASON-1, ERS-2, ENVISAT and GFO – GEOSAT) will be compared to sea-state proxies extracted from the Argo floats.
The first decade of Argo will also be examined in order to determine whether extreme weather events can be detected despite the low vertical resolution of the data acquired over this period.
Finally, results will be presented from floats equipped with an inertial unit which measures the tilt and rotation of the float. We will investigate how this type of sensor, easily implemented on Argo floats, can improve the sea state estimation compared to the previous method.
Evaluating the spatio-temporal distribution of phytoplankton is critical for assessing the impact of climate change on marine biogeochemistry and the food web (Fennel et al., 2019), along with ocean-atmosphere exchanges and the carbon cycle (Falkowski, 2012). Phytoplankton abundance and composition (as indicated by chlorophyll-a concentration), which are essential for estimating primary production, can be detected and quantified using optical sensors (Bracher et al., 2017). These can be operated on ship-towed undulators, ship-based inline systems (e.g., Bracher et al., 2020) or autonomous platforms such as satellites (e.g., Mouw et al., 2017) and profiling floats (e.g., BGC-Argo, see Sauzède et al., 2015). Combining these disparate data sources remains a major difficulty due to varying temporal and spatial resolution and an insufficient definition of uncertainty. Data fusion, feature extraction, and other machine learning approaches have been successfully used to overcome this issue in different applications such as urban area mapping and change detection (Palubinskas and Reinartz, 2011; Palubinskas, 2012). Accordingly, the objective of this study is to develop a complete data processing chain for combining various Phytoplankton Functional Type (PFT) datasets and associated uncertainties at various spatial and temporal scales. The research focuses on data acquired during the RV Polarstern PS113 expedition (10 May to 9 June 2018) along the Atlantic transect from the Patagonian shelf to the English Channel. These datasets consist of: 1) PFT retrieved from a ship-towed vertically undulating radiometer (Bracher et al., 2020), 2) PFT retrieved from an AC-S flow-through sensor (following Liu et al., 2019) and 3) full-resolution Sentinel-3 OLCI PFT retrieved with the Xi et al. (2021) algorithm.
The PFT retrieval technique applied to these datasets is spectral feature extraction, decomposing the spectral data based on empirical orthogonal functions (EOFs) for the estimation of PFT chlorophyll-a concentration (Bracher et al., 2015). The final product has a spatial resolution of around 300 meters and a temporal resolution of about 3 days. This study emphasizes the potential of developing synergy between space-based ocean observations and in situ biogeochemical sensors. The generic processing chain developed here can be applied to similar sensor data from expeditions, profiling floats or gliders. Future research should focus on improving the temporal resolution, reducing the uncertainty and introducing depth information as the fourth dimension of this product.
Satellite Ocean Colour Radiometry (OCR) is an unprecedented tool to understand marine ecosystems and monitor their response to climate change at global scale. Nevertheless, this tool requires high-quality in-situ data for calibration and validation and is also limited to the observation of near-surface waters. In this context, the BioGeoChemical (BGC) Argo network has demonstrated its power, firstly, to produce radiometric data useful for Cal/Val activities and, secondly, to produce additional data which can be used as a complement to increase the potential of satellite observations or to allow their extension at depth to obtain a 3D view of the ocean. The recent integration of the first hyperspectral irradiance sensor on BGC-Argo profilers will create a new field of synergy with space-based measurements. For Cal/Val activities, the use of hyperspectral sensors will allow the production of validation data in line with current (ASI’s PRISMA) and future (e.g., NASA’s PACE) satellite products. Hyperspectral data can also be used, in conjunction with multispectral satellite data, to improve the identification of phytoplankton groups at the surface and at depth, with associated societal impacts in the context of climate change, resource management and biohazard surveillance (i.e., harmful algal blooms).
We will present here the technical aspects of the integration of the Ramses sensor (Trios GmbH) on BGC-Argo profilers, with a particular focus on the energy aspects (impacting the lifetime of the profiler) and on the acquisition frequency and the resulting volume of data (impacting the data quality but also the operational cost). Initial results will be presented for floats deployed in the Mediterranean and Baltic Seas, and a first method to carry out quality control of these data will be shown. The quality of the data obtained will be compared to that of other radiometric sensors mounted on the same floats, in particular the OCR500 sensors (Sea-Bird Scientific) used today on the global array of BGC-Argo profilers. A first inter-comparison of the results obtained in the two deployment areas will be presented and discussed. Finally, we will present the perspectives of such a sensor for the BGC-Argo network and the synergy that would result from it for space applications. In particular, we will present the possibility of equipping floats with two Ramses sensors to measure downwelling irradiance (Ed) and upwelling radiance (Lu) in order to obtain hyperspectral reflectance.
Since climate change is directly impacting the Arctic, landscapes underlain by permafrost are warming and experiencing increased thaw and degradation. The increased warming of organic-rich frozen ground is projected to become a highly relevant driver of greenhouse gas release into the atmosphere. Retrogressive Thaw Slumps (RTS) are dynamic thermokarst features which result from slope failure after ice-rich permafrost thaws. Active RTS are characterized by steep headwalls up to tens of meters high and dynamic slump floors that can cover several hectares, mobilizing thawed sediments, carbon, and nutrients into downstream environments. While they are small-scale features, they can reach considerable annual growth rates, impacting their immediate surroundings abruptly and irreversibly. Thousands of RTS have been inventoried in northwestern Canada, associated with regions where buried glacial ice is melting in thawing permafrost. These inventories showed that thaw slumping substantially modifies terrain morphology and alters the discharge into aquatic systems, also resulting in infrastructure instabilities and ecosystem changes. Most RTS occur along coast- and shorelines, leading to changes in the optical and biogeochemical properties of aquatic systems which can have severe consequences for the aquatic food web. Furthermore, recent studies revealed increased temporal thaw dynamics of RTS in northern high latitudes and projected that abrupt thermokarst disturbances contribute significant amounts of greenhouse gas emissions.
As observed in most Arctic regions, RTS have been developing in the Russian High Arctic. However, research on RTS here has focused on northern West Siberia, where industrial development required mapping of potential landscape hazards resulting from permafrost thaw. In most other regions of the Russian High Arctic, RTS occurrence and distribution are so far poorly known. The objective of this study is to better understand growth patterns and development rates of RTS at high temporal resolution in Arctic Russia using remote sensing data for the last decade (~2013 to 2020).
We investigated five different sites comprising hundreds of square kilometers in the continuous permafrost zone of the Russian Arctic. Our sites are located on Novaya Zemlya, Kolguev Island, Bolshoy Lyakhovsky Island and the Taymyr Peninsula. To investigate changes in RTS numbers and extent, a GIS-based inventory of manually mapped RTS was created. The inventory is based on multispectral imagery from very-high-resolution satellite sensors, including PlanetScope, RapidEye, Pleiades and SPOT. Cloud-free images were obtained between 2013 and 2020, for each year or every few years depending on image availability. Additional datasets such as the ArcticDEM, the ESRI Satellite basemap, and Tasseled Cap Landsat Trends were used to support the mapping process. From the extracted polygons, changes in RTS number and surface area were calculated. Besides this, thermal denudation and thermal abrasion rates for coastal slumps were computed using the DSAS tool in ArcMap.
First results provide evidence that the inventory allows us to quantify the planimetric development of RTS at the studied sites over time, and further show that in most cases thaw slumps have become more active in recent years, increasing in size and number. At Kolguev we retrieved thaw slumping rates along two sections of the west coast. The slumps located further north reveal average thermal abrasion rates of 1.3 m/yr and average thermal denudation rates of 3.9 m/yr between 2013 and 2020. For the same time period we found that the slumps located further south show average thermal abrasion rates of 5.2 m/yr and average thermal denudation rates of 2.9 m/yr. We will report rates for all sites and compare them with respect to the various environmental settings. Our approach gives a first insight into the variability and magnitude of slumping observed across the diverse settings of the Russian High Arctic permafrost region.
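Rates of the kind reported above are typically computed as end-point rates along transects: the mapped retreat distance divided by the elapsed time, as in shoreline-analysis tools such as DSAS. A minimal sketch, with hypothetical transect displacements chosen merely to reproduce the southern Kolguev values over the 2013-2020 (7-year) period:

```python
def end_point_rate(pos_start_m, pos_end_m, year_start, year_end):
    """End-point rate: total displacement along a transect divided by
    the elapsed time, in m/yr."""
    return (pos_end_m - pos_start_m) / (year_end - year_start)

# Hypothetical headwall/shoreline displacements (not measured values):
thermal_abrasion_rate = end_point_rate(0.0, 36.4, 2013, 2020)   # ~5.2 m/yr
thermal_denudation_rate = end_point_rate(0.0, 20.3, 2013, 2020)  # ~2.9 m/yr
```

DSAS averages such rates over many transects per slump or coastal section; the single-transect version above only illustrates the arithmetic.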
The data will contribute substantially to our understanding of regional permafrost thaw in the Russian High Arctic and will be further useful for identifying local thaw dynamics and possibly permafrost characteristics. Our data also allow us to examine the volumetric loss of sediments, ice, and carbon associated with abrupt permafrost thaw by RTS, which is crucial for the assessment of greenhouse gas emissions. In addition, the dataset provides valuable ground truth information for training and validation of deep learning approaches for mapping RTS.
Permafrost is a key indicator of global climate change and hence considered an Essential Climate Variable (ECV). Current studies show a global warming trend of permafrost, which induces widespread permafrost thaw, leading to near-surface permafrost loss at local to regional scales and impacting ecosystems, hydrological systems, greenhouse gas emissions, and infrastructure stability. Understanding abrupt, rapid permafrost thaw dynamics, such as thermokarst formation, lake drainage, and retrogressive thaw slumps, which unfold within merely a couple of days to years and impact the landscape irreversibly, is of particularly high relevance, as their projected greenhouse gas emissions, including methane and carbon dioxide, are substantial.
Permafrost is defined by the thermal state of the subsurface but is greatly influenced by changes in the surface state, which is tightly connected to the atmosphere, biosphere, geosphere, and cryosphere through topography, water, snow and vegetation. Hence, examining changes in the surface state will help to identify regions that are particularly vulnerable to permafrost thaw. Our primary aim is to investigate changes in the surface state by assessing positive and negative feedbacks on the surface state that potentially influence permafrost, and thus to derive an index of permafrost vulnerability to thaw.
Earth observation (EO) based datasets provide a great opportunity to analyse relevant variables impacting the surface state and to obtain trends and changes from long-term consistent records. Relevant variables for the assessment are land surface temperature, land cover, snow cover, fire, albedo, soil moisture, and information on the freeze/thaw state, all of which are ECVs as well and are available globally following ESA CCI and GCOS product developments. Furthermore, two modelled permafrost_cci products are available for comparison: ground temperature and active layer thickness. However, a combined assessment of these products to better understand, quantify, and project permafrost changes and trajectories is still missing.
Therefore, the objective of this ongoing project is to develop a permafrost vulnerability framework which focuses on the surface state, including the ECVs listed above. By conducting spatiotemporal variability analyses of the individual ECVs, correlation assessments among them, and decadal trend analyses, a better understanding of their positive and/or negative feedbacks will be established. Combining the feedback results of the ECVs in a vulnerability assessment will help to identify prevailing trends in the surface state and to evaluate consequences for the thermal state of the permafrost.
Preliminary results indicate that the individual ECVs show differing trends in the spatiotemporal variability analysis, pointing to both positive and negative feedbacks. The results will be incorporated into a circumpolar Arctic permafrost vulnerability assessment, integrating the coupled feedbacks and determining their combined effect on the thermal state of the permafrost.
The resulting new permafrost vulnerability index will give a more comprehensive and spatially detailed understanding of circumpolar permafrost vulnerabilities and their magnitude. It will indicate areas that are particularly vulnerable to thaw and hence highlight areas of particular importance for close monitoring. The circumpolar Arctic permafrost vulnerability index dataset will be a solid foundation for a wide range of permafrost-thaw studies, such as hydrological change, infrastructure stability, ecosystem change or greenhouse gas emissions, and will also be useful for qualitatively assessing the permafrost-climate feedback.
With the Earth’s climate rapidly warming, the Arctic represents one of the most vulnerable regions to environmental change. Permafrost, as a key element of the Arctic system, stores vast amounts of organic carbon that can be microbially decomposed into the greenhouse gases CO2 and CH4 upon thaw. Extensive thawing of these permafrost soils therefore has potentially substantial consequences for greenhouse gas concentrations in the atmosphere. In addition, thaw of ice-rich permafrost lastingly alters the surface topography and thus the hydrology. Fires represent an important disturbance in boreal permafrost regions and increasingly also in tundra regions, as they combust the vegetation and upper organic soil layers that usually provide protective insulation to the permafrost below. Field studies and local remote sensing studies suggest that fire disturbances may trigger rapid permafrost thaw, with consequences often already observable in the first years post-disturbance. In polygonal ice-wedge landscapes, this becomes most prevalent through melting ice wedges and degrading troughs. The further these ice wedges degrade, the more troughs will likely connect and build an extensive hydrological network with changing patterns and degrees of connectivity that influences hydrology and runoff throughout large regions. While subsiding troughs over melting ice wedges may host new ponds, an increasing connectivity may also subsequently lead to more drainage of ponds, which in turn can limit further thaw and help stabilize the landscape. Whereas fire disturbances may accelerate the initiation of this process, the general warming of permafrost observed across the Arctic will eventually result in widespread degradation of polygonal landscapes. To quantify the changes in such dynamic landscapes over large regions, remote sensing data offers a valuable resource.
However, considering the vast and ever-growing volumes of Earth observation data available, highly automated methods are needed that allow extracting information on the geomorphic state of ice-wedge trough networks and their changes over time.
In this study, we investigate these changing landscapes and their environmental implications in fire scars in Northern and Western Alaska. We developed a computer vision algorithm to automatically extract ice-wedge polygonal networks and the microtopography of the degrading troughs from high-resolution, airborne laser-scanning based digital terrain models (1 m spatial resolution; full-waveform Riegl Q680i LiDAR sensor). To derive information on the availability of surface water, we used optical and near-infrared aerial imagery at spatial resolutions of up to 5 cm captured by the Modular Aerial Camera System (MACS) developed by DLR. We represent the networks as graphs (a concept from computer science to describe complex networks) and apply methods from graph theory to describe and quantify hydrological network characteristics of the changing landscape.
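The graph representation mentioned above can be sketched in a few lines: trough junctions become nodes, trough segments become edges, and standard graph algorithms then yield connectivity metrics such as the number of separate drainage networks. A minimal illustration (node identifiers are hypothetical; in the study the networks are extracted from the terrain models):

```python
from collections import defaultdict

def build_trough_graph(segments):
    """Undirected graph of an ice-wedge trough network:
    nodes = junctions, edges = trough segments."""
    graph = defaultdict(set)
    for a, b in segments:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def connected_components(graph):
    """Count connected components, i.e. separate drainage networks."""
    seen, components = set(), 0
    for start in graph:
        if start in seen:
            continue
        components += 1
        stack = [start]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(graph[node] - seen)
    return components

# Two isolated trough networks; a new connecting segment would merge them
segments = [("j1", "j2"), ("j2", "j3"), ("j4", "j5")]
g = build_trough_graph(segments)  # connected_components(g) gives 2
```

As troughs degrade and connect, the component count drops and network-wide drainage becomes possible, which is exactly the kind of metric graph theory makes cheap to compute.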
Due to a lack of historical very-high-resolution data, we cannot investigate a dense time series of a single representative study area to follow the evolution of the microtopographic and hydrologic network, but instead leverage the possibilities of a space-for-time substitution. We thus investigate terrain models and multispectral data from 2019 and 2021 for ten study areas located in ten fire scars of different ages (up to 120 years between the date of disturbance and the date of data acquisition). With this approach, we can infer past and future states of degradation from the currently prevailing spatial patterns and show how this type of disturbed landscape evolves over time. Representing such polygonal landscapes as graphs and reducing large amounts of data to a few quantifiable metrics supports the integration of results into, e.g., numerical models and thus greatly facilitates the understanding of the underlying complex processes of greenhouse gas emissions from permafrost thaw. We highlight these extensive possibilities but also illustrate the limitations encountered in the study, which stem from the limited availability of and access to pan-Arctic very-high-resolution Earth observation datasets.
The Essential Climate Variable (ECV) “Permafrost” is characterized by the variables “ground (subsurface) temperature” and “thaw depth”, i.e. the maximum depth of the seasonal thaw layer. The Permafrost_CCI project by the European Space Agency (ESA) has compiled Earth Observation (EO) based products for the permafrost ECV spanning the last three decades. As ground temperature and thaw depth cannot be directly observed by space-borne sensors, we have ingested different satellite and reanalysis datasets into a ground thermal model, which makes it possible to quantify permafrost state changes in Arctic and high-mountain environments.
The Permafrost_CCI algorithm uses remotely sensed datasets of Land Surface Temperature (MODIS LST) and landcover (ESA Landcover_CCI) to drive the transient permafrost model CryoGrid_CCI at 1 km spatial resolution. To gap-fill the LST time series and account for the influence of the seasonal snow cover, ERA-5 reanalysis data are employed. Furthermore, ERA-5 reanalysis is used to force the model for the period before 2003, when MODIS LST is not fully available, and we apply a pixel-by-pixel bias correction using the 2003-2019 overlap period to achieve coherent time series. The correct representation of ground properties is critical for the performance of the transient algorithm, in particular for reproducing the depth of the thaw layer. Therefore, the Permafrost_CCI project has synthesized typical subsurface stratigraphies for the different CCI landcover classes, based on a large number of analyzed soil pedons from different permafrost areas. Moreover, the Permafrost_CCI algorithm does not perform only a single run per pixel, but simulates subpixel variability with an ensemble accounting for the typical variations in snow depth and ground stratigraphies. From the model ensemble, it is possible to infer the fraction covered by permafrost in every 1 km pixel and thus reproduce the well-known zonations of sporadic, discontinuous and continuous permafrost. We report on the performance of the year-3 product, validated against a variety of field observations. Finally, we discuss the possibility of improving the performance by ingesting further satellite products, such as remotely sensed snow-covered area, into the processing chain.
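The pixel-by-pixel bias correction described above can be illustrated with the simplest possible scheme: shift the pre-2003 ERA-5 series at each pixel by the mean MODIS-minus-ERA-5 difference over the overlap period. This is a sketch under that mean-offset assumption; the operational CryoGrid_CCI correction may be more elaborate (e.g. resolved by season):

```python
def bias_correct(era5_series, era5_overlap, modis_overlap):
    """Per-pixel mean-offset bias correction of an LST time series (K).
    era5_overlap and modis_overlap are co-located samples from the
    overlap period; era5_series is the series to be corrected."""
    bias = sum(m - e for m, e in zip(modis_overlap, era5_overlap)) / len(modis_overlap)
    return [t + bias for t in era5_series]

# ERA-5 runs 1 K cold against MODIS at this pixel, so the pre-2003
# series is shifted up by 1 K
corrected = bias_correct([270.0, 272.0], [270.0, 272.0], [271.0, 273.0])
```

Applied independently at every 1 km pixel, this yields a coherent forcing series across the reanalysis/satellite transition.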
C-band SAR observations have proven to be of high value for the characterization of spatial patterns of wetlands and soil organic carbon content across the Arctic. Of specific interest are acquisitions under frozen conditions, as they reflect surface roughness and volume scattering only, which can be combined with multispectral data such as Sentinel-2 for enhanced landcover descriptions (classifications of vegetation communities and wetlands, vegetation height retrieval, etc.).
However, a range of disturbance factors which cause uncertainties in the retrieval have been identified. These include meteorological conditions during the acquisition, which lead to changes in the overlying snowpack in winter, and other disturbances before the acquisition (under unfrozen conditions) related to natural hazards, including fires and landslides.
Several sites across the Arctic, covering continuous to discontinuous permafrost, have been selected to quantify the impact of disturbances on the retrieval of landcover and soil characteristics. Pre-acquisition disturbances are considered starting from the 1980s. The focus is on Sentinel-1, following a space-for-time concept. Results demonstrate that there is a considerable influence, which needs to be considered for carbon-cycle related landcover characterization across the Arctic.
Earth System Models used to predict permafrost degradation currently consider only gradual permafrost changes. However, various rapid permafrost thaw processes are known from field and remote sensing studies. Especially over the past decade, an increase in rapid permafrost degradation has been observed with temporally and spatially high-resolution remote sensing. These processes involve loss of excess ground ice, surface subsidence and erosion, and unlock so far frozen soil carbon. Under accelerating climate warming, it is therefore important to understand and quantify the underlying dynamics in space and time and to work towards integrating such observations and process models to enhance climate predictions. A major obstacle to integrating rapid permafrost thaw processes in Earth System Models is the spatial confinement and abrupt initiation of thermokarst and thermo-erosion, which are difficult to predict due to the complex and not yet fully understood interactions between different processes and the underlying spatial heterogeneities. Further complexity is caused by the resulting landscape changes, which alter drainage patterns and vegetation growth and induce accelerating and decelerating feedback mechanisms. Including all process interactions at the required resolution and scale in a full 3D transient model is therefore infeasible, and model simplifications are required. Combining remote sensing and modelling can help in two ways: (i) to understand and define relevant processes and parameters, and (ii) to fine-tune the model setup and parametrization for reproducing thaw-induced landforms.
Here we present a preliminary study and conceptualization of combining remote sensing and permafrost modelling using the permafrost landscape model CryoGrid. The goal is to better understand and predict the impact of rapid permafrost degradation processes, applied to the New Siberian Islands in the Russian High Arctic. Until now, only a few studies have covered the New Siberian Islands due to their remote location and the consequent lack of available ground truth data, as well as the reduced amount of remote sensing data due to frequent cloud cover. However, similar to other High Arctic islands in Canada, where the increase in mean air temperature was amplified by extensive sea ice loss and resulted in widespread decay of near-surface permafrost, the New Siberian Islands are expected to have been affected by substantial warming and permafrost thaw over the last decade. The recent increase in available remote sensing data has therefore made them an area of particular interest for studying permafrost degradation. Predominant permafrost degradation landforms found on the New Siberian Islands include retrogressive thaw slumps along coastal sections and melt ponds on ice-rich Yedoma uplands, drained by a network of thermo-erosion gullies and embedded in degraded, Baydzharakh-patterned slopes. We aim to use these rapid thaw landforms to quantify rates of thaw at two different scales: (i) at the process scale of thaw slumps, we plan to better understand and predict the volumetric loss of ice content, and (ii) at the catchment scale, we will study the interaction between altered drainage patterns and permafrost thaw.
As a first step, we use multi-source remote sensing data (e.g. optical and SAR: Hexagon, Sentinel-1 and -2, Landsat and VHR imagery) and deep learning for a detailed landscape characterization, and to map thaw slump evolution and melt pond expansion as well as their interconnection with the extensive network of thermo-erosion gullies. Furthermore, we quantify topographical and volumetric changes extracted from multitemporal ArcticDEM data. In a next step, we analyze spatial patterns, temporal changes and correlations with other environmental drivers (e.g. weather extremes, climatic changes, ecosystem and hydrological changes) with the goal of identifying relevant additional processes (e.g. mechanical erosion, drainage) and site- or landform-specific parametrizations to be included in the permafrost model CryoGrid. Based on this, we evaluate different options to assimilate observed key landscape parameters such as slope characteristics, subsidence, topographic roughness and drainage patterns into the modelling process to derive (i) improved parametrizations and (ii) ground ice contents through inverse modelling. As a last step, different options will be assessed to predict future permafrost degradation, including stochastic approaches to tackle the spatially and temporally abrupt initiation of permafrost degradation landforms. Finally, our conceptualization should be transferable to other permafrost regions in order to improve process understanding and the prediction of rapid permafrost degradation at larger scales.
Drained lake basins (DLBs) are often the most common landforms in lowland permafrost regions in the Arctic, covering 50% to 75% of the landscape. However, detailed assessments of DLBs, including their distribution, abundance and spatial variability across scales, are limited. A recently published data set (Bergstedt et al., 2021) provides a Landsat-8 based statistical assessment of DLB occurrence, focusing on the Alaska North Slope. In this study we focus on the added benefit of higher resolution satellite imagery, specifically the imagery available through Sentinel-1 and Sentinel-2. Higher resolution imagery allows for an in-depth assessment of possible uncertainties in the underlying Landsat-8 based DLB data product. The combination of Synthetic Aperture Radar (SAR, Sentinel-1) and multispectral imagery (Sentinel-2) allows us to take into account a range of surface cover characteristics. The Landsat-8 based classification provides an ‘ambiguous’ class, describing areas that could not confidently be classified as being a DLB or not. For this assessment we focus on selected areas in the Arctic, covering different cases of possible uncertainty. Possible uncertainties may be tied to mixed pixels at the edges of DLBs, to other landforms, such as seasonally flooded connections between existing lakes, being misclassified as DLBs, and to gaps in the input data sets. A high-resolution analysis of the spatial distribution of DLBs in lowland permafrost regions is important for quantitative studies on landscape diversity, wildlife habitat, permafrost, hydrology, geotechnical conditions, and high-latitude carbon cycling. Specifically, models and upscaling efforts concerning carbon cycling and gas fluxes require detailed information on landscape features and disturbance processes, some of which can be inferred from DLB mapping efforts. Therefore, an in-depth analysis of possible uncertainties is of high importance.
Landcover information not only provides insight into above-ground conditions such as vegetation communities, it is also of high value as a proxy for sub-ground conditions. Such information is urgently needed at high spatial resolution and with adequate thematic content for Arctic permafrost regions in order to parameterize models (permafrost models, ESMs) and for studies focusing on climate change impact assessment.
A prototype for an Arctic landcover description was developed in ESA DUE GlobPermafrost and has been derived for various sites across the Arctic for evaluation, with a focus on above-ground conditions (vegetation communities). The retrieval is currently being reassessed in the context of ESA Permafrost_cci by (1) considering updated user requirements, broadening the potential range of applications and revisiting the needs of ESMs and flux upscaling approaches (also considering initiatives such as AMPAC), (2) better taking into account sub-ground conditions using soil in situ data, and (3) transferring the retrieval to a machine learning approach.
The developed scheme fuses Sentinel-1/2 data acquired since 2015. Results are further combined with a recently developed dataset on Arctic settlements and infrastructure (the SACHI dataset from H2020 Nunataryuk) in order to differentiate natural from artificial barren areas.
The status of the dataset development including first circumpolar assessment results will be presented.
Northern high latitudes are undergoing rapid change as the climate warms. Methane (CH4) emissions from the high latitudes, especially from Arctic and subarctic areas, involve open questions and considerable uncertainties as typical Arctic conditions change due to warming. Globally, the main natural source of methane is wetlands; while tropical wetlands contribute most of these emissions, high-latitude wetlands are associated with significant uncertainties, especially in future projections. High seasonal temperature variations and snow cover over frozen ground are common features of the high latitudes, and high-latitude wetlands are partly located in permafrost regions. The methane emissions from a specific high-latitude wetland depend on the soil properties and conditions. Previous in-situ and ground-based studies have shown that frost and snow cover over frozen ground have both direct and indirect effects on wetlands as a methane source.
We study the dependencies between environmental drivers, for example frost and snow, and column-averaged methane at Northern high latitudes. We concentrate on satellite observations, but in addition use in-situ measurements and ground-based total column measurements to support the analysis. The column-averaged methane (XCH4) observations from the Greenhouse Gases Observing Satellite (GOSAT) and the Tropospheric Monitoring Instrument (TROPOMI) onboard the Copernicus Sentinel-5 Precursor satellite will be used as the main methane data sources. To detect soil freezing, we use the soil freeze/thaw (F/T) product based on observations from the European Space Agency’s (ESA) Soil Moisture and Ocean Salinity (SMOS) satellite. Snow extent and snow properties will be studied, for example, from the IMS Daily Northern Hemisphere Snow and Ice Analysis from the United States National Ice Center (USNIC) and from snow clearance day data. The ground-based total column CH4 will be obtained from high-latitude Total Carbon Column Observing Network (TCCON) sites.
As a result, we will assess the connections of seasonal snow and frost to the seasonal cycle of XCH4 at a larger scale over the boreal and Arctic regions. These areas are scarcely and sporadically covered by in-situ measurements, and satellite observations therefore expand the opportunity to study remote areas that can play a significant role as a methane source to the atmosphere.
The Arctic and boreal regions are experiencing a rapid increase in temperature, resulting in a changing cryosphere, increasing human activity and a potential increase in high-latitude methane emissions. Sentinel-5P TROPOMI observations provide an unprecedented coverage of XCH4 in this region compared to previous missions or in situ measurements. We present a systematic comparison of three TROPOMI methane products – the operational product and the scientific SRON and WFMD products – focusing exclusively on high latitudes above 50 degrees North. We evaluate the seasonal coverage over the continuous and discontinuous permafrost regions, reflecting the potential of TROPOMI to inform inversion models on changing emissions in these regions. We also evaluate biases by carrying out comparisons to high-latitude TCCON sites. Although the accuracy and precision of the products are good compared to TCCON, a persistent seasonal bias in TROPOMI XCH4 (high values in spring, low values in autumn) is found for all satellite products. We make an effort to distinguish and analyse the albedo effects from snow cover and the changes in the CH4 profile shape caused by high-altitude depletion of methane in the polar vortex. Comparisons to atmospheric profile measurements with AirCore carried out in Northern Finland support the analysis and help validate the prior profiles used in the retrievals. We also present a comparison of inverse model results from CarbonTracker CTE-CH4, which show that these seasonal biases may have a significant impact on the fluxes. Moreover, we directly compare regional patterns in XCH4 for all three TROPOMI products.
We find that the differences in the regional comparisons can be larger than the differences found against ground-based references, which highlights the importance of the availability of several XCH4 products for a more reliable interpretation of spatial patterns, anomalous values and understanding of the potential origin of high-latitude biases, especially when the validation data are very limited.
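Conceptually, the ground-based comparison described above reduces to colocating each satellite retrieval with a nearby TCCON measurement and computing bias statistics on the pairs. The sketch below illustrates that idea with an assumed two-hour colocation window; it is not the validation protocol actually used in the study:

```python
import pandas as pd

def colocated_bias(sat, ref, max_hours=2.0):
    """Pair satellite XCH4 retrievals with nearest-in-time TCCON data.

    sat, ref: DataFrames with columns 'time' (datetime64) and 'xch4' (ppb).
    max_hours is an illustrative colocation window, not an official protocol.
    """
    pairs = pd.merge_asof(
        sat.sort_values("time"), ref.sort_values("time"),
        on="time", direction="nearest",
        tolerance=pd.Timedelta(hours=max_hours),
        suffixes=("_sat", "_ref"),
    ).dropna()
    diff = pairs["xch4_sat"] - pairs["xch4_ref"]
    return {"n": len(diff), "bias": diff.mean(), "sd": diff.std(ddof=1)}
```

Computing such statistics per month would expose the spring-high/autumn-low seasonal bias pattern described above.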
Characterized as a crucial phenomenon of permafrost, the annual freeze-thaw cycle is visibly intensifying as global temperatures increase in the Arctic region. Importantly, thawing permafrost potentially creates positive feedbacks for future climate warming, since frozen soil locks away more than twice as much carbon as currently exists in the atmosphere. Monitoring changes in the thickness of the uppermost layer of the permafrost that undergoes seasonal melting - the active layer thickness - is recognized as indispensable for permafrost status assessment. Previous studies have identified that satellite differential interferometric SAR (DInSAR) has a competitive advantage over in-situ observations in terms of spatial coverage. However, almost all DInSAR applications over permafrost regions suffer from poor coherence caused by complex dynamic ground processes. In this novel study, the performance of three primary interferometric schemes was examined and compared – persistent scatterer (PS), short baseline (SBAS), and intermittent SBAS (ISBAS, now known as APSIS). They were implemented across a northern coastal region of Alaska which includes the town of Utqiagvik (Barrow), using four years of Sentinel-1 acquisitions (2017-2020) during intervals of seasonal thawing (typically May-September). The efficiency of the SBAS scheme for permafrost monitoring at temporal scales ranging from seasonal to decadal has been demonstrated in previous studies, unlike the PS or APSIS schemes. The potential causes of unwanted decorrelation (e.g. atmospheric delay, soil moisture content, precipitation) were scrutinized, as was performance across the different tundra landscapes in the study area. Clear seasonal spatial and temporal patterns of ground deformation associated with permafrost thaw were seen in the DInSAR results, which were consistent across the three schemes - and with GPS ground-station results.
Although this study is the first application of the APSIS scheme for Arctic permafrost monitoring, the results show that APSIS provided the best performance in both accuracy and spatial coverage. Across the three-year span 2017-2019 the average ground subsidence velocity was 1 mm/year. However, inclusion of the fourth year (2017-2020) increased this to a remarkable 4 mm/year. We associate this latter result with rapid permafrost thaw due to the extraordinarily high temperatures seen across the Arctic in 2020 (tied with 2016 for the warmest year on record).
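The average velocities quoted above correspond, in the simplest terms, to the linear trend of a displacement time series over the chosen span. A deliberately minimal sketch of that step (real DInSAR processing would weight by coherence and separate seasonal from long-term motion first):

```python
import numpy as np

def subsidence_velocity(t_years, displacement_mm):
    """Least-squares linear trend (mm/yr) of a displacement time series.

    t_years: acquisition times in decimal years; displacement_mm:
    line-of-sight displacement relative to the first acquisition.
    """
    slope, _ = np.polyfit(np.asarray(t_years, float),
                          np.asarray(displacement_mm, float), 1)
    return slope                       # negative slope = subsidence (mm/yr)
```

Fitting the same series over 2017-2019 and over 2017-2020 would yield the two different velocities reported above when the final year contains accelerated motion.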
The permafrost region contains about twice the amount of carbon as Earth’s atmosphere. Under the ongoing accelerated Arctic warming, permafrost is expected to increasingly thaw, leading to its decomposition and the release of greenhouse gases, and in particular methane, which has a global warming potential about 80 times that of CO2 over a 20-year period. This thawing of permafrost and release of greenhouse gases is exacerbated by the increasing frequency and intensity of wildfires at high latitudes. Coastal subsea permafrost regions as well as reservoirs of methane hydrates in the Arctic could also contribute to the release of additional methane into the atmosphere. The new EU Arctic policy released in October 2021 recognizes the urgency to improve our knowledge of these processes.
Our understanding of methane emissions from the Arctic has been very limited due to the sparsity of in-situ measurements, the lack of data from passive backscatter SWIR (Short-Wave InfraRed) instruments during the polar night, and general retrieval issues arising from high solar zenith angles, frequent cloudiness, and the difficulty of retrievals over dark surfaces like sea and snow. So far, no clear trends could be established for the Arctic regions based on satellite measurements, and it is urgent to understand whether this is due to instrument limitations, and whether methane emissions will increase in the upcoming years. Specifically, our knowledge of emissions during the winter period needs to be improved.
In this context, the MEthane Lidar missioN (MERLIN), which has been proposed and co-funded by DLR and CNES, could be of interest. The mission is currently in its phase D, with a launch readiness foreseen for 2027. MERLIN will employ an IPDA (Integrated Path Differential Absorption) nadir-viewing Lidar instrument in a near-polar sun-synchronous orbit to actively measure the column-weighted dry-air mixing ratio of methane (XCH4). This mission will enable detection of methane in all seasons and latitudes during day and night, and it has a low sensitivity to thin cirrus clouds and aerosol scattering. We will present plans for investigating opportunities enabled by MERLIN and the foreseen airborne campaigns with the demonstrator CHARM-F onboard the HALO aircraft to improve total column methane retrieval over the Arctic and boreal regions, including issues over coastal areas, lakes, and snow, and the potential for measurements under broken cloud conditions. In the context of determining surface fluxes, the role of global and Arctic regional inverse modeling, measurements from ground stations, and SMOS freeze/thaw data for constraining the winter period fluxes will also be discussed.
ESA DUE GlobPermafrost (2016-2018) and ESA CCI+ Permafrost (2018-2021) focus on the processing of ready-to-use data products derived from remote sensing data that support permafrost-related research. Within the first funding period a wide range of GlobPermafrost remote sensing products were processed: Landsat multispectral index trends (Tasseled Cap Brightness, Greenness, Wetness; Normalized Difference Vegetation Index, NDVI), Arctic land cover (e.g., shrub height, vegetation composition), lake ice grounding, InSAR-based land surface deformation, and rock glacier velocities. Additionally, spatially distributed permafrost model output with permafrost probability and ground temperature per pixel was developed. The focus of ESA DUE projects is to ensure that all data products processed meet user requirements. To make products visible we established WebGIS projects within maps@awi (http://maps.awi.de), a highly scalable data visualisation unit within AWI’s data workflow framework O2A (from Observation to Archive). GIS services were created and designed using ArcGIS for Desktop (latest version) and published as Web Map Services (WMS), an internationally standardized format of the Open Geospatial Consortium (OGC), using ArcGIS for Server. The project-specific data WMS as well as a resolution-specific background map WMS are embedded into a GIS viewer application based on Leaflet, an open-source JavaScript library. We thereby developed project-specific visualisations of raster and vector data products adapted to the products’ specific spatial scales and resolutions. This resulted in an ‘Arctic’ WebGIS visualising circum-Arctic products, as well as small-scale regional WebGIS projects like ‘Alps’, ‘Andes’ or ‘Central Asia’ that visualize higher spatial resolution products such as rock glacier movements.
The GIS viewer application was adapted to interlink all GlobPermafrost WebGIS projects, and especially to enable their direct accessibility via the GlobPermafrost Overview WebGIS.
Besides the remote sensing derived data products, the locations of the WMO GCOS ground-monitoring networks of the permafrost community, the Global Terrestrial Network for Permafrost (GTN-P) managed by the International Permafrost Association (IPA), were added as a feature layer. All resulting ESA GlobPermafrost WebGIS projects were presented at several user workshops and conferences and were continuously adapted in close interaction with the IPA. The GlobPermafrost data products are already DOI-registered and archived in the data archive PANGAEA provided by AWI.
Within the framework of the ESA CCI+ Permafrost project a new WebGIS was added: a time-series WebGIS comprising the CCI+ Permafrost circum-Arctic model output for Mean Annual Ground Temperature (MAGT), Permafrost Extent and Probability (PEX), and Active Layer Thickness (ALT) for a period of more than twenty years. All data products are available at yearly resolution, together with calculated averages of MAGT, PEX and ALT over the time series. The new time-series WebGIS builds on the time-series visualization capabilities already developed in-house within the technical WebGIS infrastructure maps@awi at AWI.
Climate change is severely affecting the Northern high latitudes, with many drastic changes expected. Efficient monitoring systems to detect and quantify such changes are essential to assess their impact on future global climate trajectories. Two primary methods for monitoring atmospheric carbon are in situ observations, e.g. atmospheric towers, and satellite remote sensing. In this study we perform a series of case studies to assess the capabilities of in situ towers and of both passive and active space-based missions to detect deviations from currently observed emission patterns, particularly signals associated with expected disturbance processes.
This signal detection study follows a 3-step approach: a ground truth is generated by transporting known fluxes in a 4D atmospheric transport model, from which synthetic observations are generated and signal detection limits are computed. These observations as well as the baseline nature runs are produced using the Goddard Earth Observing System model (GEOS). To simulate tower measurements, time series are extracted for single grid cells, Gaussian noise is added to these synthetic observations within the measurement precision defined by the WMO, and a range of transport errors is tested. Two satellite measurement techniques are modelled: an active sensor using an integrated-path differential absorption lidar (based on the future DLR/CNES mission MERLIN), and a passive sensor using a wide-swath nadir-viewing imaging spectrometer (based on TROPOMI on S5P). Here, in addition to random errors related to measurement precision, biases due to seasonality, latitudinal gradient, albedo, aerosol load, surface pressure and topography are considered.
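The tower-simulation step described above can be sketched as follows. The default precision echoes the WMO/GAW network compatibility goal of about 2 ppb for CH4, but the function, its defaults and the noise model are illustrative assumptions rather than the study's actual code:

```python
import numpy as np

def synthetic_tower_obs(true_ch4_ppb, precision_ppb=2.0, transport_err_ppb=0.0, seed=0):
    """Turn a model CH4 time series into synthetic tower observations.

    true_ch4_ppb: mole fractions extracted from the grid cell containing the
    tower. precision_ppb: Gaussian measurement noise (default is an assumed
    value inspired by the WMO compatibility goal). transport_err_ppb: optional
    extra noise standing in for transport-model error.
    """
    rng = np.random.default_rng(seed)
    truth = np.asarray(true_ch4_ppb, dtype=float)
    obs = truth + rng.normal(0.0, precision_ppb, truth.shape)
    if transport_err_ppb > 0:
        obs += rng.normal(0.0, transport_err_ppb, truth.shape)
    return obs
```

Sweeping `transport_err_ppb` over a range of values corresponds to the transport-error sensitivity test mentioned above.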
We examine two disturbance scenarios: the first simulates enhanced methane release from expected Yedoma thaw; the second models enhanced methane fluxes from Arctic Ocean shelf ebullition. We use a variety of signal detection metrics to differentiate between a baseline case and the disturbance scenario runs, including varying levels of signal amplification. We compare the ability of tower and satellite measurements to detect high-latitude methane changes and find that despite having errors an order of magnitude higher than ground-based measurements, satellite measurements, especially MERLIN, have similar realistic detection limits while granting superior spatial and often temporal coverage.
Arctic permafrost lowlands are wetlands often characterized by overall high methane emissions during summer. However, at the local scale high methane emissions are usually linked to specific land cover (LC) classes while other classes have very low emissions or none at all. Therefore, a detailed characterization of the vegetation composition of such lowlands and their heterogeneous mosaic of LC types and associated methane fluxes is necessary to quantify overall landscape-scale methane fluxes. In addition, ongoing climate change in Arctic lowlands impacts the drying or wetting of classes and results in either gradual shifts between classes or abrupt changes due to disturbances such as shore expansion or lake drainage. These changes are expected to affect the methane budget of Arctic permafrost landscapes. A crucial uncertainty for future carbon cycle projections is the quantitative understanding of the magnitude and speed of such changes affecting the methane cycle of Arctic permafrost regions. We here describe a new approach for methane emission upscaling for the Arctic Lena Delta building on a remote sensing-based, dynamic LC classification taking into account gradual and abrupt LC changes for the period 2000-2020.
The Lena Delta (72.0–73.8° N, 122.0–129.5° E) is the largest Arctic river delta (~29,000 km2) and is underlain by continuous permafrost. The Lena Delta has been a focus area for German-Russian research on methane from Arctic permafrost landscapes for the last 25 years, and a wide range of observational methane datasets has been collected. In prior research, a static LC classification based on a 30 m resolution multispectral Landsat-7 ETM+ image mosaic, composed of three summer images from July 2000 and 2001, provided a first insight into delta-wide LC classes and associated methane fluxes (Schneider et al., 2009). Since then, the rapid growth in remote sensing resources and processing capabilities as well as another decade of methane field data collection opens the opportunity for an enhanced quantification of LC change and its effects on landscape-scale methane fluxes in the Lena Delta.
The lowland tundra landscapes of the delta are divided into three major geomorphological units: the first terrace comprises the modern and Holocene delta floodplains; the second terrace comprises a Pleistocene fluvial deposition area in the NW part of the delta, which is largely fluvially inactive; and the third terrace comprises Pleistocene ice-rich Yedoma permafrost uplands and deeply incised thermokarst lakes and basins.
Our LC classification approach consists of two main steps: 1) development of a static LC classification using the rich land cover training data available for the central Lena Delta region, and 2) development of a dynamic multi-temporal LC classification based on the knowledge from the static LC map in combination with remote sensing time series data for the 2000 to 2020 period across the entire delta.
First, we performed a static LC classification for the central Lena Delta, building an initial classification on training data of Elementary Sampling Units (ESUs) that included i) 30 x 30 m vegetation plots from field work in summer 2018 and ii) additional ESUs assigned from comprehensive field knowledge gathered during numerous Russian-German expeditions to the central Lena Delta. This first robust LC classification was used to train a classifier on 10 m resolution Sentinel-2 satellite data from summer 2018 aggregated in Google Earth Engine (GEE). The LC classes were optimised to capture classes defined by landscape wetness and vegetation types, with the goal of upscaling field-observed methane fluxes from these classes. LC classes are also linked to low and high disturbance regimes in different terraces and landscape settings in the Lena Delta, allowing a grouping into several main classes that can be associated with rates of carbon cycling. For example, classes with a high disturbance regime tend to experience higher carbon accumulation rates and faster cycling of above-ground carbon. In total, 13 classes were differentiated (11 vegetated classes, 1 water class, 1 barren sand class) for the static LC map.
Second, based on the static LC classification we further extended the classification scheme to a dynamic model using additional satellite data for 2000 to 2020 (Sentinel-2, and Landsat-5, -7 and -8) to assess and characterize dynamic LC changes at 30 m resolution in annual and 5-year periods (2000-2005, 2006-2010, 2011-2015, 2016-2020). A flexible GEE classification pipeline was used to allow for dynamic classification schemes. Multi-sensor composite medoid mosaics were derived from cloud-free summer (July-August) imagery for the different time periods under assessment. Input for the classification were the visible, near-, short-wave and mid-infrared multispectral bands, the maximum NDVI, and elevation data from the 2m resolution Arctic DEM. Training data was derived from the static 2018 LC classification and then a random forest classifier was applied for the classification.
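Schematically, the classification step above can be reproduced with scikit-learn standing in for the GEE random forest; the feature set and class count mirror the description (multispectral bands, maximum NDVI, ArcticDEM elevation; 13 classes), but all data here are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_train, n_features, n_classes = 500, 8, 13   # e.g. 6 bands + max NDVI + elevation

# Training samples: feature vectors with labels drawn from the static LC map
X_train = rng.random((n_train, n_features))
y_train = rng.integers(0, n_classes, n_train)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Apply the trained classifier to the pixels of one multi-sensor composite
X_pixels = rng.random((1000, n_features))
lc_map = clf.predict(X_pixels)                # one of 13 LC classes per pixel
```

Running the same trained classifier over the composites of each 5-year period yields the dynamic, comparable LC maps described above.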
In addition to these dynamic LC components, which also provide 13 classes, we further applied stratification to distinguish selected classes important for methane emissions that were spectrally difficult to differentiate with the multispectral input data from Landsat and Sentinel-2. This includes (1) the differentiation, using an ArcticDEM-based extraction of basin landforms, of an additional class of wet polygonal tundra in drained thermokarst lake basins that was spectrally similar (but functionally different) to wet polygonal tundra on Yedoma uplands of the third terrace, (2) the differentiation of lentic water bodies into lakes of different size categories, building on the experience of the Boreal-Arctic Wetland and Lake Database (BAWLD) (Olefeldt et al., 2021), and (3) the differentiation of lotic water bodies into deep and shallow delta channels according to their water depth as detected by winter Sentinel-1 SAR data (Juhls et al., 2021).
Our 20-year time series indicates a partial reduction in wet polygonal LC classes in the Lena Delta, which is particularly visible in the central Lena Delta on the ground ice-rich Yedoma upland surfaces of the third terrace. Here, the class “Polygonal tundra complexes with up to 50 % surface water” decreased in area (-4%) and shifted partially to the classes “Polygonal tundra complexes with up to 20 % surface water” and “Polygonal tundra complexes with up to 10 % surface water”, suggesting enhanced drainage possibly associated with ice wedge degradation or drying due to general warming. Some other classes experienced minor decreases, such as “dwarf shrub-herb communities” representative of the drier vegetation communities on the second terrace (-1%), while others increased in area, such as “dry grass to wet sedge complex” (+1%) and “wet sedge complex” (+0.7%). Overall, LC trends between the four observation periods from 2000 to 2020 were rather subtle and continuous between neighboring classes. Abrupt changes were identified only at local scales, where for example the drainage of some larger lakes caused abrupt class shifts or where shore erosion of ice-rich bluffs along delta channels caused large but fairly gradual class shifts. In comparison to the overall LC dynamics and LC changes, these abrupt changes have so far played only a minor role in the change of LC class areas in the Arctic Lena Delta.
Overall, the dynamic LC remote sensing approach provides a first continuous 20-year observation of LC classes and their shifts in an Arctic delta and proves valuable for assessing LC changes. Attribution of methane observational data to individual classes and a quantification of changes in methane fluxes is work in progress and will be presented at the time of the conference.
References:
Schneider J, Grosse G, Wagner D (2009): Land cover classification of tundra environments in the Arctic Lena Delta based on Landsat 7 ETM+ data and its application for upscaling of methane emissions. Remote Sensing of Environment, 113: 380-391. doi: 10.1016/j.rse.2008.10.013.
Juhls, B., Antonova, S., Angelopoulos, M., Bobrov, N., Langer, M., Maksimov, G., ... & Overduin, P. P. (2021). Serpentine (floating) ice channels and their interaction with riverbed permafrost in the Lena River Delta, Russia. Frontiers in Earth Science, 9. https://doi.org/10.3389/feart.2021.689941
Olefeldt, D., Hovemyr, M., Kuhn, M. A., Bastviken, D., Bohn, T. J., Connolly, J., Crill, P., Euskirchen, E. S., Finkelstein, S. A., Genet, H., Grosse, G., Harris, L. I., Heffernan, L., Helbig, M., Hugelius, G., Hutchins, R., Juutinen, S., Lara, M. J., Malhotra, A., Manies, K., McGuire, A. D., Natali, S. M., O'Donnell, J. A., Parmentier, F.-J. W., Räsänen, A., Schädel, C., Sonnentag, O., Strack, M., Tank, S. E., Treat, C., Varner, R. K., Virtanen, T., Warren, R. K., and Watts, J. D.: The Boreal–Arctic Wetland and Lake Dataset (BAWLD), Earth Syst. Sci. Data, 13, 5127–5149, https://doi.org/10.5194/essd-13-5127-2021, 2021.
In 2019, the Spanish National Institute of Aerospace Technology (INTA) acquired the high-resolution Chlorophyll Fluorescence sensor (CFL) to join the European scientific community involved in the retrieval of solar-induced chlorophyll fluorescence (SIF) using remote sensing techniques. INTA’s Airborne Hyperspectral System, which has actively participated in airborne hyperspectral campaigns since 1995 with the already existing Airborne Hyperspectral Scanner (AHS) and Compact Airborne Spectrographic Imager (CASI 1500i), has been notably improved with the incorporation of the CFL.
The CFL is one of the newest hyperspectral sensors of the HYPERSPEC® family from Headwall Photonics Inc. It is a pushbroom sensor with an angular field of view of 23.5° and 1600 across-track spatial pixels. CFL collects image data across the SIF emission spectrum from 670 nm to 780 nm. The spectral design uses a very narrow passband sampled with up to 2160 spectral pixels to ensure a spectral resolution below 0.2 nm FWHM. Spatial and spectral binning can reduce the number of pixels by up to a factor of 4.
The radiometric and spectral characterization as well as the calibration of the CFL sensor are periodically performed at INTA’s facilities. The CFL processing chain, which is mainly developed in the R programming language, is continuously updated to generate L1 (georeferenced at-sensor radiance) and L2 products (georeferenced ground reflectance and top-of-canopy fluorescence). Imagery orthorectification and atmospheric correction are performed by an in-house set of toolboxes based on Applanix data on board INTA’s aircraft and on the libRadtran radiative transfer code, respectively. The spectral fitting method is used for the retrieval of SIF.
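The spectral fitting method mentioned above can be illustrated with a minimal linear least-squares sketch around the O2-A band, modelling the at-sensor radiance as reflected irradiance plus an additive fluorescence term. The polynomial order, fitting window and synthetic spectra below are illustrative assumptions, not INTA's actual implementation:

```python
import numpy as np

def sfm_retrieve(wl, E, L, poly_order=2):
    """Fit L(lambda) = r(lambda)*E(lambda)/pi + F with r a polynomial and F constant.

    Returns the retrieved fluorescence F and the reflectance coefficients.
    """
    x = wl - wl.mean()                         # centre wavelengths for conditioning
    A = np.column_stack([(x ** k) * E / np.pi for k in range(poly_order + 1)]
                        + [np.ones_like(wl)])  # last column carries F
    coeffs, *_ = np.linalg.lstsq(A, L, rcond=None)
    return coeffs[-1], coeffs[:-1]

# Synthetic spectrum in the O2-A window: flat reflectance 0.3, constant
# fluorescence of 1.5 (radiance units), with an absorption dip in E
wl = np.linspace(759.0, 768.0, 200)
E = 80.0 * (1.0 - 0.9 * np.exp(-0.5 * ((wl - 760.6) / 0.4) ** 2))
L = 0.3 * E / np.pi + 1.5
F, refl = sfm_retrieve(wl, E, L)               # F recovers ~1.5
```

The deep absorption feature in E is what makes the constant fluorescence term separable from the smooth reflected signal.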
With the acquisition of the CFL, INTA’s Airborne Hyperspectral System is now suitable for projects related to SIF retrieval and the upcoming ESA Earth Explorer FLEX mission. The capability of the system has been demonstrated through participation in the L3 and L4 advanced products for the FLEX-S3 mission project (FLEXL3L4 project, Spanish State Plan for Scientific Research and Innovation), in which INTA leads the calibration and validation (CalVal) part. In the framework of the FLEXL3L4 project, a ground-based and airborne campaign was carried out at the experimental agricultural site of Las Tiesas, Barrax, Spain. Multiscale comprehensive observations of radiometric (mainly fluorescence) and biophysical parameters of several crops across the Las Tiesas experimental site were acquired in two hyperspectral flight campaigns. Additionally, during the airborne overpasses, simultaneous top-of-canopy (FLOX, Piccolo system, ASD) and leaf-level (Fluowat) measurements were performed over different land use types.
The geometric and radiometric performance of the CFL L1 and L2 products has been evaluated for the first time using the in-situ measurements from the described field campaign. Furthermore, a first capability assessment of the CFL sensor for FLEX CalVal is reported.
Solar-induced fluorescence (SIF) is known to correlate with gross primary productivity (GPP) (Frankenberg et al., 2011; Guanter et al., 2012; Sun et al., 2018). Although this correlation is not linear (Dechant et al., 2020), it might be used to enhance the accuracy of global carbon cycle assessments and thus to improve currently available dynamic vegetation models. Remotely-sensed SIF has been used to assess temporal dynamics of photosynthesis across different biomes (Köhler et al., 2018; Magney et al., 2020; Walther et al., 2016), but its application is especially useful for evergreen-dominated ecosystems. In ecosystems like Boreal or Mediterranean forests, the applicability of conventional reflectance-based indices is strongly limited (Garbulsky et al., 2011) due to the prevalence of evergreen vegetation (Magney et al., 2019). Although SIF carries the potential to enhance our capacity to follow photosynthetic dynamics of evergreen forests and has been widely implemented to do so, interpretation of SIF in terms of GPP remains challenging. This is because of insufficient knowledge of which mechanisms underlie the spatial and temporal variation of SIF, and how.
Understanding how biochemical, morphological, structural, and photosynthetic factors affect the SIF–photosynthesis relationship is essential to interpret SIF in terms of GPP (Porcar-Castell et al., 2021). However, because these factors vary in space and time, investigating their effect on SIF is difficult. Fortunately, this knowledge gap can be conveniently addressed at leaf level. Working at leaf level enables retrieval of the full-range, continuous chlorophyll fluorescence (ChlF) spectrum of approximately 650–850 nm (Lichtenthaler & Rinderle, 1988), in contrast to the narrow Fraunhofer absorption bands in which SIF is retrieved (Meroni et al., 2009). Consequently, the effect of factors such as chlorophyll content (Buschmann, 2007), which influence not only the ChlF level but also the ChlF spectral shape, can be investigated. Moreover, the ChlF–photosynthesis relationship is easier to interpret at leaf level, because it is not complicated by structural factors such as canopy architecture (Kim et al., 2021). Consequently, interpretation of SIF in terms of photosynthetic dynamics at larger scales depends on our understanding of how various factors affect the ChlF–photosynthesis relationship at leaf level (Magney et al., 2020; Raczka et al., 2019).
We investigated how biochemical, morphological, and photosynthetic factors affect leaf-level ChlF, both its magnitude and its spectral shape, across leaves of different species and in response to different growth light environments. We measured the full-range leaf-level chlorophyll fluorescence spectrum simultaneously with chlorophyll and carotenoid content, specific leaf area, and photochemical and non-photochemical quenching, for unstressed leaves of 20 species characteristic of boreal and Mediterranean ecosystems. Data were acquired during three measuring campaigns in Finland (boreal forest in 2017, Helsinki city in 2019) and in Spain (2018). Importantly, the majority of species were sampled from two canopy heights, representing contrasting light environments. The location-specific light environments were estimated using digital hemispherical photography (Hemisfer®, WSL, Birmensdorf, Switzerland; Rajewicz et al., 2022, in review).
Results of our study imply that the relationship between ChlF and photosynthesis is affected by biochemical, morphological, and physiological factors that vary between species, light environments, and biomes. Interestingly, these factors might depend on the light environment in a different manner, or to a different extent, in boreal versus Mediterranean ecosystems. Therefore, we suggest that background information on biochemical and morphological differences between leaves within a single ecosystem, or between ecosystems, might enhance the interpretation of ChlF in terms of photosynthesis. That enhancement has, in turn, the potential to support more accurate interpretation of SIF in terms of GPP dynamics and thus might have important implications for current and future carbon cycle studies.
References:
Buschmann, C. (2007). Variability and application of the chlorophyll fluorescence emission ratio red/far-red of leaves. Photosynthesis Research, 92(2), 261–271.
Dechant, B., Ryu, Y., Badgley, G., Zeng, Y., Berry, J. A., Zhang, Y., Goulas, Y., Li, Z., Zhang, Q., & Kang, M. (2020). Canopy structure explains the relationship between photosynthesis and sun-induced chlorophyll fluorescence in crops. Remote Sensing of Environment, 241, 111733.
Frankenberg, C., Fisher, J. B., Worden, J., Badgley, G., Saatchi, S. S., Lee, J., Toon, G. C., Butz, A., Jung, M., & Kuze, A. (2011). New global observations of the terrestrial carbon cycle from GOSAT: Patterns of plant fluorescence with gross primary productivity. Geophysical Research Letters, 38(17).
Garbulsky, M. F., Peñuelas, J., Gamon, J., Inoue, Y., & Filella, I. (2011). The photochemical reflectance index (PRI) and the remote sensing of leaf, canopy and ecosystem radiation use efficiencies: A review and meta-analysis. Remote Sensing of Environment, 115(2), 281–297.
Guanter, L., Frankenberg, C., Dudhia, A., Lewis, P. E., Gómez-Dans, J., Kuze, A., Suto, H., & Grainger, R. G. (2012). Retrieval and global assessment of terrestrial chlorophyll fluorescence from GOSAT space measurements. Remote Sensing of Environment, 121, 236–251.
Köhler, P., Frankenberg, C., Magney, T. S., Guanter, L., Joiner, J., & Landgraf, J. (2018). Global retrievals of solar-induced chlorophyll fluorescence with TROPOMI: First results and intersensor comparison to OCO-2. Geophysical Research Letters, 45(19), 10,456-10,463.
Kim, J., Ryu, Y., Dechant, B., Lee, H., Kim, H. S., Kornfeld, A., & Berry, J. A. (2021). Solar-induced chlorophyll fluorescence is non-linearly related to canopy photosynthesis in a temperate evergreen needleleaf forest during the fall transition. Remote Sensing of Environment, 258, 112362.
Lichtenthaler, H. K., & Rinderle, U. (1988). The role of chlorophyll fluorescence in the detection of stress conditions in plants. CRC Critical Reviews in Analytical Chemistry, 19(sup1), S29–S85.
Magney, T. S., Barnes, M. L., & Yang, X. (2020). On the covariation of chlorophyll fluorescence and photosynthesis across scales. Geophysical Research Letters, 47(23), e2020GL091098.
Magney, T. S., Bowling, D. R., Logan, B. A., Grossmann, K., Stutz, J., Blanken, P. D., Burns, S. P., Cheng, R., Garcia, M. A., & Köhler, P. (2019). Mechanistic evidence for tracking the seasonality of photosynthesis with solar-induced fluorescence. Proceedings of the National Academy of Sciences, 116(24), 11640–11645.
Meroni, M., Rossini, M., Guanter, L., Alonso, L., Rascher, U., Colombo, R., & Moreno, J. (2009). Remote sensing of solar-induced chlorophyll fluorescence: Review of methods and applications. Remote Sensing of Environment, 113(10), 2037–2051.
Porcar-Castell, A., Malenovský, Z., Magney, T., Van Wittenberghe, S., Fernández-Marín, B., Maignan, F., Zhang, Y., Maseyk, K., Atherton, J., & Albert, L. P. (2021). Chlorophyll a fluorescence illuminates a path connecting plant molecular biology to Earth-system science. Nature Plants, 7(8), 998–1009.
Raczka, B., Porcar-Castell, A., Magney, T., Lee, J. E., Köhler, P., Frankenberg, C., Grossmann, K., Logan, B. A., Stutz, J., & Blanken, P. D. (2019). Sustained nonphotochemical quenching shapes the seasonal pattern of solar-induced fluorescence at a high-elevation evergreen forest. Journal of Geophysical Research: Biogeosciences, 124(7), 2005–2020.
Sun, Y., Frankenberg, C., Jung, M., Joiner, J., Guanter, L., Köhler, P., & Magney, T. (2018). Overview of Solar-Induced chlorophyll Fluorescence (SIF) from the Orbiting Carbon Observatory-2: Retrieval, cross-mission comparison, and global monitoring for GPP. Remote Sensing of Environment, 209, 808–823.
Walther, S., Voigt, M., Thum, T., Gonsamo, A., Zhang, Y., Köhler, P., Jung, M., Varlagin, A., & Guanter, L. (2016). Satellite chlorophyll fluorescence measurements reveal large‐scale decoupling of photosynthesis and greenness dynamics in boreal evergreen forests. Global Change Biology, 22(9), 2979–2996.
All life on Earth depends on the availability of water. Climate change and wasteful consumption threaten to limit access to water for a large part of the population. Inefficient water management makes agriculture one of the activities that contribute most to this alarming situation. The need for new ideas to improve water-use efficiency therefore grows constantly, which implies the use of remote sensing (RS) techniques to cover large areas. Reflectance-based RS products, such as vegetation indices, have shown low sensitivity for detecting the effects of water limitation on vegetation before the stress has impacted canopy structural properties. Thermal information is more closely related to water stress in plants, but it is also affected by factors not related to soil water limitation, e.g. wind speed and humidity. Recently, the use of sun-induced chlorophyll fluorescence (SIF) for water stress assessment has gained interest, since it is directly related to the photosynthetic activity that dynamically responds to limitations in water availability. Nevertheless, it is not yet clear how the spatial relation between SIF and soil water content behaves for specific vegetation and soil characteristics. Therefore, in the present study we analyzed the link between airborne SIF and geophysics-based plant available water (PAW) in the root zone of three crops (winter wheat, non-irrigated summer sugar beet and irrigated potato) during three growing seasons (2018, 2019 and 2020). We found a strong positive correlation (r = 0.92; p < 0.01) when water was a limiting factor, i.e., in the non-irrigated summer crop (sugar beet). The relation disappeared when the level of PAW was sufficient to meet the crop's water needs, i.e. in irrigated crops or in years with precipitation events (25 l m⁻²) accumulated a few days before data acquisition.
The unclear pattern in the relation between winter wheat and PAW might be explained by the advanced growth stage of winter wheat (ripening), when variations in SIF might be influenced by other physiological processes, such as chlorophyll degradation, rather than by PAW in the root zone. Moreover, our study reports for the first time the expected response of SIF to a low-PAW zone in the spatial and temporal domains, compared with the enhanced vegetation index (EVI) and the surface temperature, respectively. The presented results contribute to the development of new methodologies for more efficient water use by providing new insights into the role of SIF for real-time assessment of crop water stress. Besides, the current availability of global SIF and soil moisture satellite datasets, such as the TROPOspheric Monitoring Instrument (TROPOMI) SIF and Soil Moisture Active/Passive (SMAP) products, respectively, enables further analysis to improve our understanding of the SIF-soil water content relation at larger scales. A brief insight into this relation will be presented using the example of the European heat wave in summer 2018. For this event, the relationship between SIF and soil moisture over forests was characterized by high soil water content and low SIF values, while croplands showed the opposite trend.
Solar-induced chlorophyll fluorescence (SIF) is an optical signal that can track plant functional status under natural illumination conditions. Because SIF competes with photochemical and non-photochemical energy dissipation processes, it can reflect the dynamic regulation of photosynthesis in the field. SIF retrieval has been made possible by the development of high-spectral-resolution spectrometers and the use of solar and telluric atmospheric absorption features.
Although SIF has been measured from leaf to landscape scale using a variety of instruments and platforms (i.e. towers, drones, aircraft, and satellites), approaches for scaling fluorescence from leaf to canopy are still under investigation. The fluorescence emitted at leaf level differs from the at-sensor fluorescence due to atmospheric and canopy effects. Furthermore, reliable retrieval of SIF is challenging because the SIF signal is mixed with radiance reflected from plants and contributes only 0.5-5% to the apparent reflectance. Thus, validating and evaluating the quality of SIF and connecting SIF across scales is necessary, especially for satellite products, e.g. from the planned Fluorescence Explorer (FLEX) mission.
In the validation process, understanding the propagation of SIF from leaves to the top of the canopy is one of the most important steps. In an attempt to close this gap, we developed HyScreen, a ground-based line-scan hyperspectral imaging system that measures SIF and vegetation indices at canopy scale with high spatial resolution. The resolution reaches 1-1.5 mm when the system is placed 1 m above the canopy, allowing individual leaves to be distinguished. This provides a unique opportunity to characterize vegetation structure, for instance to discriminate between shaded and sunlit leaves.
HyScreen consists of two sensors: the FLUO module (FWHM of 0.36-0.41 nm) to measure SIF, and the VNIR module (FWHM of 2.4-4.4 nm) to calculate reflectance and vegetation indices. For HyScreen data processing, an in-house processing chain was developed to retrieve SIF (in the O₂A and O₂B bands) as well as vegetation indices. It includes the radiometric and spectral characterization of both the FLUO and VNIR modules, as well as the determination of top-of-canopy upwelling and downwelling radiance.
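The abstract does not specify which retrieval algorithm the in-house processing chain uses. As a minimal, hypothetical illustration of the general principle, the classic Fraunhofer Line Discrimination (FLD) method separates fluorescence from reflected radiance using one wavelength inside and one outside an O₂ absorption band (all numbers below are synthetic, not HyScreen data):

```python
def fld_sif(e_in, e_out, l_in, l_out):
    """Standard FLD retrieval: solve L = r*E + F at two wavelengths,
    inside (in) and outside (out) an O2 absorption band, assuming the
    reflectance factor r and fluorescence F are spectrally flat."""
    return (e_out * l_in - e_in * l_out) / (e_out - e_in)

# Synthetic example: r = 0.05 sr^-1, true F = 2 mW m^-2 nm^-1 sr^-1
e_out, e_in = 100.0, 20.0                      # downwelling irradiance
r, f_true = 0.05, 2.0
l_out = r * e_out + f_true                     # upwelling radiance, outside band
l_in = r * e_in + f_true                       # upwelling radiance, inside band
print(fld_sif(e_in, e_out, l_in, l_out))       # → 2.0
```

More advanced variants (e.g. 3FLD or spectral-fitting methods) relax the flat-spectrum assumption, but the two-band form above captures the core idea of exploiting the absorption-band depth.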
In this study, to evaluate the performance of the HyScreen system, fluorescent targets (banana and weeping fig leaves) and non-fluorescent targets (soil, peat and reference panels) were measured under clear-sky conditions. Additionally, two soybean genotypes with different chlorophyll contents were measured to investigate the system performance when retrieving fluorescence from a complex structure. Non-fluorescent targets showed fluorescence values close to zero, while SIF of sunlit vegetation targets ranged from 1.96 to 4.62 mW m⁻² nm⁻¹ sr⁻¹ at O₂A and from 1.13 to 4.24 mW m⁻² nm⁻¹ sr⁻¹ at O₂B. The soybean variety with lower chlorophyll content ('Minngold', 1.19 and 1.42 mW m⁻² nm⁻¹ sr⁻¹ at O₂A and O₂B) yielded higher SIF values than the darker variety ('Eiko', 0.49 and 0.76 mW m⁻² nm⁻¹ sr⁻¹ at O₂A and O₂B). Furthermore, for both soybean varieties, SIF values of sunlit leaves were higher than those of shaded leaves: for 'Minngold' the difference was 0.21 and 1.40 mW m⁻² nm⁻¹ sr⁻¹ at O₂A and O₂B, respectively, and for 'Eiko' it was 0.21 and 1.71 mW m⁻² nm⁻¹ sr⁻¹. At the same time, the two varieties showed different SIF ratios (O₂A SIF divided by O₂B SIF): 0.83 and 0.65 for 'Minngold' and 'Eiko', respectively.
In conclusion, HyScreen is a proximal measurement system that measures SIF and vegetation indices at a small distance above the canopy. It is capable of capturing the spatial heterogeneity and structural parameters of a single plant. Therefore, HyScreen-retrieved SIF and vegetation indices can be used to investigate the influence of canopy structure on canopy-level SIF measurements. Moreover, these data have the potential to serve as ground validation data for larger-scale SIF products recorded by drones or aircraft.
Agriculture has to guarantee food security for a constantly growing population by increasing crop productivity with minimal environmental impact. Remote sensing (RS) for large-scale vegetation assessment is one of the most important tools to address this challenge. For years, the implementation of RS techniques for crop assessment has been based mainly on reflectance-based information, e.g. vegetation indices (VIs), which indicate crop stress only after its effect has impacted plant structural properties. The use of sun-induced chlorophyll fluorescence (SIF) may allow earlier crop stress detection, since SIF is directly related to photosynthetic activity and can therefore reveal subtle (pre-visual) changes in vegetation functioning. RS of SIF has gained the interest of researchers thanks to the recent development of algorithms and models to compute SIF from airborne and satellite sensors. The FLuorescence EXplorer (FLEX) satellite mission of the European Space Agency (ESA) will provide SIF data at global scale with a spatial resolution of 300 m. Despite the great value of such data for tracking large-scale vegetation functional dynamics, there is high interest in increasing its resolution to an intra- or inter-field level. Recent studies have addressed this subject using VIs, evapotranspiration and land surface temperature as explanatory variables. Yet a more flexible method, capable of working across multiple ecosystems and spatiotemporal scales, is needed. Our hypothesis is that the versatility of fractal geometry, present in numerous spatial and temporal phenomena in nature, allows fractal approaches to address that need.
With this study, we aim first to evaluate the existence of fractal geometry in the spatial distribution of SIF-emitting objects, based on the presence of the universal power law (PL), and second to evaluate whether the aggregation of the SIF signal in SIF-emitting objects across spatial resolutions is scale-invariant. For that purpose we used airborne SIF data retrieved over a ~60 ha soybean field in Nebraska, USA (summer 2018). The image was resampled from its original resolution of 1.5 m to 5, 10 and 15 m pixel size. The resampled images were segmented into individual objects, and for each object the total SIF (SIFTOT) was calculated. We found: (i) presence of fractal geometry in the distribution of SIFTOT objects, since they followed the PL at all the analyzed scales; and (ii) evidence of scale invariance in the aggregated SIF signal. The second finding was based on the linear increase of the scale factor and the nearly invariant behavior of the dimension factor of the PL equations across spatial resolutions. Both findings constitute a first step towards the use of fractal geometry for SIF downscaling, understood as the fragmentation of coarse-resolution SIF data into the SIFTOT of individual vegetation objects within its footprint. The study described above was accepted for publication as the 'fractal geometry' chapter in the Springer-Nature Encyclopedia of Mathematical Geosciences and was in production at the time of this abstract's submission. Additionally, we investigated possible bi-variate PLs in which a second variable could explain variations in SIFTOT. Interestingly, we found in numerous datasets that the inverse of the (SIF-emitting) object size fits the PL function with SIFTOT at R² > 0.95. This finding opens the possibility of practical SIF-downscaling approaches using fractal theory.
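The scale factor and dimension factor of a power law y = c·xᵏ can be estimated by ordinary least squares in log-log space. The following sketch illustrates the idea on synthetic data; it is a generic textbook fit, not the authors' actual analysis code:

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = c * x**k by least squares in log-log space.
    Returns (c, k): the 'scale factor' c and 'dimension factor' k."""
    k, log_c = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_c), k

# Synthetic SIF_TOT vs. object-rank data following an exact power law
rank = np.arange(1, 101, dtype=float)
sif_tot = 3.0 * rank ** -1.5
c, k = fit_power_law(rank, sif_tot)
print(round(c, 3), round(k, 3))  # ≈ 3.0 -1.5
```

Comparing c and k fitted at each resampled resolution (1.5, 5, 10 and 15 m) is one way to test for the linear scale-factor growth and invariant dimension factor reported above.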
As a fundamental life process, photosynthesis plays a crucial role not only in food security, but also in the water, energy and carbon exchanges between the land and the atmosphere. Due to its direct link to the photosynthetic light reactions, sun-induced fluorescence is often proposed as one of the most promising remote sensing signals for monitoring photosynthesis in space and time.
However, uncertainties remain about its ability to capture the downregulation of photosynthesis under drought or temperature stress. These uncertainties are mainly related to co-occurring morphological (e.g. leaf angle, leaf folding) and phenological (e.g. changes in leaf pigments) changes which affect the optical signal received by the sensor. While fluorescence in the far-red region (F760) is mainly affected by scattering, fluorescence in the red region (F687) is affected by reabsorption. To differentiate morphological/phenological from physiological effects, it is therefore essential to understand these processes and their influence on red and far-red fluorescence under stress conditions.
We will present results of a mesocosm water manipulation experiment conducted before and during the first heat wave of 2019 (June to July) in Antwerp, Belgium. In five out of 15 mesocosms, drought was induced in Solanum tuberosum (potato) plants. Under clear-sky conditions, we conducted nearly simultaneous measurements of canopy and leaf F687 and F760 with the FLOX hyperspectral system (JB Hyperspectral Devices GmbH, Düsseldorf, Germany) and the FLUOWAT leaf clip. We analysed the relationship between leaf and canopy measurements of F687 and F760, as well as red and far-red fluorescence yields (FY687 and FY760, respectively), under increasing drought and heat stress. By rotating the mesocosms in 90° steps, we simulated a change in the solar incidence angle and analysed its effects on F687, F760, FY687 and FY760.
Our measurements show the expected positive relationship between leaf and canopy values of F687 and F760. However, when these values are normalized by APAR to derive fluorescence yields, the relationship between leaf and canopy measurements only holds for FY687. We discuss the effect of changing solar incidence angle, explore possible explanations for the poor relationship between leaf and canopy FY760, and analyse the capability of existing correction methods to address the possible scattering effects on F760 and FY760.
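The APAR normalization mentioned above is a simple ratio of emitted fluorescence to absorbed photosynthetically active radiation. A minimal sketch, using made-up numbers rather than values from the experiment:

```python
def fluorescence_yield(f, apar):
    """Fluorescence yield: emitted fluorescence per unit of absorbed
    photosynthetically active radiation (APAR). Units are arbitrary
    here; in practice F and APAR must be in consistent energy units."""
    return f / apar

# Illustrative numbers only (not from the mesocosm experiment):
f687, apar = 1.2, 800.0          # F in mW m-2 nm-1 sr-1, APAR in arbitrary units
print(fluorescence_yield(f687, apar))  # ≈ 0.0015
```

Because the same APAR divides both F687 and F760, any leaf-canopy discrepancy that appears only in FY760 points to wavelength-dependent effects such as scattering rather than to the normalization itself.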
The Earth system science and Earth observation communities are showing great interest in SIF retrieval from space. Consequently, ESA has approved the FLEX mission (an Earth Explorer to observe vegetation fluorescence), to be launched in 2024. FLEX is the first mission concept specifically dedicated to monitoring the 'respiration' of terrestrial vegetation. However, these space-based observations need to be validated, and vegetation fluorescence better understood, to be of societal benefit. This requires SIF measurements to be made near the ground, coincident in the temporal and spatial domains, which in turn requires that measurements from multiple instruments and multiple platforms be directly compared and analysed. Given the extremely small SIF signal involved, all instrument systems require very accurate calibration and characterization to minimise their associated uncertainties, as well as robust, accurate and replicable validation of the calibration in the field after instrument deployment.
A number of approaches have been proposed to validate laboratory radiometric calibration in the field, but these have disadvantages. Some are 'open path' or uncooled, or need to be assembled and disassembled for transport, which can lead to high levels of uncertainty, or are impractical for field use (difficult to manipulate and lacking an independent power supply). Others cannot be used to validate spectroradiometers on different platforms, such as flux towers or UAVs, during field campaigns, or are suitable only for specific fore-optic designs. Furthermore, none of these systems is designed to validate both radiance and irradiance calibration, which is critical for modern dual-field-of-view spectrometers.
Here, we provide a brief overview of a newly designed portable in-field calibration validation (cVal) system. The system comprises a radiometric validation module (cValRad) with a thermal control assembly, a spectral validation module (cValSpec) providing uniform emission from multiple spectral calibration lamps, a portable power bank that powers the system for at least 8 hours (longer than the time required for a validation test in the field), and a control, monitoring and acquisition system. Since validation is related to the degree of reproducibility of the instrument response, all components of the validation system have been characterised, calibrated and validated in the laboratory using high-accuracy spectral and radiometric standards from CETAL, and their capabilities are presented here. Thus, the validation system can provide significant value for the validation of spectrometer systems used in the field.
Uncertainties related to the FLEX Earth Explorer space observations, which will measure the canopy solar-induced chlorophyll fluorescence (SIF) of various vegetation types, can be assessed not only through field and airborne validation activities but also through dedicated computer modelling using modern, physically based radiative transfer models (RTMs). RTMs are highly efficient in evaluating SIF confounding factors that cannot be directly measured in the field (e.g., impacts of forest woody components) and in revealing their importance in the spatial three-dimensional (3D) as well as temporal (diurnal to seasonal) contexts. In this work, we used the 3D Discrete Anisotropic Radiative Transfer (DART) model to analyze the canopy structural impacts of three morphologically contrasting forest types, specifically European beech (Fagus sylvatica), white peppermint (Eucalyptus pulchella) and Norway spruce (Picea abies) stands, on their top-of-canopy (TOC) SIF emissions. While the beech canopy was tall (height of c. 25 m), broadleaf and characterized by a planophile leaf angle distribution (LAD), the peppermint and spruce canopies were middle-sized (height of c. 15 m), narrow-/needle-leaf, with erectophile and spherical LADs, respectively. 3D DART representations of the stands were created from terrestrial laser scans (TLS) of individual trees of the respective species. Each stand had a canopy cover of around 80% and was simulated for three leaf area index (LAI) classes: low (4-5), medium (7-8), and high (10-11). To ensure full comparability of the modelled results, all forest scenarios shared the same field-measured wood/bark and ground optical properties, the same local-noon solar zenith and azimuth angles, and the same atmospheric composition. Leaf optical properties (including SIF emissions) were simulated with the Fluspect-Cx RTM for a constant fluorescence quantum efficiency (fqe) of 0.02305.
DART was set to produce the TOC red (686 nm) and far-red (740 nm) SIF signals (bandwidth of 0.0013 nm), together with 3D SIF radiative budgets (RB) for the two SIF bands, allowing spatial quantification of the SIF balance (emitted minus absorbed SIF) and the omnidirectional SIF escape factor (SIF balance divided by emitted SIF) within individual 20 cm thick vertical canopy layers. The 3D RB was also simulated for a broad spectral band between 400 and 750 nm, used to calculate vertical canopy profiles of the fraction of photosynthetically active radiation absorbed by green foliar elements (fAPARgreen).
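The per-layer quantities defined above follow directly from the definitions of SIF balance and escape factor. The sketch below computes them for a hypothetical stack of 20 cm layers; the values are illustrative only, not DART output:

```python
import numpy as np

def escape_factor(emitted, absorbed):
    """Omnidirectional SIF escape factor per canopy layer:
    SIF balance (emitted - absorbed) divided by emitted SIF."""
    emitted = np.asarray(emitted, dtype=float)
    balance = emitted - np.asarray(absorbed, dtype=float)
    return balance / emitted

# Hypothetical 20 cm layers, top of canopy first (illustrative values)
emitted  = np.array([2.0, 1.5, 1.0, 0.5])   # SIF emitted per layer
absorbed = np.array([1.0, 0.9, 0.8, 0.45])  # SIF re-absorbed within the canopy
print(escape_factor(emitted, absorbed))     # per-layer factors ≈ 0.5, 0.4, 0.2, 0.1
```

In this toy profile the escape factor decreases with depth, mirroring the reported result that SIF escapes predominantly from the upper canopy.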
Results revealed that the red SIF of all three species and LAI settings was strongly driven by LAD functions. Erectophile foliage of the peppermint canopies allowed for a higher red SIF scattering and reabsorption, resulting in the lowest red TOC SIF signal. The narrow needle-leaf shape and shoot structure of spruce foliage caused the lowest TOC far-red SIF values across all species and LAI categories. Virtual removal of woody elements (trunks, branches, and twigs) from the DART simulations enabled us to compute the impact of wood shadowing on fAPARgreen, and the wood interactions/obstructions of both red and far-red SIF photons. The largest wood-triggered fAPARgreen decrease was found for spruce stands (45-55%), whereas the decreases in beech and peppermint canopies were much less prominent (10-25%). Similarly, significant wood obstructions, computed as a relative difference between nadir TOC SIF escape factors from canopies with and without wooden parts, appeared for far-red SIF of spruce stands (SIF decrease by 35-45%). A smaller SIF reducing impact (5-25%) quantified for beech and peppermint stands suggests that wood structures introduce more potential uncertainty into far-red SIF TOC observations for coniferous than for broadleaf trees. Interestingly, we found that wood elements of the two broadleaf species did not obstruct but boosted the TOC red SIF signal by 1-3%. Further examination of the 3D DART SIF balance profiles indicated that this SIF increasing wood/bark effect took place in the top 20% of investigated broadleaf canopies. In addition, we found that SIF is escaping predominantly from the top 50% of all simulated forest stands, with the relative omnidirectional escape factor increasing from 0.1 to 0.5 with the increasing forest height. These results suggest that the forest ground Cal/Val undertakings should focus on the upper halves of monitored canopies. Nevertheless, some local exceptions may occur. 
For instance, contributions of lower vertical layers of up to 0.1 W m⁻² nm⁻¹ were noted when modelling red SIF of beech canopies.
Our results demonstrate that state-of-the-art radiative transfer modelling is ready to be included in future FLEX mission Cal/Val activities alongside field and air-/space-borne measurements. The inclusion of RTM inputs as variables of interest would allow RTMs to be used as efficient tools for revealing potential uncertainties of FLEX SIF products, especially when these cannot be measured experimentally.
Passive microwave sensors have long been invaluable for atmospheric sounding due to their ability to penetrate clouds, in contrast to infrared (IR) sounders whose coverage is limited to clear atmospheres. However, unlike IR technology, which already provides hyperspectral sensors in widespread use for atmospheric observations, the majority of existing microwave satellites utilize only a small number of channels, limiting the amount of information that can be retrieved about the atmospheric column. The ESA-funded High Spectral Resolution Airborne Microwave Sounder (HiSRAMS) project explores the advantages of novel hyperspectral capabilities in the microwave region, with the goal of demonstrating improvements in the retrieval accuracy of temperature and humidity profiles and evaluating the technology's potential for deployment in future satellite missions. Hyperspectral microwave measurements have the potential to improve the accuracy of NWP models as well as the spectroscopic parameterizations of microwave absorption models.
HiSRAMS is a first-of-its-kind system developed by Omnisys Instruments in collaboration with the National Research Council Canada (NRC) and McGill University. The sounder, capable of measuring horizontally and vertically polarized radiances in the 60 GHz oxygen and 183 GHz water vapour bands at 305 kHz native resolution, exploits polyphase FFT filter-bank technology. The system also has cross-track scanning capability within a 12-degree range around nadir and zenith, and allows great flexibility in measurement mode selection, including choice of polarization, frequency range, and scanning regime.
This compact airborne prototype has undergone initial flight tests onboard the NRC Convair-580, a research aircraft carrying a suite of atmospheric probes for complementary in-situ and remote sensing measurements. For simplicity, the data collection focussed primarily on sampling in clear air conditions and over lake surfaces in North America. In this presentation, we provide an overview of HiSRAMS specifications and show results of first airborne radiation closure tests against synthetic brightness temperature spectra simulated using in-situ pressure, temperature and humidity data.
Snow and ice properties control physical and biological processes on polar ice sheets and mountain glaciers, and strongly affect the net solar radiation that regulates melt processes and the associated impacts on sea level rise. The amount of solar radiation absorbed by the surface increases when ice gets darker, which is mainly caused by liquid water and small light-absorbing particles (LAP) such as algae, soot, and dust accumulating on the surface and reducing its brightness. Thus, a quantitative mapping of snow and ice properties on a global scale is of particular importance, as it provides a valuable input to climate models and helps to understand the underlying processes. A new generation of orbital imaging spectrometers provides the technical prerequisites to achieve this objective and will deliver high-resolution data both on a global scale and on a daily basis, which calls for independently applicable retrieval algorithms. We present a novel method to retrieve grain size, liquid water content, and LAP mass mixing ratio from spaceborne imaging spectroscopy acquisitions. The methodology relies on accurate simulations of both the scattering and absorptive properties of snow and ice and uses a joint retrieval of atmosphere and surface components based on optimal estimation (OE). This inversion technique leverages prior knowledge obtained from simulations with a snow and ice radiative transfer model and enables a rigorous quantification of retrieval uncertainties and posterior error correlation. For this purpose, we exploit statistical relationships between surface reflectance spectra and snow and ice properties to estimate their most probable quantities given the reflectance. To test this new algorithm, we conduct a sensitivity analysis based on top-of-atmosphere radiance spectra simulated for the upcoming EnMAP orbital imaging spectroscopy mission, demonstrating accurate estimation of snow and ice surface properties.
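For a linear forward model, the OE (maximum a posteriori) retrieval mentioned above has a closed form combining the prior and the measurement according to their covariances. The sketch below is a generic textbook formulation, not the authors' implementation; all matrices and values are illustrative:

```python
import numpy as np

def oe_retrieval(y, K, x_a, S_a, S_e):
    """Linear optimal-estimation retrieval.
    y: measurement, K: Jacobian/forward operator, x_a: prior mean,
    S_a: prior covariance, S_e: measurement-error covariance.
    Returns the posterior mean x_hat and posterior covariance S_hat."""
    S_e_inv = np.linalg.inv(S_e)
    S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))
    x_hat = x_a + S_hat @ K.T @ S_e_inv @ (y - K @ x_a)
    return x_hat, S_hat

# Toy 1-D example: prior 0 +/- 1, measurement 2 +/- 1, identity forward model
y, K = np.array([2.0]), np.eye(1)
x_a, S_a, S_e = np.zeros(1), np.eye(1), np.eye(1)
x_hat, S_hat = oe_retrieval(y, K, x_a, S_a, S_e)
print(x_hat, S_hat)  # posterior mean 1.0 (halfway), posterior variance 0.5
```

The posterior covariance S_hat is what enables the rigorous uncertainty quantification and posterior error correlation described in the abstract; in the nonlinear case the same update is applied iteratively with a relinearized K.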
A validation experiment using in-situ measurements of glacier algae mass mixing ratio and surface reflectance from the Greenland Ice Sheet gives uncertainties of ±16.4 μg/g(ice) and less than 3%, respectively. Finally, we evaluate the potential of the presented algorithm for a robust global product that maps snow and ice surface properties corrected for latitudinal and topographic biases including a rigorous quantification of uncertainties.
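The optimal-estimation inversion described above can be sketched as a Gauss-Newton update in which the measurement and the prior are weighed against each other, with the posterior covariance quantifying retrieval uncertainty. Everything below (state vector, forward model, covariances) is an illustrative placeholder, not the authors' actual retrieval setup:

```python
import numpy as np

def oe_update(x, y, forward, jacobian, x_a, S_a, S_e):
    """One Gauss-Newton step of an optimal-estimation retrieval.

    x: current state (e.g. grain size, liquid water, LAP mixing ratio)
    y: measured spectrum; x_a, S_a: prior mean and covariance;
    S_e: measurement-noise covariance. Returns the updated state and the
    posterior covariance that quantifies retrieval uncertainty.
    """
    K = jacobian(x)                                  # Jacobian dF/dx
    S_e_inv, S_a_inv = np.linalg.inv(S_e), np.linalg.inv(S_a)
    S_post = np.linalg.inv(K.T @ S_e_inv @ K + S_a_inv)
    x_new = x + S_post @ (K.T @ S_e_inv @ (y - forward(x))
                          - S_a_inv @ (x - x_a))
    return x_new, S_post

# Toy linear forward model: 3 "bands", 2 state elements (hypothetical numbers)
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.7]])
x_true = np.array([2.0, 1.0])
y = A @ x_true                                       # noise-free measurement
x_hat, S_post = oe_update(np.zeros(2), y, lambda x: A @ x, lambda x: A,
                          x_a=np.zeros(2), S_a=np.eye(2) * 1e6,
                          S_e=np.eye(3) * 1e-6)      # weak prior, low noise
```

For a linear forward model and a weak prior, a single step recovers the least-squares state; in the study, a snow and ice radiative transfer model would take the place of `forward` and `jacobian`.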
Analysis-Ready Data (ARD) from Hyperspectral Sensors—The Design of the EnMAP L2A Land Product
Martin Bachmann, Kevin Alonso, Emiliano Carmona, Birgit Gerasch, Martin Habermeyer, Stefanie Holzwarth, Harald Krawczyk, Maximilian Langheinrich, David Marshall, Miguel Pato, Nicole Pinnel, Raquel de los Reyes, Mathias Schneider, Peter Schwind and Tobias Storch
German Aerospace Center (DLR), Earth Observation Center (EOC), Germany
Corresponding author: martin.bachmann@dlr.de
Abstract
With the increasing availability of data from research-oriented spaceborne hyperspectral sensors such as EnMAP, DESIS and PRISMA, and in order to prepare for the upcoming global hyperspectral mapping missions CHIME and SBG, the provision of well-characterized analysis-ready hyperspectral data is of increasing interest.
Within this presentation, the design of the EnMAP Level 2A Land product is illustrated, highlighting the necessary processing steps for CEOS Analysis Ready Data for Land (CARD4L) compliant data products. This includes an overview of the design of the metadata and quality layer. The main focus is set on the necessary pre-processing chain, as well as the resulting challenges of these procedures.
The processing of the archived raw L0 data to L1B user products includes the radiometric calibration to Top-of-Atmosphere (TOA) Radiance, an advanced approach for the interpolation of defective pixels, and the correction of non-linearity, straylight and other sensor-related effects. Also, the L1B product is spectrally fully referenced, taking spectral smile into account if required.
Next, for generating the L1C products, the orthorectification also includes the co-registration to a Sentinel-2 global master image, and uses the COPERNICUS DEM (GLO-30). With these design considerations, a high relative geometric consistency between EnMAP and Sentinel-2 data is ensured which enables an easy integration in multi-sensorial time-series.
Finally, the atmospheric correction to L2A products allows for the generation of a “land” product (Bottom-Of-Atmosphere (BOA) reflectance) as well as two “water” products (BOA water-leaving reflectance and BOA subsurface irradiance reflectance). The L2A water algorithm is based on the Module Inversion Program (MIP) by EOMAP, and the EnMAP L2A land processor is based on DLR’s PACO (Python-Based Atmospheric Correction). PACO is a descendant of the well-known ATCOR and is also implemented as the L2A processor within the DESIS ground segment. Because of this heritage, its advantages and shortcomings are well understood, and its good overall performance has been demonstrated in many comparison studies.
The full set of quality-related metadata and quality layers generated by the L1C and L2A processors is provided already with the L1B product. This includes per-pixel flags for Land, Water, Cloud, Cloud Shadow, Haze, Cirrus, Snow, and also for Saturation, Artefacts, Interpolation, as well as a per-pixel quality rating. In addition, important quality-related parameters are provided in the metadata, e.g. the percentage of saturated pixels, the scene mean Aerosol Optical Thickness and Water Vapor content, as well as the RMS error of the geolocation based on independent check points.
Thanks to this operational approach, the end user of EnMAP will be provided with ARD products including rich metadata and quality information, which can readily be integrated in analysis workflows, and combined with data from other sensors.
Accurate and thematically detailed map products representing the intra-annual distribution of vegetation cover are crucial for a variety of environmental applications, e.g., for monitoring ecosystem disturbances, productivity or health. Open archives of dense temporal multispectral Landsat and Sentinel-2 data together with powerful processing workflows have significantly advanced the intra-annual analysis of vegetation cover during the past decade. With recent and upcoming scientific (e.g. PRISMA, EnMAP) and operational (e.g. CHIME, SBG) spaceborne imaging spectroscopy missions, multitemporal hyperspectral data will complement these multispectral archives. The high spectral information content is expected to facilitate more detailed and quantitative vegetation analyses. However, a well-founded understanding of the benefits of spaceborne hyperspectral data compared to multispectral satellite data is still missing. Moreover, generalized processing workflows that optimally exploit the rich spectral information content are required for an automated production of vegetation cover maps with regular intra-annual update cycles.
This study presents a processing workflow for generating standardized, intra-annual vegetation fraction maps from EnMAP data. The workflow comprises the development of a combined multi-date and multi-site spectral library, synthetic training data generation from the combined spectral library and subsequent regression-based unmixing for fractional cover estimation based on the synthetic training data. The workflow was tested on simulated EnMAP data derived from AVIRIS-Classic imagery with regional coverage over three study sites in California. Imagery for each site was acquired in spring, summer and fall 2013, thus representing the intra-annual distribution of vegetation cover during the different key phenological phases of a year. The study sites comprised a variety of different natural and semi-natural Mediterranean-type ecoregions with diverse vegetation assemblages and ecotones, i.e. transitions between grasslands, shrublands and woodlands/forests.
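The synthetic-training and regression-based unmixing idea in this workflow can be sketched roughly as follows; the random spectral library and the random forest regressor are stand-ins for the actual library and models, and all names and numbers are hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical spectral library: a few pure spectra per class (n_spectra x n_bands)
library = {
    "tree":  rng.uniform(0.25, 0.50, (5, 20)),
    "shrub": rng.uniform(0.10, 0.30, (5, 20)),
    "soil":  rng.uniform(0.40, 0.65, (5, 20)),
}
classes = list(library)

def synthesize_mixtures(n=500):
    """Linearly mix randomly drawn library spectra with Dirichlet fractions."""
    spectra, fractions = [], []
    for _ in range(n):
        f = rng.dirichlet(np.ones(len(classes)))
        endmembers = np.stack(
            [library[c][rng.integers(len(library[c]))] for c in classes])
        spectra.append(f @ endmembers)   # synthetic mixed spectrum
        fractions.append(f)
    return np.asarray(spectra), np.asarray(fractions)

X_train, F_train = synthesize_mixtures()

# Regression-based unmixing: one regressor per class maps a spectrum to that
# class's cover fraction (random forest used here purely as an example)
tree_fraction_model = RandomForestRegressor(
    n_estimators=50, random_state=0).fit(X_train, F_train[:, 0])
```

Drawing endmembers from a multi-date, multi-site library, as the study does, is what allows a single model per class to generalize across sites and acquisition dates.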
Results demonstrated the utility of our regression-based unmixing workflow for producing accurate, intra-annual fraction cover maps for needleleaf trees, broadleaf trees, shrubs, herbaceous vegetation and non-vegetation. Average Mean Absolute Errors (MAE) over all classes were below 13% and class-wise MAEs were between 3 and 20%. Compared to discrete classification maps representing the dominant class per pixel, our vegetation cover fraction maps provided a more realistic representation of the ecoregions and ecotones. This particularly applied to areas comprising sparse vegetation cover (e.g., arid shrublands), multiple vegetation assemblages (e.g., open canopy woodlands or mixed forest) or vegetation cover at different successional stages (e.g., recovery on formerly disturbed areas). The use of a combined multi-date and multi-site spectral library enabled generalized unmixing models, i.e., single models per class that were applied across all sites and dates. No loss in map quality was found when compared to site- and date-specific unmixing models, indicating the great value of such combined libraries for model generalization. Relative comparison to multispectral data revealed the superiority of hyperspectral EnMAP data, particularly for disentangling the fraction cover of different woody-vegetation types.
Our study exploited simulated imagery representative of the hyperspectral EnMAP mission. Due to the generalizing capabilities of our workflow, we are confident that the approach can be similarly applied to forthcoming operational spaceborne imaging spectroscopy data from the CHIME or SBG missions. Given the diversity of vegetation cover within the analyzed ecoregions and ecotones, our study sites depict a representative cross-section of structurally similar natural and semi-natural ecosystems globally. We therefore conclude that our findings provide a vital stepping stone toward wall-to-wall, intra-annual vegetation cover fraction maps from global spaceborne imaging spectroscopy missions.
Remote sensing over coastal and inland waters is entering a new phase with the launch of the hyperspectral sensors EnMAP, DESIS and PRISMA and with the upcoming PACE mission. It is expected that by exploiting the hyperspectral data, satellite products can be improved and new algorithms developed, advancing water colour remote sensing in these (mostly) optically complex waters. One important step of data processing is the atmospheric correction (AC), which aims to remove the atmospheric, surface and bottom influences from the signal measured by the sensor at the top of the atmosphere, apart from other influences (e.g. adjacency effects). The remaining water signal is the main information used as input to the algorithms and is usually only a small percentage of the total signal measured by the sensor. Thus, the quality of the retrievals strongly depends on a successful AC and on the radiometric stability of the sensors. In preparation for the EnMAP mission, we evaluated the Polymer AC algorithm applied to data from DESIS and PRISMA. Polymer is a spectral matching algorithm in which atmospheric and oceanic signals are obtained simultaneously using the full available spectrum. It is available as a Python package and has been widely applied to ocean colour sensors. In this presentation, we will show first results of Polymer AC applied to Level 1 data of DESIS, PRISMA and the Sentinel-2 MultiSpectral Instrument (S2-MSI) over coastal and inland waters. The Level 2 radiometric and chlorophyll-a (Chl-a) retrievals from the different sensors are intercompared and validated against in situ measurements collected by AERONET-OC stations and field campaigns (Tagus estuary, Lake Constance). First results of Polymer applied to DESIS data at different study regions show a spatial distribution of Chl-a similar to S2-MSI.
The quality and ecological status of inland and coastal waters (ICWs) is a key worldwide issue because of the multiple and conflicting pressures from anthropogenic perturbation and environmental change. Timely monitoring of ICWs is therefore necessary to enhance our understanding of their functions and the drivers impacting them, and to deliver effective management.
Earth Observation (EO) may be used for acquiring timely, frequent synoptic information from local to global scales of ICWs. EO data have been successfully applied for mapping waterbodies for decades, even though the current satellite radiometers are designed for observing the global ocean (e.g. Sentinel-3 OLCI), or land surface (e.g. Sentinel-2 MSI, Landsat 8 OLI, Landsat 9 OLI-2) and not specifically suited for observing processes and phenomena occurring in ICWs. These aquatic ecosystems can be a mixture of optically shallow and optically deep waters, with gradients of clear to turbid and oligotrophic to hypertrophic productive waters and varying bottom visibility with and without the presence of aquatic vegetation (floating or submerged). Deriving ICW quality products from the existing sensors thus remains challenging, due to their optical complexity, as well as the spatial and temporal resolution of the imagery.
PRISMA (PRecursore IperSpettrale della Missione Applicativa), the new hyperspectral satellite sensor of the Italian Space Agency (ASI) in orbit since March 2019, provides data with high spectral resolution and good radiometric sensitivity, able to resolve small changes in the signal relative to the noise of the sensor and the atmosphere (i.e., high radiometric resolution and high signal-to-noise ratio). Moreover, the spatial and spectral resolutions of PRISMA are well suited for the retrieval of multiple biophysical variables, such as optically active water constituents (chlorophyll, suspended and coloured dissolved organic matter) and phycocyanin. The PANDA-WATER project (PRISMA Products AND Applications for inland and coastal WATER), funded by ASI, aims to demonstrate the capabilities of PRISMA hyperspectral imagery for measuring ICWs and to evaluate its suitability and gaps in addressing inland and coastal ecosystem science and management challenges.
The overall objective of PANDA-WATER is to provide a set of innovative and validated products, derived from imaging spectrometry, that enables the retrieval of additional variables of interest for inland and coastal ecosystems. The novelty of the PANDA-WATER products will stem from the application of state-of-the-art algorithms, adapted to ICWs, to PRISMA’s increased spatial, spectral and radiometric resolution, thus resulting in augmented observational capabilities and lower associated uncertainties compared to the current Copernicus missions. These products will range from more accurate estimates of optically active water constituents to more sophisticated products such as particle size distributions, discrimination of sources of suspended and coloured dissolved matter, water depth, natural or artificial materials floating on the surface, attenuation coefficient and euphotic depth, and the presence of cyanobacteria and harmful algal blooms.
The product development carried out within PANDA-WATER from PRISMA data will also be suitable for the upcoming hyperspectral missions (i.e. DLR EnMAP, Copernicus CHIME, NASA’s PACE and SBG), thus contributing to the global advance in spaceborne imaging spectrometry.
The Italian PRISMA (PRecursore IperSpettrale della Missione Applicativa) satellite mission was launched in March 2019 and is acquiring images on demand over the world. PRISMA is the only operative satellite acquiring hyperspectral data in the spectral range between 402 and 2496 nm (30 m/pixel) with 234 bands and a panchromatic camera (5 m/pixel). Several countries, at different stages of development, are planning or are close to launching similar payloads, such as the Environmental Mapping and Analysis Program (EnMAP) from DLR, the Surface Biology and Geology (SBG) mission from NASA-USGS, and the CHIME mission of ESA. Hyperspectral data collected at VNIR–SWIR wavelengths have been widely reported in the literature for mapping geological outcrops, mineral absorption features occurring within transition metals (i.e., Fe, Mn, Cu, Ni, Cr, etc.), and alteration minerals that display absorption features associated with Mg-OH and Al-OH bonds.
In this preliminary research study, PRISMA potential for geological outcrop mapping was tested using a spectral-based methodology. The steps followed for the PRISMA imagery were: (1) PRISMA L2D data (georeferenced reflectance data, version 2.0.5) were requested and downloaded from the ASI PRISMA website; (2) vegetation and water masking; (3) assessing and defining the main absorption band depths of the geological outcrops’ absorption features in unmasked pixels of the study area; (4) application of spectral classification techniques, i.e., the Continuum Removal Band Depth (CRBD) and the Support Vector Machine (SVM).
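A minimal sketch of the continuum-removal band-depth idea behind step (4), using a straight-line continuum between the feature shoulders instead of a full convex hull, and a synthetic absorption feature rather than PRISMA data:

```python
import numpy as np

def band_depth(wl, refl):
    """Continuum-removed band depth of a single absorption feature.

    A straight-line continuum between the first and last band of the window
    stands in for the full convex-hull continuum. Returns the depth and the
    wavelength of the deepest absorption.
    """
    continuum = np.interp(wl, [wl[0], wl[-1]], [refl[0], refl[-1]])
    cr = refl / continuum              # continuum-removed spectrum
    i = int(np.argmin(cr))             # deepest point of the feature
    return 1.0 - cr[i], wl[i]

# Synthetic Al-OH-like absorption feature centred at 2200 nm
wl = np.linspace(2100.0, 2300.0, 41)
refl = 0.5 - 0.15 * np.exp(-((wl - 2200.0) / 25.0) ** 2)
depth, centre = band_depth(wl, refl)
```

Per-pixel band-depth values computed this way form the input to a CRBD classification; an SVM could likewise be trained on the continuum-removed spectra.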
Field campaigns in the Val di Taro areas and Shadan porphyry gold deposit for collecting ophiolitic and other rock samples were executed in November 2020 and June 2021, respectively. Moreover, laboratory spectroscopy analysis (ASD reflectance data acquisitions), X-ray diffraction (XRD), and scanning electron microscopy (SEM) analyses were performed on the rock samples collected in the study area.
A reasonable agreement between ground spectral measurements, laboratory analysis, and PRISMA data was found; this was also verified by visual field checks in different accessible areas of Val di Taro and by comparing the PRISMA outcrop map with a detailed (1:10000) geological map of the main deposits of the area. In addition, the results show a good correlation between PRISMA and the alteration map of the Shadan deposit.
The contribution of this preliminary study to the remote sensing community is a first evaluation of real PRISMA data quality, in terms of radiometric and spectral accuracy, for geological and mineral mapping, based on suitable ground measurements in a mountain area of the Italian Northern Apennines; it thereby provides information to the Italian Space Agency (ASI) and to interested users on the potential of PRISMA hyperspectral satellite data. However, in this kind of application one must consider: (a) the complex geometries of the acquired surfaces (e.g., the study area is in a fragmented mountain area), which affect the spectral quality (atmospheric correction) and SNR of hyperspectral data; (b) the complex geological outcrops composed of different mineral assemblages, which make their spectral identification and recognition challenging.
The method presented here could reduce time and effort in the field, because it leads to an effective mapping of geological outcrops, thus facilitating the exploration of the Earth’s surface at different scales and on a variety of platforms, while in no way replacing the field geologist. It could be a valuable aid for selecting geological and mineral outcrops in an initial survey, and a first step toward filling the gap in knowledge of surface geological outcrops, including exposures of naturally occurring asbestos-bearing rocks. This is also relevant in view of a green energy transition that requires, among other things, cost-effective, socially acceptable, and rapid exploration methods to map and preserve existing deposits and to discover new ones.
Keywords: Hyperspectral satellite data, geological outcrops mapping, asbestos minerals, quartz-carbonate alteration, potassic alteration, propylitic alteration, sericite, PRISMA
Olive tree cultivation (Olea europaea) and olive oil production have accompanied humankind since time immemorial. Throughout the various civilizations to the present day, olive trees and olive oil have occupied a central role in the agricultural scenery and income of Mediterranean countries and in their commerce with neighboring populations. Globally, olive oil production has tripled in the last 60 years, reaching 3.2 Mt in 2019/2020, of which almost 90% is produced by Mediterranean countries, the main producers being Spain and Italy. In Italy, olive oil production reached 331 kt in 2019/2020, of which 5% was produced in Tuscany, where olive tree cultivation is one of the main agricultural activities. According to the latest collected data, in 2020 the total cultivated area of olive trees covered approximately 90,000 ha, contributing a total production of 117 kt of olives, 15 kt of olive oil and a relative value of almost 130 million euros.
In the last 15 years, the total cultivated area of olive trees in Tuscany has decreased by about 6%. Of the total surface reported in 2021, 11% has been declared as non-productive. These data may highlight a trend of abandonment of olive cultivation, which might depend on various factors, such as the increasing economic interest in cheaper seed oils or the occurrence of adverse climatic conditions that threaten olive production, as happened this year, and demotivate smallholder farmers from investing in olive cultivation. The abandonment of olive trees does not only have an economic drawback for the region; it also leads to a phytosanitary emergency, as unmanaged olive yards might become an outbreak origin for disease propagation.
The Regional Administration has therefore shown increasing interest in monitoring land use across the territory and in detecting the abandonment of olive cultivation, in order to develop an efficient plan for land requalification and/or reconversion and to deliver accurate financial support to local farmers.
To monitor land use and classify crops, remote sensing plays a key role taking advantage of the aerial and satellite imagery available today.
To date, the existing methods used by the Region for land monitoring, which are based on photointerpretation of high-resolution airborne imagery, do not address the problem accurately, as the available datasets are inaccurate when it comes to defining the realistic extent of cultivated olive areas, and even more so when identifying abandoned yards. It is widely recognized that olive tree monitoring is complex, as olive plants are perennial and the management of the crop and of the underlying soil cover might vary from farm to farm. To overcome this challenge, it is fundamental to build stronger classification models that combine high-resolution imagery with both historical time series and detailed spectral signatures.
Within this framework, PRISMA hyperspectral satellite imagery delivered by ASI is tested as a tool to retrieve and classify olive cultivations and the probability of land abandonment, also in combination with long-term multispectral datasets from other satellite missions and with a high-resolution aircraft dataset to be used for validation.
For the development of the model, we combine the following datasets:
• high resolution airborne visible images acquired in 2019
• high resolution hyperspectral airborne (HySpex sensor) data acquired in 2020
• multispectral reflectance time series obtained from Sentinel-2 from 2019 to 2021
• reflectance spectra obtained by PRISMA hyperspectral images from 2019 to 2021
• multispectral reflectance time series obtained by Landsat 7-5 from 1984 to 2021
Airborne visible images at high resolution (15 cm) are used for visual ground truthing and for the construction of a library of about 200 olive cultivated areas in the study region (Grosseto, Tuscany, IT). Long-term reflectance time series from Landsat and Sentinel-2 are used to retrieve the temporal signature, including both the phenological variability and the long-term changes associated with gradual land use changes such as land abandonment. Airborne hyperspectral data acquired at the same time as a PRISMA scene are used to investigate the different spectral signatures of individual olive trees and of soil under different managements, at a spatial resolution (1.5 m VIS-NIR and 3 m SWIR) capable of distinguishing the two contributions to the spectra.
The classification methods are based on a random forest and a pattern recognition artificial neural network framework, combining both the spectral and the temporal variability in pixel-based mode. Our results provide relevant information on:
1. seasonal trends with distinguished specific patterns for grassy-covered and soil management practices (tillage etc.)
2. a multi-year trend of vegetation growth for abandoned olive trees, under no maintenance or management
3. olive tree and soil spectral signatures spatial and temporal variability.
Given the PRISMA spatial resolution (30 m), which necessarily combines both the olive plants and the soil in the same grid cell, we finally investigate the contributions of different types of soil and plants to the average grid-cell signature, highlighting the capabilities and limitations of PRISMA in the detection of this type of mixed landscape, which is also typical of other conservation agriculture and agroforestry practices.
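The blending of canopy and soil within a 30 m grid cell can be sketched with a two-endmember linear model, pixel ~ f*R_olive + (1 - f)*R_soil, and a least-squares estimate of the canopy fraction f. The spectra below are synthetic placeholders, not measured olive or soil signatures:

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands = 30
r_olive = rng.uniform(0.05, 0.45, n_bands)   # hypothetical canopy endmember
r_soil = rng.uniform(0.15, 0.55, n_bands)    # hypothetical soil endmember

def canopy_fraction(pixel, em_canopy, em_soil):
    """Least-squares f for the model pixel ~= f*em_canopy + (1-f)*em_soil."""
    d = em_canopy - em_soil
    f = np.dot(pixel - em_soil, d) / np.dot(d, d)
    return float(np.clip(f, 0.0, 1.0))

# A 30 m grid cell containing 35% olive canopy and 65% soil
pixel = 0.35 * r_olive + 0.65 * r_soil
f_hat = canopy_fraction(pixel, r_olive, r_soil)
```

In practice the soil endmember varies with management (tillage, grass cover), which is why the airborne hyperspectral data at 1.5–3 m resolution are needed to characterize the two contributions separately.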
Hyperspectral imagery has immense potential for oceanography, as it contributes greatly to the understanding of marine ecosystems and provides valuable information about the unique characteristics of different aquatic systems. However, it also brings significant challenges: the high cost of hyperspectral sensors; the difficulty of keeping a reasonable signal-to-noise ratio for the bottom-of-atmosphere reflectance over a narrow spectral band; and large data volumes that require substantial computational resources. Due to these limitations, multispectral missions have long been the primary source of optical remotely sensed data.
The Black Sea coastal waters in the vicinity of the Danube Delta are especially challenging for ocean color remote sensing due to the complex properties of water from different origins: riverine water mixes with marine water, the waters are classified as Case 2, and their non-pigmented particle concentration does not covary in a predictable manner with the chlorophyll-a concentration. Standard bio-optical algorithms often fail to describe the complexity of the Black Sea.
For this type of turbid coastal waters, the combination of both multi- and hyperspectral imagery can contribute greatly to understanding the particularity of complex water basins and provide additional insights about the processes on the sea surface. In this study we compare available PRISMA and Sentinel-2 images in order to better identify the surface signature of riverine water in the Black Sea coastal waters near the Danube Delta. We compare the remote sensing reflectance signal from both sensors, analyze the characteristic reflectance of different coastal regions and types of water, and draw conclusions about the benefits of using hyperspectral images.
PRISMA (PRecursore IperSpettrale della Missione Applicativa) is a pre-operational hyperspectral sensor developed by the Italian Space Agency (ASI). Launched in March 2019, the PRISMA mission is mainly devoted to expert users, such as scientific researchers, Earth Observation private companies and institutional organizations, interested in algorithm implementation, product and application development, as well as environmental mapping and monitoring. In the framework of the PRISCAV project (Scientific CAL/VAL of PRISMA mission), funded by ASI and started in 2019, ground-based and airborne Fiducial Reference Measurements (FRM) simultaneous to PRISMA overpasses over different targets (agriculture, forest, sea, inland and coastal water, snow) were gathered to assess PRISMA radiometric performance.
In this context, an evaluation of remote sensing reflectances (Rrs) derived from the PRISMA hyperspectral imager was performed within the visible and near-infrared range (VNIR) over inland and coastal sites. Sentinel-3 OLCI imagery and above-water in situ reflectance measurements from autonomous hyper- and multispectral radiometer systems were used to evaluate the performance of PRISMA Level-2D (L2D) surface reflectance, a standard product distributed by ASI. PRISMA L2D products were also compared to Rrs data derived from the atmospheric correction tool ACOLITE, adapted for PRISMA processing.
In this study, three optically diverse Italian sites, equipped with fixed positioned autonomous multispectral and hyperspectral radiometer systems, were selected for the comparison: Lake Trasimeno, a shallow and turbid lake in central Italy; the Acqua Alta Oceanographic Tower (AAOT), located 8 nautical miles off the lagoon of Venice in the Adriatic Sea and characterized by clear to moderately sediment dominant waters; and the Oceanographic Observatory (OO), mounted at about 3.3 nautical miles southwest of the island of Lampedusa, where oligotrophic water and stable conditions are present.
At the time of submission, a total of 26 PRISMA images, 30 OLCI L2-Water Full Resolution (WFR) products, and available synchronous in situ measured reflectances were collected for the match-up analysis. Common statistical metrics were used for the quantitative assessment, considering each single site and the combined dataset. The results demonstrated the overall good performance of PRISMA over the range of optical properties that characterize the three investigated waterbodies. Overall, ACOLITE Rrs showed lower uncertainties, better correlation and closer spectral similarity with in situ measurements than PRISMA L2D, especially in the central part of the VNIR, between 450 and 600 nm. Compared with PRISMA L2D, ACOLITE outputs were also more consistent with concurrent OLCI L2-WFR data, resulting in significant improvements over the PRISMA standard products in the blue spectral region.
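The abstract does not state the exact metric set, so the sketch below shows three commonly used match-up statistics for satellite versus in situ Rrs comparisons (mean bias, mean absolute percentage difference, spectral angle); it is an illustration, not the study's implementation:

```python
import numpy as np

def matchup_stats(sat, insitu):
    """Mean bias, mean absolute percentage difference (MAPD, %) and
    spectral angle (degrees) between satellite and in situ spectra."""
    sat, insitu = np.asarray(sat, float), np.asarray(insitu, float)
    bias = float(np.mean(sat - insitu))
    mapd = float(100 * np.mean(np.abs(sat - insitu) / np.abs(insitu)))
    cos = np.dot(sat, insitu) / (np.linalg.norm(sat) * np.linalg.norm(insitu))
    angle = float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
    return bias, mapd, angle

# identical spectra give zero bias, zero MAPD and zero spectral angle
bias, mapd, angle = matchup_stats([0.010, 0.020, 0.015],
                                  [0.010, 0.020, 0.015])
```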
This study, besides representing a key element of the PRISCAV project, will also be relevant for aquatic ecosystem applications with upcoming spaceborne hyperspectral missions, such as the Copernicus Hyperspectral Imaging Mission (CHIME), NASA Surface Biology and Geology (SBG), Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) and the DLR Environmental Mapping and Analysis Program (EnMAP), and for the pre-formulation studies of the PRISMA Second Generation (PSG).
Crop simulation models estimate yield gaps and help determine the underlying Genetic – G, Environment – E, and Management – M (G×E×M) factors impacting yield. Crop models are therefore critical components of agricultural monitoring and early warning systems. Earth observation image data track crop growth and development frequently over large areas, so they are increasingly used to drive or improve the models. The data typically consists of a few spectral broadbands over the optical range (400-2500nm). These bands are too coarse to distinguish many important absorption/reflectance features related to crop yield. Hyperspectral image data on the other hand consists of several narrowbands (< 10nm) that are sensitive to these features, but are in short supply. PRISMA is the first hyperspectral Earth observation platform in nearly 20 years and is a precursor to several upcoming missions.
In this study we provide a first assessment of PRISMA narrowbands in estimating end-of-season crop biomass and yield for four important food crops (corn, rice, soybean, wheat) at key stages of crop development (vegetative, reproductive, maturity). Reference data was collected in a field campaign in 2020. It consisted of 60×60 m survey frames over which dry-weight crop biomass and yield samples were collected. We compared performance against Sentinel-2, which is increasingly used for agricultural monitoring due to its relatively high spatial, spectral (red-edge, near infrared narrowbands), and temporal resolution. The evaluation was performed in two stages. First, we used partial least squares regression (PLSR) to uncover known and unexpected spectral features in the PRISMA data. Second, we used random forest to predict yield with PRISMA and Sentinel-2 data. The PLSR analysis confirmed expected relationships between spectra and crop biomass/yield at the vegetative and reproductive stages. These relationships diminished during maturity, when photosynthesis declines. The PLSR analysis also revealed that narrowbands in the near infrared had less influence on crop yield estimation than anticipated. We suspect unusual data spikes in the near infrared may have been the cause. The PRISMA and Sentinel-2 random forest models were able to estimate end-of-season biomass (R2=0.67, 0.58) and yield (R2=0.62, 0.59) reasonably well. Predictions were strongest at the vegetative and reproductive stages of development. Shortwave infrared narrowbands and red-edge narrowbands were the most important in the PRISMA and Sentinel-2 models, respectively. PRISMA and Sentinel-2 showed clear complementarity in this study, so future work should explore integrating/fusing these two sources of data. The extent of our study and the sample size were relatively small, so additional campaigns should be carried out to confirm the robustness of our results.
The Italian Space Agency (ASI) satellite mission PRISMA (PRecursore IperSpettrale della Missione Applicativa) provides an important opportunity for the advancement of satellite hyperspectral data exploitation in a variety of scientific, commercial, and environmental applications. Within this framework, the ASI ‘PRISMA SCIENZA’ call for proposals is aimed at fostering the scientific exploitation of PRISMA data and, at the same time, improving the satellite hyperspectral remote sensing know-how.
The HYPERHEALTH project has the main objective of developing a monitoring system for outdoor human activities that leverages information extracted from PRISMA data in conjunction with images from other satellite missions as well as ground-based sensor data. The ultimate goal is assessing the environmentally induced risks for human health due to humidity, carbon dioxide, other gases or particles (e.g. pollens, allergens), and excessive ultraviolet (UV) radiation. The HYPERHEALTH system is intended to provide useful information for public safety organizations and, in general, for authorities responsible for ensuring the protection and safety of citizens. Furthermore, by coupling HYPERHEALTH with the development of ad hoc applications, each individual citizen may be given detailed information about environmental conditions, get safety tips, and be warned of the possible risks connected to outdoor activities.
The HYPERHEALTH project will include the following research activities:
1) Development of novel methods that leverage machine learning to estimate atmospheric constituents from PRISMA data, with specific emphasis on water vapor and carbon dioxide columnar contents.
2) Development of novel methodologies based on machine learning and the forward-modelling approach to search, within PRISMA images, for surfaces covered by vegetation species that may be allergenic to humans, with the aim of monitoring their flowering status and, in turn, identifying pollen allergenic risk zones.
3) Analysis and development of techniques to derive albedo data from PRISMA images in order to enable analysis of the UV radiation reflected from the surface.
4) Development of methods to fuse/integrate data and images taken at different spatial/temporal scales, exploiting PRISMA data together with data from Sentinel-2, Sentinel-5P, CAMS and SEVIRI; the goal is to augment the richness of information that can be (jointly) extracted from the data.
5) Analysis and testing of suitable methods for validating HYPERHEALTH system performance.
By bringing methodological advancements to the arena of health-driven environment characterization, the HYPERHEALTH project will have an impact on a variety of fields of interest, such as Air Quality, Natural and Man-Made Hazards, Ecosystem Structure & Composition, and Vegetation & Forestry. Furthermore, the realization of the HYPERHEALTH project will allow for prompt exploitation of its results to safeguard citizens’ health, paving the way towards innovative citizen digital services.
Preliminary results will be presented at the conference.
PRISMA (PRecursore IperSpettrale della Missione Applicativa) is a demonstration spaceborne mission, fully deployed by the Italian Space Agency (ASI). To support the calibration/validation activities of the PRISMA hyperspectral mission, ASI and the National Research Council (CNR) started the PRISCAV project (Scientific CAL/VAL of PRISMA mission) in 2019. The main objective of PRISCAV is the comprehensive characterization of the in-orbit performance of the PRISMA payload in different operational scenarios and the verification of the stability of this performance over time.
To this end, PRISCAV created a network of 12 instrumented sites covering different land-use and surface settings (snow; sea; inland and coastal water; forest and cropland) to obtain independent and traceable in-situ and airborne Fiducial Reference Measurements (FRM) simultaneous with PRISMA acquisitions, in order to assess the required performance of the sensor, data products and processors at the different levels (i.e. Top-of-Atmosphere Level 1 radiances and Bottom-of-Atmosphere Level 2 reflectance standard products).
To date, over 250 PRISMA acquisitions have been collected over the target sites. Ground teams ensured a simultaneous land-use classification and an appropriate atmospheric characterization. This enabled multiscale spectral matching with ground targets and the assessment of key parameters related to the spectral, spatial and radiometric performance of PRISMA over the mission duration so far, as well as their evolution across the different versions of the processors. The results of the PRISCAV project obtained to date are highly promising and in line with the mission requirements over the range of surface properties that characterize the investigated sites, confirming the potential of the PRISMA mission for the development of innovative products and new applications in the field of environmental monitoring and Earth observation in general.
The rich amount of information contained in hyperspectral satellite images can be exploited to generate a land cover/use classification of the vegetation and, in particular, to determine forest fuel. Because such a classification can be statically mapped to so-called “fuel models” (associations between a fuel type and its physical parameters), it is the core of the process of creating a Forest Fire Fuel Map product. Such maps are highly relevant for developing fire hazard maps, running fire propagation models, computing vulnerability maps and planning fuel removal practices.
In the framework of the ASI (Italian Space Agency) project “Sviluppo di Prodotti Iperspettrali Prototipali Evoluti” (Contract ASI N. 2021-7-I.0), a prototype processor based on PRISMA (PRecursore IperSpettrale della Missione Applicativa) imagery has been developed for forest fire fuel mapping.
Currently there is no training dataset detailed and accurate enough to be considered suitable for exploiting the spectral information of satellite hyperspectral sensors and enabling their high discrimination capability. For this reason, two different approaches have been proposed to generate such a training dataset automatically and to make a supervised machine learning approach to forest classification possible.
The first approach relies on an automatic process for refining an existing land cover product, such as the Corine Land Cover. At the European level, the Corine Land Cover, although based on multispectral satellite data, is the most complete land use/land cover classification available, and using it as a reference is a good starting point for creating a dataset to train a Machine Learning (ML) model. However, this kind of layer can be outdated in places, and it is necessary to clean the dataset by removing outliers. For this reason, the exploitation of the Corine Land Cover requires an automatic refinement step to detect and remove outliers, based on spectral matching algorithms and robust model-fitting methods.
The second approach attempts to build a training dataset starting from a few, but very reliable, ground truths. Indeed, reliable ground truth data about forest types are usually insufficient (few and sparse within the area of interest covered by a hyperspectral image footprint) for training machine learning models, so a procedure for exploiting PRISMA images to increase the dataset size has to be developed. In this respect, a newly proposed methodology has been adopted, which grows the dataset with similar pixels for each class by exploiting spectral similarity measures.
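A minimal sketch of this dataset-growing idea (our illustration, not the project's actual procedure): labelled seed pixels are expanded with unlabelled pixels whose spectral angle to the class mean falls below a threshold. All spectra below are synthetic.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral Angle Mapper (SAM) distance between two spectra, in radians."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def grow_training_set(seed_spectra, unlabelled, threshold=0.1):
    """Return indices of unlabelled pixels spectrally similar to the seed class."""
    mean_spectrum = seed_spectra.mean(axis=0)
    angles = np.array([spectral_angle(s, mean_spectrum) for s in unlabelled])
    return np.where(angles < threshold)[0]

rng = np.random.default_rng(1)
# five reliable seed spectra of one class: a smooth ramp plus small noise
seeds = np.linspace(0.2, 0.6, 230) + rng.random((5, 230)) * 0.1
# unlabelled pool: 500 random spectra plus 500 class-like spectra
pool = np.vstack([rng.random((500, 230)),
                  np.linspace(0.2, 0.6, 230) + rng.normal(0, 0.01, (500, 230))])
extra = grow_training_set(seeds, pool, threshold=0.1)
print(f"{len(extra)} pixels added to the class")
```

The threshold trades recall against label noise; in practice it would be tuned per class and per image.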
Once the training dataset has been generated, based on one or both of the approaches above, different ML algorithms (random forest, SVM, etc.) are tested to design and build the model for forest classification.
All the described processes, that is, training set generation, learning and prediction, are executed for each PRISMA image. To this end, the generated training dataset is automatically split into two subsets with different ratios, the first used for training the model and the second for testing, in order to check the generalization capability of the trained model and the quality of the prediction results.
The resulting forest classification map is used to generate the Fire Fuel Map by associating each classified pixel with the standard (Anderson) fuel model representing its proneness to fire. Some attributes directly related to the fuel model class are also provided, such as the fuel load for living and dead components, moisture of extinction, flame height, and propagation rate.
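The classification-to-fuel-model step is essentially a static lookup, which could be sketched as below. The class names and attribute values are placeholders of our own, not the product's actual table; only the Anderson model numbers follow the standard numbering.

```python
# Hypothetical mapping from land-cover class to an Anderson fuel model and
# its attributes (attribute values are illustrative placeholders).
ANDERSON_FUEL_MODELS = {
    "grassland":           {"model": 1, "fuel_load_t_ha": 1.8,
                            "moisture_of_extinction_pct": 12, "flame_height_m": 1.2},
    "mediterranean_shrub": {"model": 4, "fuel_load_t_ha": 12.0,
                            "moisture_of_extinction_pct": 20, "flame_height_m": 6.0},
    "conifer_litter":      {"model": 8, "fuel_load_t_ha": 3.4,
                            "moisture_of_extinction_pct": 30, "flame_height_m": 0.3},
}

def to_fuel_map(classified_pixels):
    """Map each classified pixel to its fuel-model attributes."""
    return [ANDERSON_FUEL_MODELS[c] for c in classified_pixels]

pixels = ["grassland", "conifer_litter", "grassland"]
print([p["model"] for p in to_fuel_map(pixels)])  # → [1, 8, 1]
```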
To assess its performance, the developed algorithm has been tested, with very promising results, on several PRISMA images acquired over Latium and Sardinia (Italy).
The PRISMA (PRecursore IperSpettrale della Missione Applicativa) satellite mission of the Italian Space Agency (ASI) provides hyperspectral Earth observation data, thanks to sensors covering the visible and shortwave infrared (from 400 to 2500 nm) portion of the spectrum (240 bands in total) and a panchromatic camera with higher spatial resolution. The satellite has been operational since 2019, with an expected mission lifetime of 5 years. The PRISMA acquisition strategy is based on user requests and a background mission plan, providing 30 km × 30 km images with 30 m nominal spatial resolution for the hyperspectral data. Its observational capabilities are near-global and depend on the solar illumination conditions.
Imaging spectroscopy has multiple applications, ranging from agriculture and air quality to mineral exploration and ecosystem monitoring, among many others, and recent works have already highlighted the potential of PRISMA data for such purposes, especially in synergy with other missions (such as multispectral optical missions). In this work we focus on the possibility of measuring key biophysical parameters in coastal and oceanic environments. We discuss a novel approach for the simultaneous retrieval of atmospheric and marine parameters, including atmospheric aerosol properties and ocean water characteristics such as chlorophyll concentration and the presence of sediments.
The methodology relies on an inversion based on a fully coupled ocean-atmosphere radiative transfer model, with the aim of providing output maps at high spatial resolution, and it attempts to reduce computational costs by testing different spatial and spectral samplings. The procedure is designed for seamless processing of both open ocean waters and optically complex coastal acquisitions, as the high spatial resolution of PRISMA allows it to capture fine-scale features. Since the retrieval procedure is computationally demanding, novel algorithmic methods based on non-parametric approaches are discussed. These statistical methods have already been applied with success to the retrieval of land biophysical parameters (such as vegetation properties), achieving significant computational efficiency compared to more traditional procedures based on full radiative transfer models, while preserving good accuracy and offering flexibility for extrapolation.
The PRecursore IperSpettrale della Missione Applicativa (PRISMA) is an Italian hyperspectral satellite mission launched in March 2019. The VNIR-SWIR sensors cover the wavelength range from 400 to 2500 nm in ~240 spectral channels, with a spectral resolution of ~12 nm and a spatial resolution of 30 m. PRISMA data offer a fast and cost-effective way to meet the industry's demand for efficient prospecting and mineral exploration techniques. In this study, atmospherically corrected and orthorectified L2D VNIR-SWIR PRISMA product data of mineral deposits in the Iberian Pyrite Belt (IPB) in Spain are evaluated regarding their potential to detect mineral composition variations based on the wavelength shift of the mineral-diagnostic absorption feature. Additionally, the capability of the L2D data to identify iron-hydroxides, sulphates, carbonates and phyllosilicates is investigated. Field-based hyperspectral AisaFENIX imaging data of the Los Frailes open pit in the east of the IPB are used for validation.
The IPB, located in the south of Portugal and Spain, is one of the world’s largest polymetallic massive sulphide complexes, originally containing >1700 Mt of massive sulphides. The massive sulphides are hosted by a Volcano-Sedimentary Complex (VSC) formed in a basinal facies during the Variscan orogeny. The VSC overlies the Phyllite-Quartzite Group and is overlain by the Culm Group. The hydrothermal alteration zonation associated with the massive sulphide ore bodies comprises an inner chlorite-rich zone and a peripheral sericite-rich zone. Other VNIR-SWIR-active minerals such as jarosite, calcite, gypsum, dickite/kaolinite and iron-hydroxides occur in the adjacent rocks.
The analysis of the hyperspectral data is performed with the multi-range spectral feature fit (MRSFF) to detect mineral occurrences and the Wavelength Mapper (ITC, Netherlands) to determine absorption feature depths and absorption feature wavelength shifts. The detection of white mica and its mineral chemistry based on the wavelength position of the Al-OH absorption maximum offers reliable results and is in accordance with the field-based analysis results of the AisaFENIX data. The identification of the Fe-OH absorption feature of chlorite and the corresponding absorption maximum wavelength shift due to Mg substitution in chlorite is less clear and is influenced by higher noise levels in the longer SWIR wavelength ranges of the PRISMA data. The mapping of Fe-bearing minerals such as jarosite and goethite coincides with the field-based mineral analysis results, although the identification is hampered by the residual of the water absorption band at 940 nm. The AisaFENIX data of the Los Frailes open pit show the occurrence of dickite/kaolinite and gypsum only in small areas in the northern pit face. The medium spatial resolution of the PRISMA sensor cannot reliably capture these small-scale occurrences. However, a clear identification of these minerals is expected in areas where they occur over larger extents, as can be seen in Heller Pearlshtien et al. 2021. The calcite identification is also influenced by the decreasing signal-to-noise level in the longer SWIR wavelength region and is therefore challenging.
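The wavelength-shift mapping can be illustrated with a simplified sketch of our own (not the Wavelength Mapper software): continuum-remove a spectrum around the Al-OH feature and locate the wavelength of the deepest absorption (the reflectance minimum), whose position tracks white-mica chemistry. The spectrum below is synthetic.

```python
import numpy as np

def absorption_minimum(wavelengths, reflectance):
    """Continuum removal with a straight hull between the window endpoints,
    then return the wavelength of the deepest absorption."""
    hull = np.interp(wavelengths,
                     [wavelengths[0], wavelengths[-1]],
                     [reflectance[0], reflectance[-1]])
    continuum_removed = reflectance / hull
    return wavelengths[np.argmin(continuum_removed)]

# Synthetic Al-OH feature centred at 2205 nm in a 2150-2250 nm window.
wl = np.linspace(2150, 2250, 101)
spectrum = 0.5 - 0.15 * np.exp(-((wl - 2205.0) ** 2) / (2 * 8.0 ** 2))
print(f"absorption minimum at {absorption_minimum(wl, spectrum):.0f} nm")
```

Applied per pixel, shifts of this minimum (e.g. towards shorter or longer wavelengths) map compositional variation across the scene.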
This study shows the potential of PRISMA L2D data as a fast and cost-effective tool to detect characteristic minerals associated with massive sulphides in the Iberian Pyrite Belt. The results show the capability to identify VNIR-SWIR-active minerals and even subtle wavelength shifts of absorption maxima, which are indicators of changing mineral chemistry. However, the detection is influenced by residuals of atmospheric absorptions and a lower signal-to-noise ratio in the longer SWIR wavelength region.
earthbit PRISMA edition is a desktop software application aimed at quick management and full visualization of Earth Observation data products, with a vertical specialization in the interaction with and manipulation of PRISMA hyperspectral mission products.
The user is given a simple interface enabling straightforward interaction with the data and metadata composing the HDF data files. All the spectral bands can be viewed with one click, and metadata can be searched, interpreted and plotted, while the complexity of the file structure remains transparent. Earthbit also adds functions for data interpretation, such as signature visualization for each product from each band, on-the-fly pixel geolocation on a WGS84 map, metadata overview, and visualization of additional datasets or plotting of vector attributes.
Earthbit’s next release will also include a Python API able to act as a bridge between PRISMA data and standard Python libraries. It will also allow the integration of external plug-ins (Python and C++) and the implementation of interactive processing workflows with real-time display of results.
The earthbit development environment was born as a tool able to manipulate very big EO data sources, such as SAR and hyperspectral images, together with image streams (e.g., live video from drones) in real time. It allows users to create, configure and execute massively parallel processing tasks (specific to satellite imagery or science data) on big datasets by leveraging the power of a proprietary map/reduce framework.
Its Human Machine Interface enables the user to easily interact with algorithms, image data and unstructured metadata, and to exploit the power of heterogeneous computing devices such as modern multi-core CPUs, GPUs and accelerators (FPGAs and ASICs with OpenCL support). These technologies make it possible to reach the following benchmarks:
• Load a ~4 GB image from disk to memory in less than 15 s.
• Create image pyramids on the fly, with in-memory caching of tiles.
• Maximize the use of solid-state disks.
• Execute real-time image filtering at about 400 fps on GPU.
It supports simultaneous visualization of different images that can be navigated in co-registration mode, providing real-time graphical operations on them.
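The on-the-fly pyramid-with-caching idea can be sketched in a few lines (our own minimal illustration, unrelated to earthbit's actual C++/OpenCL internals): each level halves the resolution by box filtering, and computed levels are cached in memory so repeated views are cheap.

```python
import numpy as np
from functools import lru_cache

BASE = np.arange(1024 * 1024, dtype=np.float32).reshape(1024, 1024)

@lru_cache(maxsize=None)          # in-memory caching of computed levels
def pyramid_level(level):
    img = BASE
    for _ in range(level):        # 2x2 box-filter downsampling per level
        img = (img[0::2, 0::2] + img[1::2, 0::2] +
               img[0::2, 1::2] + img[1::2, 1::2]) / 4.0
    return img

print(pyramid_level(3).shape)  # (128, 128)
```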
With earthbit, the user can:
• Load datasets and attributes from hierarchical and generic data files (HDF5, HDF-EOS, TIFF, JPEG)
• Visualize and process big images and datasets
• Execute processing and visualization algorithms on multicore CPUs and discrete GPUs, thanks to a proprietary acceleration engine integrating Khronos OpenGL and OpenCL API for parallel applications.
• Plug in their own image-processing algorithms, exploiting the earthbit SDK features
• Use an editor for Python scripting and product processing, with support for the creation of Python plugins
The earthbit SDK provides dynamic-link libraries for different operating systems (Microsoft® Windows 10 (32-bit & 64-bit), Linux Red Hat, Ubuntu Linux, CentOS 7, Gentoo Linux, Apple® macOS Sierra and Mac OS X) and runs on the following processor architectures: Intel/AMD x86 and x86_64, and ARM ARMv7-A and ARMv8-A.
One of the greatest challenges of the 21st century is climate change, and emissions are one of its biggest drivers. With the ambitious aim for Europe to become the world’s first carbon-neutral continent by 2050 and to make the European Green Deal a reality, it will be crucial to reduce these emissions within the next decade. Here, the energy sector plays a key role, as it accounts for 75% of the emissions. With fossil fuels still supplying more than 70% of Europe’s and 80% of the world’s energy, and the goal of increasing the share of energy consumed in Europe that comes from renewable sources to 40% by 2030, the energy sector must undergo significant changes. Energy system modelling looks for solutions that lead to climate-neutral and cost-effective energy systems by evaluating the current state of energy infrastructures and modelling different change scenarios. The scientific analyses that are needed consider a wide range of evaluation criteria. In addition to assessing land potential, identifying suitable sites, considering environmental parameters, balancing land use interests, and capturing trends and impacts on landscaping, a flexible design of the generation, transportation, redistribution and storage of energy between sectors (gas, electricity, heat and hydrogen) is key for a sustainable implementation in line with climate targets. Therefore, the availability of high-quality and up-to-date data on the existing energy infrastructure is an important component for managing the energy transition, but at the same time one of its greatest challenges, as these datasets are often not (freely) available or are of poor quality (i.e., incomplete, contradictory, inconsistent).
Against this background, satellite-based Earth Observation represents an increasingly valuable resource for closing this gap, as satellite data available today not only have a high spatial resolution of 10 m or finer, but also cover the Earth with a very high temporal resolution. Furthermore, state-of-the-art machine learning (ML) techniques (including deep learning (DL)) have proven to be extremely valuable for a variety of applications in different areas of remote sensing and have shown great potential for analysing large-scale challenges such as urbanization or climate change. However, thus far no approaches have been proposed in the literature to automatically and effectively map energy infrastructure types on an operational basis by combining C-band SAR and optical satellite imagery. Hence, in our study we present a novel and robust system based on state-of-the-art deep neural networks (DNN) for generating accurate maps of single wind turbine (WT) installations in wind power plants by exploiting multi-temporal statistics of EO-based products from Sentinel-1 and Sentinel-2. Specifically, for each pixel we extract temporal statistics (e.g., temporal maximum, minimum, mean, standard deviation) of different S2-based spectral indices (e.g., vegetation index, built-up index, water index, etc.) derived after performing cloud masking, and S1 temporal statistics of the backscattering intensity for different polarizations and pass types. To reduce processing time, only those S1/S2 bands and indices are used in the prediction model generation which have been identified as the most suitable for detecting energy infrastructure types on the basis of the reference data, evaluating different common separability metrics.
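The per-pixel temporal-statistics features can be sketched as follows (our own illustration, with a synthetic NDVI stack standing in for a cloud-masked Sentinel-2 time series; masked dates are NaN and are ignored by the nan-aware reductions).

```python
import numpy as np

rng = np.random.default_rng(3)
# NDVI time series: (n_dates, height, width), NaN where cloud-masked
ndvi = rng.uniform(-0.2, 0.9, (24, 64, 64))
ndvi[rng.random(ndvi.shape) < 0.2] = np.nan   # simulate masked-out observations

# per-pixel temporal statistics, ignoring masked dates
features = np.stack([
    np.nanmax(ndvi, axis=0),
    np.nanmin(ndvi, axis=0),
    np.nanmean(ndvi, axis=0),
    np.nanstd(ndvi, axis=0),
])
print(features.shape)  # (4, 64, 64)
```

The same reduction would be repeated for each selected S2 index and for the S1 backscatter intensities per polarization and pass type, then stacked as input channels for the detector.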
The success of DNNs is greatly influenced by the availability and quality of training data. While there are specific DNN training databases for various applications, there is none for energy infrastructure (i.e., wind, solar, coal power plants) that would allow this task to be carried out at a global scale. Therefore, existing databases on wind turbines are filtered and exploited to manually collect training and validation samples on a global scale. By collecting training data from locations all over the world, the variety of construction characteristics that exist for the different infrastructure types is covered, and hence a robust prediction model and transferability for future global analyses are assured. After the training samples have been identified and labelled, image chips are prepared for the predictors. For mapping the wind turbines, a convolutional neural network (CNN) in object detection mode has been employed, which has the advantage that multiple wind turbines in an image patch can also be detected. This is of particular relevance for the WT detection task, where, given the small scale of the individual installations, it is likely to encounter more than one turbine per image patch. The performance of the individual models has been quantitatively assessed by means of state-of-the-art scoring and evaluation metrics, specifically: accuracy, precision, recall, F1-score, as well as Intersection over Union (IoU) and mean average precision (mAP).
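For reference, the IoU metric used in this evaluation compares a predicted and a ground-truth bounding box (illustrative sketch; boxes are given as (x_min, y_min, x_max, y_max)):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted turbine box partially overlapping a ground-truth box:
print(round(iou((0, 0, 10, 10), (5, 5, 15, 15)), 3))  # → 0.143
```

Detections with IoU above a chosen threshold (commonly 0.5) count as true positives, which is also the basis of the mAP score.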
The proposed system holds great potential, as it allows maps of energy infrastructure to be obtained at higher quality and greater scale with respect to the state of the art (SOA), and it can ideally be employed as a ready-to-use tool for energy modellers.
On April 29th 2021, the Earth Observation (EO) satellite Pléiades Neo 3 was successfully launched. On August 10th, its twin sister, Pléiades Neo 4, joined her in orbit. This marks the entry of European satellites into the 30 cm imagery market. In 2022, Pléiades Neo 5 and 6 will be launched to complete the 4-satellite constellation.
Alright, but what is Pléiades Neo? It consists of 4 EO satellites providing 30 cm optical imagery, entirely funded and operated by Airbus Defence and Space. After more than 30 years of experience in satellite imagery services, it seemed like the logical way forward. However, Pléiades Neo is also the result of a whole new approach in terms of image quality and satellite capability. It has required rethinking the way we design satellites and exploit their services to answer the most demanding requirements in the field of Defence and Security, ensuring the safety of operators and civilians around the world.
Highest precision with massive acquisition
Firstly, Pléiades Neo provides 30 cm native resolution, meaning that the image shot by the satellite is the actual image you receive in terms of resolution. The image therefore provides an incredible amount of detail that does not appear in lower-resolution imagery: for instance, you can tell the difference between light and armoured vehicles, see road markings, marks in the sand and gatherings of people, and distinguish animals and people thanks to their shadows. The geolocation accuracy, which measures the exact placement of an object in an image, is below 5 m CE90. In terms of acquisition capacity, the constellation is able to acquire up to 2 million square kilometres every single day. That is two million square kilometres at 30 cm resolution, fully dedicated to customers, every day.
Introducing intraday revisit
It is also the first time Airbus provides an intra-day revisit capability within the same constellation. Indeed, depending on the incidence angle of the satellite and the latitude of the Area Of Interest (AOI), Pléiades Neo can provide between 2 and 4 revisits per day. In particular, tests conducted over Tripoli, Libya, have shown a minimum of 2 revisits per day and a maximum of 3, providing a total of 64 revisits over 28 days.
Ultimate reactivity tasking and image delivery
Work plans are updated every time a satellite enters S-band contact, i.e. every 25 minutes (an orbit lasts 100 minutes, or 1 h 40 min), or 15 times per day per satellite. This represents around 60 plans uploaded every day at the constellation level.
Work plans are also pooled. This means that when an image is to be collected by one satellite, the related acquisition request is removed from the tasking plans of the other satellites.
These multiple and synchronised work plans per day enable easy handling of last-minute tasking requests (which can be placed up to 15 minutes before S-band contact) as well as integration of the latest weather information, for an improved data collection success rate.
In addition, Airbus Defence and Space’s network of ground receiving stations, enabling all-orbit contact and thus ensuring near real-time performance worldwide and rapid data access, ensures the highest standards in terms of the reactivity of our service.
Images are downlinked at each orbit, automatically processed and quickly delivered to the customer, allowing faster response when facing emergency situations.
New spectral bands
In terms of spectral bands, Pléiades Neo simultaneously acquires the panchromatic channel and 6 multispectral bands, which are:
- Deep Blue
- Blue
- Green
- Red
- Red-Edge
- Near Infrared
Red-Edge and Deep Blue are two additional bands compared to the predecessor Pléiades, unveiling complementary information for vegetation and bathymetry applications, respectively.
Finally, the tasking of a VHR satellite orbiting 600 km above the earth has never been easier. OneAtlas, our digital platform, allows the users to draw their AOI, choose Pléiades Neo as optical sensor and choose the date of acquisition while accessing the whole Airbus imagery archive.
By providing more data, in more detail, more rapidly and in a more accessible way, Pléiades Neo becomes the best support for numerous markets, and in particular for European and national Defence and Security missions: from strategic monitoring, thanks to the increased revisit capability, to time-sensitive mission preparation, thanks to reactive tasking and image delivery.
The Sentinel-3 mission of ESA and the European Commission is one of the elements of the Copernicus programme in response to the requirements for operational and near-real-time monitoring of ocean, land and ice surfaces over a period of 20 years. Its main objectives are to measure sea surface topography, sea and land surface temperature, and ocean and land surface colour with high accuracy and reliability to support ocean forecasting systems, environmental monitoring and climate monitoring. With two optical instruments (SLSTR, OLCI) and the SAR Radar Altimeter (SRAL), accompanied by MWR, DORIS and LRS, these objectives are pursued. Two spacecraft have been launched, model A in 2016 and model B in 2018, and are meeting all their expectations in orbit. The Sentinel-3 mission is jointly operated by ESA and EUMETSAT.
The developments of the recurrent payload models C and D, based on the existing designs, were started in 2016, prior to the launch of Sentinel-3A. The exact launch dates of the C and D models are yet to be formally agreed with the European Commission but are currently planned within the timeframe 2024 to 2028, to ensure a mission continuity of 20 years.
Lessons learned from the previous model developments, new specific requirements (e.g. compatibility with GNSS Galileo bands) as well as the in-orbit commissioning phases have been taken into consideration. Depending on the instrument, this has led to modifications at various development levels: design, manufacturing, assembly, and calibration.
In this paper we present the current status and development highlights of the payload models C&D, the main differences to the previous models, and planned additional activities for further improvements to the mission performance. In the case of SLSTR (Sea Land Surface Temperature Radiometer) and OLCI (Ocean Land Colour Instrument) an overview and comparison of the pre-launch measured performances achieved so far for all models (A, B, C&D) will be presented.
Airbus Intelligence in the UK has been providing Vision-1 optical VHR data to the commercial market since 2019, and now Airbus has signed a contract with ESA to ensure that these data can also be provided to ESA Copernicus data users via Additional (ADD) datasets within the ESA Copernicus Programme.
The ESA portfolio of commercial missions contributing to Copernicus (CCMs) is already large, covering SAR and optical data in seven resolution classes, from VHR-1/VHR-2 (resolution < 1 m and < 4 m) to LR (resolution > 300 m). The global appetite for high-quality Earth observation data at very high resolution is increasing exponentially, especially in the VHR-1 class (resolution < 1 m).
Vision-1 imagery consists of 4-band multispectral and panchromatic VHR EO data at a resolution of up to 0.87m acquired by the Surrey Satellite Technology Limited (SSTL) S1-4 Imager. With its recent approval from the Earthnet Data Assessment Pilot (EDAP), and subsequent recommendation for selection as an ESA TPM, Vision-1 has also been accepted into the Copernicus family.
Airbus in the UK has a long history of providing high-quality medium resolution data to global users, commercially and otherwise (e.g. through ESA programmes) via the DMC constellation satellites.
As a VHR optical mission, Vision-1 data can be used across a number of potential applications including precision agriculture, land monitoring, maritime surveillance and infrastructure monitoring, among others.
High resolution monitoring of agricultural fields can give detailed information to farmers over and above the information provided by lower resolution satellites. This can be useful to smallholders, maintaining a large number of smaller fields, as well as providing more detail to larger scale agri-business, allowing them to achieve greater operational efficiencies.
Earth observation data is also becoming an increasingly important tool in the monitoring of ships and other maritime vessels, with a range of applications including fisheries management and monitoring of borders and shipping corridors.
We will present the Vision-1 data offering to the Copernicus Service Providers (CSPs) via the ADD datasets. We will describe the contributions that this mission has already made to the Copernicus CORE VHR_IMAGE_2021 dataset and highlight the potential of Vision-1 imagery to contribute to other Copernicus projects.
As the number of VHR satellite missions grows, so does the level of interest in potential new applications, and we are excited for Vision-1 to become a part of their development via this programme.
With increasing global temperature and a growing human population, our home planet is suffering from extreme weather events such as intense rain, floods and droughts and related landslides, rising sea level, and ever-increasing stress on freshwater availability. While there is a significant body of work on the sources and implications of climate change, analyzing and predicting the impacts and effects on water resources and localized flooding events is still non-trivial. Water resources science is multidisciplinary in nature; it not only assesses the impact of our changing climate using measurements and modeling, but also offers science-guided, data-driven decision support. While there have been many advances in the collection of observations, reflected in the fast growth of the Earth Observation archive, as well as in forecast modeling, no single measurement or method can provide all the answers.
The idea behind Digital Twin (DT) is to establish a virtual representation of a system that spans its lifecycle, is updated from real-time data, and uses simulation, machine learning and reasoning to help decision-making. Earth System Digital Twin (ESDT) is an emerging concept that mirrors the Earth Science System to not only understand the current condition of our environment or climate, but also to be able to learn from the environment by analyzing changes and automatically acquire new data to improve its prediction and forecast (Fuller et al. 2020).
The NASA Advanced Information Systems Technology (AIST) Integrated Digital Earth Analysis System (IDEAS) project aims to establish a comprehensive science platform that offers decision makers science-driven solutions to the global and local impacts of climate change. To validate and demonstrate the IDEAS architecture, the project tackles one of the most fundamental Earth science challenges, related to water cycle science and flood detection and monitoring. As a system of systems, IDEAS brings together advanced technology and science investments to enable big data analytics, AI/ML predictions, and numerical model simulations from three NASA centers, the Jet Propulsion Laboratory (JPL), Goddard Space Flight Center (GSFC), and Langley Research Center (LaRC), along with various observational measurements, to enable comprehensive science analysis for actionable predictions. In addition to leveraging NASA technology and data assets, IDEAS is partnering with the Space Climate Observatory (SCO) FloodDAM effort for science-driven, federated monitoring, detection and analysis of flood events. As a multi-agency, multi-center Digital Twin effort, the project is tasked with leveraging and enhancing emerging DT standards to promote interoperability and the encapsulation of local infrastructure and technology implementations.
Synthetic Aperture Radar (SAR), with its capability of imaging day or night, ability to penetrate dense cloud cover, and suitability for interferometry, is a robust dataset for event/change monitoring. SAR data can be used to inform decision makers dealing with natural and anthropogenic hazards such as floods, earthquakes, deforestation and glacier movement. However, EO SAR data has only recently become freely available with global coverage, and requires complex processing with specialized software to generate analysis-ready datasets. Furthermore, processing SAR is often resource-intensive, in terms of computing power and memory, and the sheer volume of data available for processing can be overwhelming. For example, ESA's Sentinel-1 has produced ~10PB of data since launch in 2014. Even subsetting the data to a small scientific area of interest can result in many thousands of scenes, which must be processed into an analysis-ready format.
The Alaska Satellite Facility (ASF) Hybrid Pluggable Processing Pipeline (HyP3) was developed to provide cloud-native processing of Sentinel-1 SAR data to the ASF user community at no cost to users. Computing is done in parallel for rapid product generation, easily producing hundreds to thousands of products an hour. HyP3 is integrated directly into Vertex, ASF's primary data discovery tool, so users can easily select an area of interest on the Earth, find available SAR products, and click a button to send them (individually or as a batch) to HyP3 for Radiometric Terrain Correction (RTC), Interferometric SAR (InSAR) processing, and more. Each process provides options to customize the processing and final output products, and delivers metadata-rich, analysis-ready final products to users. In addition to the Vertex user interface, HyP3 provides a RESTful API and a Python software development kit (SDK) to allow programmatic access and the ability to build HyP3 into user workflows.
HyP3 is an open-source, openly developed processing platform built for the Amazon Web Services (AWS) cloud. It has been designed to have minimal overhead costs (serverless design), to be easily deployable using CloudFormation templates (infrastructure as code), and to allow scientists and users to develop new processing plugins. Due to these features, HyP3 has increasingly been used to provide project (grant) specific processing capabilities not limited to SAR data. These science support projects typically use custom deployments into project-based AWS accounts, allowing science teams to quickly and easily develop new algorithms/products, control processing/product access, provide project-based cost accounting, and leverage AWS cloud credits provided by funding agencies, all without needing to be cloud architects/engineers.
The amount of data that must be processed in satellite missions is increasing over time, directly affecting the hardware resources and time required for processing. With more than 11 years in orbit, the SMOS mission has accumulated a large volume of over-sampled data, which implies more intensive CPU use and greater disk usage if the processing is done without any data management. For this reason, it is increasingly necessary to optimize the resources involved in processing large volumes of data. Such optimizations include minimizing processing time, achieving maximum efficiency of computational resources, and managing the generated data well, both to make it more accessible and to optimize the disk space it demands.
This work presents different techniques that can be applied when designing software architectures for the particular case of SMOS Sea Surface Salinity data processing. We study how the data can be aggregated and ordered in the first stages of processing to reduce the processing time of subsequent stages and the disk usage of intermediate products.
The SMOS measurements can easily be divided into smaller independent processing units (such as a half-orbit, or a snapshot, which is even smaller and still independent of the other snapshots). This granularity allows the processing to be split into very small pieces that can be executed in parallel, making optimal use of CPU resources and reducing the total processing time. Disk operations, such as reading and writing files, also account for a large part of the processing time. Data has been arranged so that disk operations are minimized (avoiding multiple reads of the same file).
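The divide-and-parallelize idea can be sketched in a few lines of Python (the function names and dummy workload are illustrative, not the actual Barcelona Expert Center processing chain): independent snapshots are fanned out over a process pool and the results are collected in input order.

```python
import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor

def process_snapshot(snapshot_id):
    # Stand-in for the per-snapshot retrieval; each snapshot is fully
    # independent, so workers need no synchronization.
    return snapshot_id, sum(i * i for i in range(1000))  # dummy workload

def process_half_orbit(snapshot_ids, max_workers=4):
    # Fan the independent units out over a process pool ("fork" keeps the
    # sketch self-contained on POSIX systems) and restore the input order.
    ctx = mp.get_context("fork")
    with ProcessPoolExecutor(max_workers=max_workers, mp_context=ctx) as pool:
        results = dict(pool.map(process_snapshot, snapshot_ids))
    return [results[s] for s in snapshot_ids]
```

Because the units share no state, speed-up is limited mainly by I/O, which is why the data arrangement described above (avoiding repeated reads of the same file) matters as much as the parallelism itself.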
Preliminary results show a 20% improvement in computational time and a 40% reduction in required disk space with respect to the current implementation of the Barcelona Expert Center internal data processing chain.
Land surface temperature (LST) is widely recognized as an important variable for the description and understanding of surface processes. The temperature of the interface between the soil and the atmosphere is a crucial element of the surface energy balance, determining radiation loss and being closely linked to the partitioning between latent and sensible heat fluxes. As such, satellite-derived LST is being increasingly used in various applications related to the assessment of land surface conditions, including the assessment and improvement of land surface schemes in numerical weather prediction models, in the estimation of evapotranspiration, and in the monitoring of plant water stress or drought extent.
The Landsat series of satellites has the potential to provide LST estimates at high spatial resolution that are particularly appropriate for local and small-scale studies. Numerous LST algorithms for the Landsat series have been proposed. While most algorithms are simple to implement, they require users to provide the necessary input data and calibration coefficients, which are generally not readily available. Some datasets are available online; however, they generally require users to be able to handle large volumes of data. Google Earth Engine (GEE) is an online platform created to allow remote sensing users to easily perform big data analyses without the need for local computation resources. All Landsat Level-1 and Level-2 data are directly available in GEE, including top-of-atmosphere (TOA) and surface reflectance (SR) data. However, until now, high resolution LST datasets from Landsat have been unavailable in GEE.
Here we describe a methodology for deriving LST from the Landsat series of satellites (i.e. Landsat 4, 5, 7 and 8) which is fully implemented in GEE. We provide a code repository with all the GEE scripts necessary to compute LSTs from Landsat data. The repository allows users to perform any data analysis they require within GEE without the need to store data locally. The LST is computed using the Statistical Mono-Window (SMW) algorithm developed by the Climate Monitoring Satellite Application Facility (CM-SAF). Besides Landsat data, the LST production code makes use of two other datasets available within GEE: atmospheric data from re-analyses of the National Center for Environmental Prediction (NCEP) and National Center for Atmospheric Research (NCAR) and surface emissivity from the Advanced Spaceborne Thermal Emission and Reflection Radiometer Global Emissivity Database (ASTER GED) developed by the National Aeronautics and Space Administration’s (NASA) Jet Propulsion Laboratory (JPL).
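At its core, the SMW algorithm is a simple linear model in TOA brightness temperature and surface emissivity, LST = A*Tb/eps + B/eps + C, with the coefficients (A, B, C) selected according to the total column water vapour class of the atmosphere. A minimal sketch (the coefficient values below are illustrative placeholders, not calibrated CM-SAF values):

```python
def smw_lst(tb, emissivity, coeffs):
    """Statistical Mono-Window: LST = A*Tb/eps + B/eps + C.

    tb         -- TOA brightness temperature of the thermal band (K)
    emissivity -- surface emissivity in that band (e.g. from ASTER GED)
    coeffs     -- (A, B, C) for the matching total-column-water-vapour class
    """
    a, b, c = coeffs
    return a * tb / emissivity + b / emissivity + c

# Illustrative call with placeholder coefficients for one TCWV class:
lst = smw_lst(tb=295.0, emissivity=0.98, coeffs=(1.02, -5.0, 10.0))
```

In the GEE implementation, the NCEP/NCAR water vapour fields select the coefficient set per pixel, while ASTER GED supplies the emissivity term.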
The ever-increasing amount of medium and high spatial resolution satellite data provides new opportunities for analyzing changes in land cover and land surface condition, but large data volumes also present new challenges for scientists and remote sensing analysts. There have been tremendous efforts in recent years to develop tools and infrastructure for processing big Earth Observation data. However, big data workflows are not easily adapted, which can cause a software and/or hardware infrastructure lock-in. Satellite data analysis is tightly coupled to specific input data, processing back-ends and execution environments, which makes re-use with changing inputs and on different platforms cumbersome. Furthermore, satellite data analysis workflows often include long and complex tasks with heterogeneous resource requirements regarding the computational infrastructure they run on, which often leads to hard-wired implementations. Workflow engines have recently emerged which address the issues of portability (not bound to specific infrastructure), adaptability (automatically adapting to varying infrastructure and data) and dependability (constraints to warrant correct execution) (Leser et al. 2021). Workflow engines are widely used in other computation-heavy sciences such as bioinformatics, but the concept is still new in remote sensing.
The overall goal of our work is to implement and test data analysis workflows for analyzing land cover changes in the workflow engine Nextflow (Di Tommaso et al. 2017). Specifically, the objectives are: (1) to map annual land cover between 2000 and 2020 across Germany using integrated Landsat and Sentinel-2 time series and the harmonized European-wide Land Use and Coverage Area frame Survey (LUCAS) (d'Andrimont et al. 2020); (2) to develop Nextflow workflows that leverage a broad range of existing, already widely used open source tools and programs; and (3) to evaluate the execution performance of Nextflow workflows. For preprocessing, we leverage the capabilities of the Framework for Operational Radiometric Correction for Environmental monitoring (FORCE) (Frantz 2019), which includes geometric matching between scenes, atmospheric and terrain correction, cloud masking and BRDF correction. For further processing, including generation of higher-level ARD products, computation of spectral indices/spectral-temporal metrics, training of machine-learning algorithms and map prediction, we focus more closely on other popular tools such as QGIS's command-line interface (QGIS Development Team 2021), the EnMAP-Box (EnMAP-Box Developers 2019), R/Python extensions and other open source libraries/software. These tools were chosen due to their widespread usage in the EO community, combined with the aforementioned ability of workflow engines to integrate pieces of existing analysis pipelines from various sources. Our approach generates three key results: Firstly, we develop a method to map historic land cover time series from national to continental scale. Secondly, a modular workflow tailored towards analysis of big Earth Observation data with a low barrier for reusability is built.
Thirdly, we generate a better understanding of needs specific to diverse remote sensing analysis tasks when implemented in a workflow engine like Nextflow and thus complement existing findings (Lehmann et al. 2021).
References
Leser, U., Hilbrich, M., Draxl, C., Eisert, P., Grunske, L., Hostert, P., Kainmüller, D., Kao, O., Kehr, B., Kehrer, T., Koch, C., Markl, V., Meyerhenke, H., Rabl, T., Reinefeld, A., Reinert, K., Ritter, K., Scheuermann, B., Schintke, F., Schweikardt, N., and Weidlich, M. (2021). The Collaborative Research Center FONDA. Datenbank-Spektrum 1610-1995. doi: 10.1007/s13222-021-00397-5.
Lehmann, F., Frantz, D., Becker, S., Leser, U., and Hostert, P. (2021). “FORCE on Nextflow: Scalable Analysis of Earth Observation data on Commodity Clusters”. In: Proceedings of the CIKM 2021 Workshops. Online.
Di Tommaso, P., Chatzou, M., Floden, E. W., Barja, P. P., Palumbo, E., and Notredame, C., (2017). Nextflow enables reproducible computational workflows. Nat Biotechnol, 35, 316-319. doi: 10.1038/nbt.3820
d’Andrimont, R., Yordanov, M., Martinez-Sanchez, L., Eiselt, B., Palmieri, A., Dominici, P., Gallego, J., Reuter, H. I., Joebges, C., Lemoine, G., and van der Velde, M. (2020). Harmonised LUCAS in-situ land cover and use database for field surveys from 2006 to 2018 in the European Union. Scientific Data 7.1, p. 352. issn: 2052-4463. doi: 10.1038/s41597-020-00675-z
Frantz, D. (2019). FORCE—Landsat + Sentinel-2 Analysis Ready Data and Beyond. Remote Sensing 11.9, p. 1124. doi: 10.3390/rs11091124.
QGIS Development Team (2021). QGIS Geographic Information System. QGIS Association. https://www.qgis.org
EnMAP-Box Developers (2019). EnMAP-Box 3 - A QGIS Plugin to process and visualize hyperspectral remote sensing data. https://enmap-box.readthedocs.io
The continuously increasing amount of long-term and historic data in EO facilities, in the form of online datasets and archives, makes it necessary to address technologies for the long-term management of these datasets, including their consolidation, preservation, and continuation across multiple missions. The management of long EO data time series from continuing or historic missions, with more than 20 years of data already available today, requires technical solutions and technologies which differ considerably from those exploited by existing systems.
The ESA project LOOSE (Technologies for the Management of LOng EO Data Time Series) investigates, tests and implements new technologies to support long time series processing.
For specific tasks (such as ingestion, discovery, access, processing and analysis of EO data), a multitude of quite different mature open source components is usually available. LOOSE aims at combining functionally similar solutions from different heritages into one comprehensive framework. LOOSE even supports parallelism in the sense that multiple solutions for the same task are available, and the application developer is invited to choose between these different components during implementation (e.g. "GeoServer" versus "EOXServer").
In addition, LOOSE partners extended well-known existing components with new capabilities (i.e., interfaces) to support efficient ingestion, discovery, exploitation-optimized access, processing and optimized analysis of EO data time series. For example, GeoServer was extended with the capability to handle STAC metadata.
The overall outcome of the project is a "blueprint architecture concept" which focuses on the interfaces between components and takes into consideration innovative concepts such as bulk data retrieval from dedicated archives, OGC's Data Analysis and Processing API, and data cubes offering Discrete Global Grid Systems (see enclosed viewgraph).
LOOSE partners are DLR (Oberpfaffenhofen), EOX (Vienna), Terrasigna (Bucharest) and Mundialis (Bonn).
The LOOSE system architecture is inspired by the EO Exploitation Platform Common Architecture (EOEPCA) and focuses on the technological evolution of selected services that enable the end-to-end workflow from retrieving long-term archived EO products to the extraction of high-level information based on processed value-added datasets. The architecture and its interoperability are evaluated within LOOSE by using different implementations of these services (e.g. EOxServer and GeoServer) and deploying the whole system on two different infrastructures (DLR/LRZ and Mundi/OTC). The complete LOOSE infrastructure is built on Kubernetes and is therefore readily transferable between different cloud providers.
One of the major goals of the system design (see enclosed figure) is to define services (indicated in blue) as functional components with their internal (purple) and external interfaces (yellow).
The validity of the LOOSE blueprint architecture is demonstrated in three different real-world application pilots.
These applications cover three quite different thematic areas:
- Agricultural monitoring (based on Sentinel-1 and -2 data),
- monitoring urbanization globally (also based on Sentinel-1 and -2) and
- supporting fishery in the Black Sea (multi sensor approach, including in situ-data).
The agriculture use case applies Sentinel-1 and -2 time series in combination with land parcel information so that agricultural practices can be monitored and verified, e.g. the presence/absence of mowing in grassland, the occurrence of ploughing during a specific seasonal time window, or the rapid growth of vegetative cover during a certain time period. In the context of the European Common Agricultural Policy (CAP), subsidy claims from farmers require an in-depth check of their eligibility.
This use case specifically requires:
- Handling of and operations on very large vector datasets (filtering, buffering, grouping, merging);
- SAR and optical time series profile extraction through aggregation at land parcel level;
- Implementation of specific eligibility checks according to national CAP and LPIS requirements.
DLR’s World Settlement Footprint (WSF) suite determines built-up areas on the basis of Earth observation data derived from Sentinel-1 GRD strip map datasets, Sentinel-2 and Landsat 4/5/7/8. The WSF is determined by evaluating high-resolution backscatter ratios between different channels. Time series analysis of the obtained results is necessary to smooth the computed index values with respect to time and to yield reliable values. This enables detection of urban growth worldwide.
This pilot application aims at user-driven
- production of the required backscatter indices (for selected periods) performed via eWPS ("black box processing") and
- analysis via Data Analysis and Processing API (DAPA).
The LOOSE Marine Pilot will process EO data as well as in-situ data and numerical model outputs to accurately identify Potential Fishing Zones (PFZ) around Romanian and Bulgarian coastal areas to support efficient fishery.
In the LOOSE blueprint architecture, user-driven "black box" processing is evaluated against user-defined "white box" analyses with respect to usability and performance. "Black box" processing refers to applying a pre-defined retrieval algorithm (processor) to the EO raw data, where the user has only limited possibilities to influence the processing settings (such as selecting only the processing time period). "Black box" processing is investigated using the eWPS, which is provided by the partner Terrasigna. In contrast, "white box" processing gives users the ability to supply an algorithm graph to the LOOSE system via the openEO / Actinia / GRASS GIS interface provided by LOOSE partner mundialis.
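The distinction can be made concrete with two request sketches (all identifiers are hypothetical; the openEO-style graph only illustrates the general shape of a user-supplied process graph, not the actual LOOSE interfaces):

```python
# "Black box": the user only chooses high-level settings; the retrieval
# algorithm is fixed server-side (processor name and fields invented).
black_box_request = {
    "processor": "backscatter-index",
    "time_range": ["2020-06-01", "2020-08-31"],
}

# "White box": the user supplies the algorithm itself as a process graph;
# this openEO-style structure is a sketch (collection id and node names
# are placeholders), with nodes wired together via "from_node" references.
white_box_graph = {
    "load": {"process_id": "load_collection",
             "arguments": {"id": "SENTINEL2_L2A"}},
    "ndvi": {"process_id": "ndvi",
             "arguments": {"data": {"from_node": "load"}},
             "result": True},
}
```

The trade-off evaluated in LOOSE follows directly from the shapes above: the black-box request is simple and safe to operate but inflexible, while the white-box graph moves algorithmic control, and responsibility, to the user.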
The LOOSE blueprint architecture concept provides all relevant functionalities (ingestion, discovery, processing, analysis) and can therefore be considered as blueprint for Kubernetes-based operational processing systems.
The JRC Big Data Analytics Platform aims at linking data, data scientists, thematic and policy experts to generate policy relevant insights and foresight. A common denominator of most data analysed in this context is that they refer to a location both in time and space. When it comes to data volumes, the largest share consists of geospatial data in the form of raster images or vector files. Indeed, geospatial data play a fundamental role to answer key societal questions related to our environment at local and global scale when it comes to climate change, biodiversity, deforestation, agriculture, pandemics, etc. In addition, an integrated approach to data analytics is needed to tackle these complex questions in view of determining causal effects between mutually dependent variables. The need for frequent satellite image acquisitions in different spatio-temporal and spectral resolutions to answer these key environmental questions motivated the European Union to launch the Copernicus programme that is now delivering a continuous stream of free, full, and open data and largely contributed to Earth Observation entering the big data era.
In this contribution, we will present how the JRC Earth Observation Data and Processing Platform (JEODPP) evolved into a multi-purpose data infrastructure (called the Big Data Analytics Platform, BDAP) serving the needs of the Joint Research Centre across all its knowledge production and knowledge management activities. This evolution was enabled thanks to the versatility of the services of the JEODPP. The presentation will focus on the following key ingredients required for addressing the challenges posed by the effective and efficient extraction of insights and foresight from geospatial data and data associated with a location both in space and time:
- Advanced data cataloging and data access following FAIR data principles including their application to analysis pipelines/workflows;
- MLOps for Machine Learning Operations;
- Interactive data science for fast prototyping;
- Scalability and flexibility for analysis at any scale;
- Exploratory visualisation for both data discovery and dissemination of insights to non-experts;
- Open-source coding for transparency, reproducibility, and accountability;
- Collaborative tools to ensure that not only data but also scientists are linked.
The successful combination of these ingredients will be illustrated with a series of actual use cases from continental to global scale, combining heterogeneous data from multiple sources, as well as generic services such as interactive dashboards for exploratory and agile visualisation based on the Voilà extension of JupyterLab and the pyjeo open source library for geospatial data analysis.
The talk will conclude with future perspectives in the framework of ScienceMesh for the European Open Science Cloud, currently developed in the CS3MESH4EOSC project coordinated by CERN with the participation of numerous partners including JRC.
In recent decades, the increasing availability of orbiting satellites and Earth Observation data has favored the development of a large number of applications in a wide range of fields, from monitoring environmental changes to the identification of pollutants, from the study of the interaction between ecosystems to the prevention of and response to natural disasters. Many applications use Earth Observation data from different satellite missions, exploiting their potential and synergy. In this context, in order to meet the often stringent requirements, Earth Observation instruments must be able to ensure the most accurate, reliable and consistent measurements throughout the mission.
Moreover, the combined and synergistic use of data from various missions becomes essential, thus requiring precise co-registration and inter-calibration operations between instruments to normalise the response of the different sensors on the basis of a common reference.
Planetek provides the possibility to correct residual geometric deformations and radiometric inaccuracies in optical multi-spectral images by means of a knowledge base with worldwide coverage. This base is built through the combined usage and fusion of satellite data from different missions and relies on accurate information, automatically extracted and regularly updated, in specific "ground control truths". The control points are characterized by precise knowledge of their geometric position or radiometric response.
The service exploits the information extracted from long time series to increase geometric precision and radiometric stability. It is provided on a cloud infrastructure and can be integrated into any standard mission payload data ground segment workflow.
In September 2020, ESA launched a new Virtual Lab focusing on Agriculture (AVL). Virtual labs are platform services for scientists to share data resources and create an enhanced research environment. AVL is designed to be an online community open science tool to share results, knowledge and resources. Agriculture scientists can access and share Earth Observation (EO) data, high-level products, in-situ data, as well as open-source code (algorithms, models, tools) to carry out scientific studies and projects.
The technical system behind the AVL comprises two main building blocks, namely the “Thematic processing subsystem” powered by TAO (Tool Augmentation by user enhancements and Orchestration), which is an orchestration and integration framework for remote sensing processing, and the “Exploitation Subsystem” powered by xcube and Sentinel Hub, a software for generation, management, exploitation, and service provisioning of analysis-ready data cubes.
The “Thematic processing subsystem” is a collection of self-contained (i.e., packed in Docker containers) applications or systems, that produce value-added EO products such as biophysical variables, crop masks, crop types, etc. It integrates commonly used toolboxes (e.g., SNAP, Orfeo Toolbox, GDAL, Sen2-Agri, Sen4CAP, etc.) into a single environment enabling end-users to define by themselves processing workflows and to easily integrate additional processing modules.
The “Exploitation subsystem” ingests data streams including the ones provided by the Thematic processing subsystem and makes them available as analysis-ready data cubes. Data streams may be gridded, like EO sensor data or model data, or feature data, like time series of points or shapes. The latter are stored in geoDB, a database for various data types with geographical context. The “Exploitation subsystem” provides users with individual workspaces and offers different interfaces, specifically the data cube toolbox Cate, a Jupyter Lab environment, and the interface to the thematic processing subsystem.
The implementation of the AVL system is following an agile approach, prepared to account for new requirements, particularly from relevant users from the agriculture science community. With respect to the onboarding of users, the project is structured into three phases. First, a couple of well-defined user stories provide the requirements for the implementation of the first use cases via iterative development cycles. These use cases are executed in partnership with Champion Users who are leading scientists belonging to the community and/or international stakeholders (JECAM, GEOGLAM, CGIAR, GEWEX, FAO, GEO).
The first use case concerns the portability of classification models in space (i.e. from one region to another) and over time (i.e. from one year to another), which would certainly be one of the best options for dealing with in-situ data scarcity. Different methodologies to transfer classification models exist: identifying and using invariant features that are valid in both the source and target domains; aligning the time series between the two domains (using, for instance, time warping); or training the classification model using (i) data from the source and target domains together or (ii) data only from the source domain, then adapting the model to the target domain by fine-tuning it on the available target training data. These options are evaluated over two test sites in Belgium and France and two years, 2019 and 2020, based on the Sentinel-1 and Sentinel-2 sensors and in-situ data from the French and Belgian Land Parcel Identification System datasets. This first use case can then be expanded over more sites, involving the JECAM community.
The second use case is about the monitoring of sustainable agricultural practices supporting the necessary evolution of agriculture to become more compatible with the expectations of society at large and with the Green Deal ambitions at the European level. Within this use case, crop-specific monitoring at field-level throughout the year is carried out to monitor a selection of sustainable agricultural practices: winter cover crop and biomass indication, harvest/destruction detection, bare soil period detection, evapotranspiration retrieval as an indicator of water stress.
The third use case will be either about the estimation and forecast of crop yield or an inter-comparison exercise of crop maps within the GEOGLAM initiative.
The second development phase will involve Early Adopters as the first external scientists using and testing the AVL. While the advanced science use cases cover hot topics in the Agriculture Science to maximize the impact in terms of AVL in the community, the Early Adopters studies will demonstrate that AVL can be useful for a variety of applications, offering a large diversity and huge amount of input EO and non-EO data and providing a unique and innovative collaborative framework to access, process and visualize these data. Their feedback will support the transition to the third, operational phase, which will open the AVL to the wider scientific community.
One of the keys to the AVL's success will be the data offer: satellite data, in-situ data, thematic products and auxiliary data. A comprehensive user survey was conducted in the first months of the project to identify the users' priorities, and maximizing the offer of relevant data for the agriculture science community will remain a focus throughout the project. Furthermore, as an Open Science project, the AVL will promote and foster collaboration between scientists and the sharing of data, products, results and source code (joint publications, inter-comparison exercises, benchmarking, etc.). Specific activities will also be carried out to build a strong AVL user community and facilitate Open Science, such as regular webinars, a dedicated forum, and the organization of hackathons or competitions.
Sentinel-3 (S3) will fly in tandem with the photosynthesis mission FLEX, and therefore the information this satellite captures on the status of vegetation will be crucial. Given the opportunities that cloud computing platforms offer for processing Earth observation data, we chose Google Earth Engine (GEE) to develop a workflow for spatiotemporal mapping of vegetation over Europe. GEE hosts a multi-petabyte data catalog with a parallel computation service, manageable from an easy front-end interface. We used the machine learning method Gaussian process regression (GPR) to train and implement hybrid retrieval models in GEE. GPR is not part of GEE by default, so adaptations were implemented following the procedure described in Pipia et al. (2021). GPR has proven to be an outstanding method for prediction tasks, excelling in its ability to provide uncertainties on the predictions. In this way, an assessment of quality can be performed implicitly.
GPR retrieval models were developed for the following key variables: leaf chlorophyll content (LCC), leaf area index (LAI), fraction of absorbed photosynthetically active radiation (FAPAR) and fractional vegetation cover (FVC). For training the models, we used a simulated top-of-atmosphere (TOA) radiance dataset, upscaled from top-of-canopy (TOC) reflectance with the 6SV radiative transfer model (RTM). Simulations at TOC were performed with SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes) over a wide range of vegetation properties, leading to hybrid models that combine the physical principles of RTMs with the computational efficiency of machine learning. The TOA radiance simulations were then resampled to the band settings of S3-OLCI. After tuning and improvement, the final models (v1.1) were evaluated, with R2 ranging between 0.60 (LAI) and 0.99 (FVC).
By implementing the final models in GEE for processing S3-OLCI TOA data, monthly averaged maps and spatially averaged time series were generated and compared against matching vegetation products from the CGLS (Copernicus Global Land Service) and LPDAAC-NASA (Land Processes Distributed Active Archive Center). Our retrieved outputs present spatial and temporal patterns close to the official products. FVC and FAPAR were retrieved most robustly, judging by the associated uncertainties (absolute deviations constrained to 0-0.3 on a 0-1 scale).
The obtained maps distinguish the different ecoregions of Europe, with natural areas showing expected value ranges. Time series were calculated as spatial averages over land covers delimited according to the Corine classification for the time window from April 2016 to November 2020. They display consistent vegetation temporal patterns, with peaks reached during spring and summer depending on the land cover type. To generate continuous time series from discontinuous data, e.g. due to cloud cover, we applied the same GPR principles in the temporal domain, making predictions based on the existing time series.
A challenge in this work was to define the optimal training dataset for a smooth model while avoiding memory problems during operations on large matrices, given that large variability must be covered when working at continental scale. This opens a future line of work on optimization for large-scale mapping, particularly when using computational resources with a limited quota on GEE. At the same time, this work opens opportunities for global monitoring and facilitates correct interpretation of the fluorescence signal to be acquired by the upcoming FLEX mission.
In the new mega-constellation paradigm for Earth Observation, the objective is to maximize payload downlink while maintaining consistent communications with the entire constellation. This is a challenging landscape as the number of satellites far exceeds the number of antennas. Planet’s Dove constellation is a clear example of this, with more than 180 imaging satellites and a network of around 15 downlink capable antennas. In this context, and given the fact that satellite downlinks cannot happen in parallel if using the same antenna, deciding on which contacts get allocated requires intelligent problem formulations. Of the multiple objectives that can be optimized when configuring the schedule of the fleet, data down is a complex metric, because the amount of data that each satellite has stored on board at the time of the potential downlink depends on multiple things: its previous imaging and downlink activities, its storage capabilities and the data rates that it can achieve. In this presentation, we will describe the model used by Planet to take all these aspects into account and optimize the schedules of the satellites to maximize the amount of data that is downlinked, making the most of the ground network.
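The coupling between on-board storage and contact allocation can be sketched with a toy greedy heuristic (purely illustrative; the satellite names, rates and the greedy rule are invented, and Planet's production scheduler uses richer formulations): each contact's payoff is capped both by the window's rate-duration product and by the data the satellite actually has on board, and one antenna cannot serve two overlapping contacts.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    sat: str
    antenna: str
    start: float  # minutes
    end: float
    rate: float   # Gb per minute

def schedule(contacts, onboard_gb):
    """Greedy toy scheduler: allocate non-overlapping contacts per antenna,
    largest potential data-down first; decrement on-board storage as we go."""
    def payoff(c):
        return min(onboard_gb.get(c.sat, 0.0), (c.end - c.start) * c.rate)

    stored = dict(onboard_gb)
    busy = {}   # antenna -> list of allocated (start, end) windows
    plan = []
    for c in sorted(contacts, key=payoff, reverse=True):
        if any(c.start < e and s < c.end for s, e in busy.get(c.antenna, [])):
            continue  # antenna already serving another satellite then
        down = min(stored.get(c.sat, 0.0), (c.end - c.start) * c.rate)
        if down <= 0.0:
            continue  # nothing left on board to downlink
        stored[c.sat] = stored.get(c.sat, 0.0) - down
        busy.setdefault(c.antenna, []).append((c.start, c.end))
        plan.append((c, down))
    return plan

contacts = [
    Contact("dove-1", "ant-A", 0, 10, 1.0),   # up to 10 Gb
    Contact("dove-2", "ant-A", 5, 15, 2.0),   # up to 20 Gb, overlaps above
    Contact("dove-1", "ant-B", 12, 20, 1.0),  # up to 8 Gb
]
plan = schedule(contacts, {"dove-1": 6.0, "dove-2": 50.0})
# the overlap on ant-A is resolved in favour of dove-2's larger payoff
```

Even this toy shows why data down is a complex metric: the same contact window is worth 20 Gb or almost nothing depending on what the satellite has imaged and downlinked before.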
The Food Security Thematic Exploitation Platform (FS-TEP) provides a platform for the extraction of information from EO data for services in the food security and agriculture sector, to support sustainable food production from space. The platform addresses a wide-ranging community, including industry, service providers, developers, scientists and public sector or governmental organizations. The technical infrastructure is a web-based Platform-as-a-Service (foodsecurity-tep.net/app), developed by CGI Italy, which leverages cloud computing technologies on top of CREODIAS. Acting as an interface between Copernicus (and complementary) EO data and the user community, and providing the technical solutions for developing and operating services via an Open Expert Interface, the Food Security TEP is attractive for new enhancements and developments from science and data analytics.
Within the last year, evolutions of platform functionalities have been carried out, connecting with international projects, initiatives, and companies. As one major milestone, the integration of the Deep Learning platform Hopsworks (https://www.hopsworks.ai/) to the Food Security TEP was achieved. This open-source platform for Data-Intensive AI provides an environment to create, reuse, share, and govern features for AI, as well as manage the entire AI data lifecycle in the development and operation of machine learning models and includes a Feature store. Via federation between the Food Security TEP and Hopsworks, the full breadth of Copernicus EO data and Copernicus Services products available on CREODIAS, GPUs for the training phase of machine learning, and a scalable computational environment for running operational algorithms - after they have been trained through machine learning - are offered to the EO and science community.
Stimulated by the Horizon 2020 Extreme Earth project and supported by the CCN evolutions of the ESA contract, the Food Security TEP and Hopsworks capabilities have been successfully developed and demonstrated using, for example, Sentinel-2 time series, machine learning, and applications concerning crop type mapping and crop monitoring. The aim was, not only in this pilot, to enable data scientists and service providers working with Earth observation data to make use of the full spectrum of big data processing, machine learning tools and deep learning architectures, providing information of high relevance and usability in the agriculture and food security sector.
As a pilot application in the context of the Extreme Earth project, the team set up a service chain for large parts of the Danube catchment, applying multi-year crop type mapping using pre-trained deep LSTMs and Sentinel-2 image time series, to provide crop type information at a level of detail matching farming practice. These datasets were then used for the precise assessment of water stress and irrigation demands, based on LAI time series and crop growth simulations.
More than 5,500 Sentinel-2 L1B datasets, run through automatic pre-processing on the Food Security TEP including atmospheric correction and cloud and cloud shadow masking, were provided as input to a) the training steps and later, to a larger extent, b) the crop type mapping for the 2018, 2019 and 2020 seasons. In preparation for model inference, i.e. the application of the classification algorithm trained on Hopsworks (using GPUs and INVEKOS/LPIS crop information), the Sentinel-2 time series data were converted to monthly composites and stored in a collection. For classification (using CPU processing on the FS-TEP), the model was retrieved from Hopsworks (at a CREODIAS installation) and the final classification process was run on the FS-TEP. Results were post-processed to crop map raster data (as GeoTIFF). The produced crop information, derived for 16 classes, e.g. differentiating maize, soy, barley, rye, rapeseed, spring cereals and winter wheat, was additionally validated against information from the LUCAS (Land Use and Coverage Area frame Survey) database.
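The abstract does not state how the monthly composites were computed; a minimal sketch, assuming a per-pixel median over cloud-masked (NaN) observations within each month, could look like this:

```python
import numpy as np
from collections import defaultdict

def monthly_composites(scenes):
    """scenes: iterable of (month_key, 2-D reflectance array, NaN = masked).
    Returns one per-pixel nan-median composite per month."""
    groups = defaultdict(list)
    for month, arr in scenes:
        groups[month].append(arr)
    return {m: np.nanmedian(np.stack(arrs), axis=0) for m, arrs in groups.items()}

# Two scenes of the same month; one pixel is cloud-masked in the first scene.
a = np.array([[0.2, np.nan], [0.4, 0.6]])
b = np.array([[0.4, 0.8],    [0.4, 0.2]])
comp = monthly_composites([("2019-06", a), ("2019-06", b)])
# the masked pixel falls back to the single valid observation
```

Compositing like this turns an irregular, partly cloudy time series into the fixed-length monthly feature vectors a classifier expects.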
The steps of retrieving models from Hopsworks and running the model inference as Food Security TEP processors were designed as universal components, to be integrated and applied in future service implementations. Having designed the integration between EO processing and inference of AI-trained models in a universal way, the transfer to other applications using Extreme Analytics on the Food Security TEP is now enabled and will be promoted. Developments and services on and from the Food Security TEP can be performed on a commercial basis or via sponsorship using ESA's Network of Resources (NoR).
The presentation will cover the concept and design of the Hopsworks federation with Food Security TEP services, the example application of crop type mapping, and the general new Extreme Analytics capabilities for food security topics.
The Food Security TEP is funded by ESA under contract no. 4000120074/17/I-EF; Extreme Earth was funded by the European Union's Horizon 2020 research and innovation programme under grant agreement no. 825258.
Platforms for the exploitation of Earth observation data have been developed by public and private companies to foster the usage of EO data and expand the market of Earth observation-derived information. All platforms have their user communities, but we are still in the pioneering phase rather than at the mainstream usage level. This presentation will discuss which obstacles need to be tackled to boost platform usage at the federation level, across platforms. The federation perspective is crucial, as many major challenges require the integration of data or services from several platforms. As an example, more and more disasters are linked to climate change; climate change impacts infrastructure; infrastructure in turn is linked to land use; and land use is linked to public health. Currently the data related to all these topics are fragmented over several infrastructures, and it is very likely that they will never be available on a single one. A federation of platforms will enable not only technical interoperability, but also the multidisciplinary cooperation that is so critical to, for instance, managing climate change impacts.
This is particularly true in Europe, where resources are quite fragmented and approaches such as the Network of Resources, together with common design principles such as the Earth Observation Exploitation Platform Common Architecture (EOEPCA), have great potential to help grow user communities, as they promise relevant resources at hand, interoperability between platforms, and hidden complexity that lets existing and new users focus on their challenges rather than on technology. So how do we make the most of our platforms? How do we grow them towards mainstream use?
This presentation will explore what progress has been made over the last four years within the OGC geospatial community and describe how these experiences need to be further aligned with developments in neighbouring disciplines such as climate change or disaster management. It further analyzes how platform developments and international efforts such as DestinE, Gaia-X, Copernicus and Data Spaces need to evolve in order to create an efficient and functional platform environment.
Cloud-based services introduce a paradigm shift in the way users will access, process and analyse Big Earth data in the future. A key challenge is to reconcile how users currently work with the data with the general trend among data providers to guide users towards cloud-based services. Due to the increased availability of Big Earth data, a more diverse user base wants to take advantage of it, leading to a diversity of new user requirements.
To gain better insight into the requirements and challenges of current and new users of Big Earth data regarding cloud services, and to better understand their motivation to migrate, we conducted a comprehensive web-based user survey. Our results, focused on users of Big Earth data in Europe and North America, reveal that a majority of survey respondents still download data onto their local machine and handle and process data locally with a combination of programming and desktop-based software. In this context, survey respondents face severe problems related to growing data volumes, data heterogeneity and limited processing capacities for their demanding applications. Even though survey respondents show a specific interest in using cloud-based data services in the near future, survey outcomes reveal low literacy in cloud systems and a lack of trust due to security concerns as well as the opacity of incurred costs.
Based on the survey findings, we see a strong need to establish an international consortium among Earth data organisations and cloud providers to make the current Big Earth data landscape more FAIR (findable, accessible, interoperable and re-usable). We specifically propose four key areas of activity: (i) bring together Big Earth data and cloud-service providers to foster collaboration towards interoperability of cloud-based services, (ii) define best practices and identify existing gaps in the interoperability of cloud-based services, (iii) develop and implement a quality certification for cloud-based services to build trust in cloud service use, and (iv) coordinate capacity-building to build up cloud literacy and technical competencies and to foster adoption.
The Food Security Thematic Exploitation Platform (TEP) provides a platform for the extraction of information from EO data for services in the food security sector, mainly in Europe and Africa. It went operational in 2019, offering a range of datasets and tools to foster smart, data-intensive agricultural and aquacultural applications in the scientific, private and public domains. The initial focus was to offer satellite data archives fitting agricultural needs, complementary datasets relevant for agriculture, standard tools for EO processing and analysis (e.g. toolboxes and environments), dedicated services and pre-processed satellite products like leaf area index and crop chlorophyll content and, last but not least, a simplified Docker developer interface for working with users' own algorithms and scripts. Options to share and publish data, as well as accounting options (TEP coins), provide capabilities for interaction with colleagues, users and customers. The services of the Food Security TEP have also been made available through the Network of Resources (NoR), so sponsorship for scientists and developers can be requested.
In the past year, the Food Security TEP team implemented several additional evolutions, which bring new relevant tools for agricultural analyses to the platform:
• Integration of Sen4CAP: The Sentinels for Common Agricultural Policy (Sen4CAP) project aims at providing European and national CAP stakeholders with validated algorithms, products, workflows and best practices for agriculture monitoring relevant to the management of the CAP. The integration of the Sen4CAP framework into the Food Security TEP allows users to exploit its capability of processing data at large scale with a data-driven approach, which, combined with the platform's processing and analytics, enables the community to produce well-tested large-scale agricultural analyses without having to implement their own algorithms. The Food Security TEP now offers the ability to discover and invoke the following integrated Sen4CAP processors: Sentinel-1 and Sentinel-2 pre-processing, LPIS (parcel) data preparation, L4A crop type, and L4B grassland mowing.
• Federation with PROBA-V MEP, example crop calendar service: During the GEOGLAM Executive Board meeting of October 2019, the EAV (Essential Agricultural Variable) crop calendar was identified as a suitable showcase, given its relevance in many monitoring systems, to provide improved indicators for agricultural monitoring in an operational setting. The crop calendar was developed in the frame of the VITO E-SHAPE project, demonstrating its capabilities in providing reliable crop calendar information at high spatial resolution using Sentinel-1 and Sentinel-2 imagery. The crop calendar is deployed as a service on the PROBA-V MEP platform and since 2021 is also discoverable from the Food Security TEP. Food Security TEP users can access the service from the TEP and execute it as a standard WPS process. Users can visualize and exploit processing results provided by the PROBA-V MEP service directly from the Food Security TEP web interface, download the processing results for offline post-processing, and share their processing results with other users of the platform. Other services offered on the PROBA-V MEP are now discoverable in the same fashion on the Food Security TEP.
• Integration of AI tools: New methods of information extraction and processing of EO data are becoming ever more popular in agricultural analyses. With the increasing capabilities of deep learning (DL), machine learning (ML) and other advanced AI methods, new options for applications in EO analysis arise. Results of computer-based image recognition (using convolutional neural networks) have exceeded human performance since about 2015, in what may be the fastest paradigm shift in technology history. Hence, the Food Security TEP has integrated a state-of-the-art enterprise platform, Hopsworks, which in turn integrates popular data processing platforms such as Apache Spark, TensorFlow, Hops Hadoop and Kafka, and is used by numerous experts and developers in Europe. Users of Hopsworks can design, run and implement their AI training, analysis and service models in a scalable way, but Hopsworks itself does not offer fast satellite data access. Through the explicit interfacing between Hopsworks and the Food Security TEP, the best possible effectiveness of EO data exploitation in agriculture with ML/DL techniques can be achieved.
• Possibility to monitor events: Many of the Food Security TEP's functionalities focus on algorithm deployment, with little emphasis on visualisation in the platform. The Event Monitor changes this. With this new feature, users can quickly discover and browse events and their contents, easily visualize time series of products, and combine the output of different services into a single operational view. The Food Security TEP already provided the possibility to define systematic processing templates combining different services, run in a data-driven or time-driven logic. Based on these templates, users can now define a new monitoring activity by selecting the template, an AOI and a TOI, and customizing the template input (e.g., changing the cloud coverage). The Monitor supports the visualization of a monitored event by collecting all the relevant information associated with the instance of the systematic processing, allowing for a quick and easy time-series overview.
• Integration of thermal data: Fast-track access to data and products of NASA's ECOSTRESS mission (ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station) has been implemented. This data access is part of the European ECOSTRESS Hub (EEH), a project funded by ESA in support of the Copernicus Land Surface Temperature Monitoring (LSTM) High Priority Candidate Mission and implemented by the Luxembourg Institute of Science and Technology. At the EEH, ECOSTRESS data is searchable for a user-defined area of interest and start/end dates. The datasets cover seven products: level 1 (RAD, GEO), level 2 (LSTE and cloud mask), level 3 (ET) and level 4 (WUE, ESI). The EEH data repository builds the basis to test and develop dedicated land surface temperature (LST) and evapotranspiration (ET) retrievals with user-exchangeable algorithms and auxiliary data on a cloud platform, in preparation for the LSTM mission. Within the EEH project, the Temperature Emissivity Separation (TES) and Generalized Split Window (GSW) algorithms for LST and three different models (STIC, TSEB and SEBS) for ET will be implemented and made available through the Food Security TEP.
In 2022, further enhancements to the platform services are planned. While the final decision on individual enhancements has not been made yet, these evolutions will also focus on bringing new functionalities (e.g. an enhanced API, the ability to use GPUs) and new datasets (e.g. hyperspectral data, agricultural training datasets for AI, scientific results like global water use efficiency) to the platform, to open up even more agricultural applications and allow users to build their own services from a unique set of tools and datasets related to food security.
The Food Security TEP evolutions are funded by ESA under contract no. 4000120074/17/I-EF. The integration of the ECOSTRESS data was sponsored by ESA through the NoR.
Global Earth Monitor (GEM; funded by H2020) takes advantage of the large volumes of available EO and non-EO data to establish economically viable continuous monitoring of the Earth, driven by the dynamic transition between "strip mode" and "spot mode" monitoring. GEM's approach is based on a drill-down mechanism: fast (and cheap) global processing at low spatial resolution finds areas of interest (AOIs), where it triggers spot monitoring with (appropriately) high spatial resolution data and more elaborate machine learning (ML) models. Such processes can run continuously on a monthly, weekly, or even daily basis, provided they work in a sustainable way, adding more value than their cost, at least on a continental if not global scale, and are able to automatically improve accuracy and detect changes as they occur.
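The drill-down mechanism can be sketched in a few lines (a toy with invented thresholds and array shapes, not GEM's actual pipeline): a cheap low-resolution change screen selects the coarse cells where expensive high-resolution processing is then triggered.

```python
import numpy as np

def drill_down(coarse_prev, coarse_now, fine_now, block, threshold):
    """'Strip mode': compare two cheap low-res mosaics; 'spot mode': return
    only the high-res tiles whose coarse cell changed beyond the threshold."""
    change = np.abs(coarse_now - coarse_prev)
    tiles = {}
    for i, j in np.argwhere(change > threshold):
        tiles[(int(i), int(j))] = fine_now[i * block:(i + 1) * block,
                                           j * block:(j + 1) * block]
    return tiles

coarse_prev = np.zeros((2, 2))
coarse_now = np.array([[0.0, 1.0], [0.0, 0.0]])   # one coarse cell changed
fine_now = np.arange(16, dtype=float).reshape(4, 4)
tiles = drill_down(coarse_prev, coarse_now, fine_now, block=2, threshold=0.5)
# only the changed cell's 2x2 high-res tile is handed to the expensive model
```

The economic argument is visible even in the toy: the expensive step runs on one quarter of the scene instead of all of it.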
The GEM consortium is formed by Sinergise (the coordinator), one of the key enablers of the uptake of Copernicus data through its well-known Sentinel Hub services; the European Union Satellite Centre (SatCen), one of the main European institutions in the space and security domain; meteoblue, a first-class weather services provider offering global weather predictions at scales not previously available from other weather services; TomTom, a well-recognized industry leader in location technologies; and the Technische Universität München (TUM), a research institution playing a vital role in Europe's technological leadership. Each partner is in charge of implementing one use case, while TUM has a transversal role supporting the development of AI tools relevant to the different use cases.
Long temporal series over very small areas (e.g., agricultural fields) and large-scale (global) mosaics over shorter time stacks are two orthogonal use cases when striving for efficient EO data retrieval. The concept of adjustable Data Cubes (aDC) addresses both: a service capable of preparing the data the way users need it in their downstream pipelines and applications, in a scalable and cost-effective way. In GEM, Sentinel Hub services aim to address precisely that: covering both (corner) cases of data retrieval from the perspective of a scalable and cost-optimised infrastructure. When coupled with the available data collections, the advantage of adjustable data cubes and analysis-ready data (ARD) processing chains is enormous. Users can delegate the heavy machinery and processing of complex calculations (see e.g., the custom scripts repository [1]) for large-scale (mosaic) processing to the Batch API and feed the results into their own pipelines. The Statistical API, and the upcoming Batch Statistical API, prepare the aDC ARD for the other extreme: fast retrieval of statistical variables (mean, min, max, std, percentiles, histograms) over long time series for AOIs, allowing for, e.g., the development of vegetation index time series for an agricultural parcel.
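The Statistical API itself is an HTTP service; purely to illustrate the kind of ARD output it prepares (the function and field names below are invented for this sketch), the per-timestamp statistics over an AOI reduce to:

```python
import numpy as np

def aoi_statistics(stack, mask, percentiles=(10, 50, 90)):
    """stack: (t, h, w) array of a variable (e.g. NDVI); mask: (h, w) AOI.
    Returns one record of summary statistics per timestamp."""
    records = []
    for t, img in enumerate(stack):
        vals = img[mask]                       # pixels inside the parcel
        rec = {"t": t, "mean": float(vals.mean()),
               "min": float(vals.min()), "max": float(vals.max())}
        for p in percentiles:
            rec[f"p{p}"] = float(np.percentile(vals, p))
        records.append(rec)
    return records

stack = np.array([[[0.1, 0.2], [0.3, 0.4]],
                  [[0.5, 0.6], [0.7, 0.8]]])   # two dates, 2x2 scene
mask = np.array([[True, True], [True, False]]) # 3-pixel parcel
stats = aoi_statistics(stack, mask)
# stats[0]["mean"] is the parcel mean at the first timestamp
```

Shipping only such compact records instead of pixel stacks is what makes parcel-level time-series retrieval fast and cheap.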
The EO industry seems to be evolving into two distinct branches: "we provide/sell data and data products" and "we provide a platform where users can build their own bespoke products". The GEM project tries to balance the two, leveraging access to the data through the services and providing users with open-source ways to build their own products using eo-learn [2].
The development of scalable and cost-effective solutions is being tested on several use cases. The built-up area use case identifies new built-up areas using the drill-down method. It exploits the global mosaic ARD cube of Sentinel-2 data at 120 m resolution as a starting point and, after fast detection of built-up areas at that resolution, runs the process at 10 m resolution to classify artificial surfaces and to detect changes. At that point, very high-resolution imagery is used to detect buildings. The Conflict Pre-Warning (CPW) Map use case will provide a new security product to support decision-making. It analyses correlations of global climate change and environmental issues with human activity patterns, in support of guaranteeing the security of citizens. The automatic crop identification use case uses a combination of EO and weather data to enable automatic identification of crop growth stages. It supports operational decisions when managing crops and the quantitative monitoring of actual vs. planned or reported land use (production forecast). The map-making support use case will integrate land cover services to perform fully automated and repeatable global land cover mapping at small-to-mid scale and an optimised land cover map at large scale (change detection functionality).
Within all use cases we make use of the big data functionalities of Sentinel Hub for two purposes: firstly, to showcase the increased performance, cost-effectiveness and scalability of the services and framework for continuous monitoring using the drill-down mechanism; and secondly, to demonstrate the adoption of use-case results for decision making within industrial (e.g., map making), societal (e.g., conflict pre-warning maps) and other domains (e.g., crop identification for the Common Agricultural Policy).
In this talk we will provide an overview of the tools and use cases developed within GEM, showcasing the big data capabilities of the services and their integration with eo-learn.
Earth observation (EO) data cubes have removed many obstacles to accessing data and deploying algorithms at scale. Still, developing algorithms or training machine-learning models is time-consuming, usually limited in application scope, and requires users to have many skills, including EO- and technology-related ones. By integrating semantically enriched EO data cubes (i.e., semantic EO data cubes) and graphical semantic querying, we aim to remove this burden from users.
A semantic EO data cube is an EO data cube where, for each observation, at least one nominal (i.e., categorical) interpretation is available and can be queried in the same instance. Such an interpretation can be a general-purpose, user- and application-independent information layer derived as spectral categories from the reflectance values of optical EO images. The spectral categories can be considered colour properties of land cover classes (i.e., semantic entities) and are only the first, initial step towards a scene classification map, whose classes require adding more information, e.g., temporal variation, texture, or topographic features. A graphical web application specifically designed for semantic analysis of EO data therefore allows encoding a-priori knowledge using transferrable, replicable, and explainable semantic models to produce information in a convergence-of-evidence approach.
At the core of our approach are semantic enrichment to generate data-derived spectral categories, scalable cloud-based technology for data management, graphical semantic models to formulate rule-based queries as close to domain language as possible, and an inference engine capable of processing semantic queries by translating semantic models into execution steps.
We implemented our architecture as a scalable infrastructure for spatio-temporal semantic analyses of Sentinel-2 for Austria. The semantic enrichment is conducted using the SIAM software (Satellite Image Automated Mapper) in our implementation. SIAM outputs information layers based on a knowledge-based and physical-model-based decision tree that can be executed fully automated, is applicable worldwide and does not require any samples. The input is any multispectral EO image that was calibrated at least to top-of-atmosphere reflectance. Users can formulate semantic concepts using the spectral categories as colour information in a graphical convergence-of-evidence approach. The semantic EO data cube provides access to the images and information layers and is based on a scalable, containerised instantiation of the Open Data Cube (ODC). The ODC provides the Python application programming interface (API) that is used by the inference engine to obtain the data to conduct inferences by applying the user-defined semantic models.
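SIAM's actual rule set is far richer, but the flavour of a knowledge-based, sample-free decision tree mapping reflectance to spectral categories can be sketched with invented thresholds and labels (an illustration only, not SIAM's rules):

```python
def spectral_category(red, nir, swir):
    """Toy per-pixel decision tree mapping TOA reflectance to a colour-like
    spectral category (thresholds and names are illustrative, not SIAM's)."""
    ndvi = (nir - red) / (nir + red + 1e-9)
    if nir < 0.1 and ndvi < 0.0:
        return "water-like"
    if ndvi > 0.4:
        return "strong vegetation"
    if ndvi > 0.2:
        return "weak vegetation"
    if swir > 0.3:
        return "bright bare soil"
    return "other"

# Clear water: low NIR and negative NDVI; dense canopy: high NDVI.
categories = [spectral_category(0.05, 0.02, 0.01),
              spectral_category(0.05, 0.50, 0.20)]
```

Because such rules are fixed and physically motivated, they need no training samples and can run fully automated on any calibrated multispectral image, which is what makes the enrichment step scalable.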
Our infrastructure facilitates analyses and queries, including semantic content-based image retrieval (a method to filter images based on their content), on-demand parcel- and location-based analysis using semantic categories and optionally excluding clouds, or composites with custom best-pixel-selection (e.g., cloud-free, (non-)vegetated, (non-)flooded) in user-defined area-of-interest and time intervals.
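A custom best-pixel-selection composite then amounts to keeping, per pixel, an observation whose semantic category satisfies the user's rule. A hedged sketch (first qualifying observation in time; the category labels and array layout are invented, not the ODC API):

```python
import numpy as np

def best_pixel_composite(values, categories, preferred):
    """values, categories: (t, h, w) stacks; keep, per pixel, the first
    observation in time whose category is in `preferred` (NaN if none)."""
    out = np.full(values.shape[1:], np.nan)
    for vals_t, cats_t in zip(values, categories):
        ok = np.isin(cats_t, list(preferred)) & np.isnan(out)
        out[ok] = vals_t[ok]
    return out

values = np.array([[[0.1, 0.2]],
                   [[0.3, 0.4]]])              # two dates, 1x2 pixels
categories = np.array([[["cloud", "clear"]],
                       [["clear", "clear"]]])
composite = best_pixel_composite(values, categories, {"clear"})
# pixel 0 is cloudy on the first date, so its value comes from the second
```

Swapping the `preferred` set (e.g. to vegetated or flooded categories) yields the other custom composites mentioned above without changing the mechanism.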
The amount of available EO data is constantly growing. Considering only the Copernicus missions, the data volume had reached more than 30 PB and the number of individual products more than 45 million by 2021 (source: https://scihub.copernicus.eu). Processing this volume of EO data requires well-architected software platforms with a primary focus on scalability. Cloud-based environments and technologies are enablers for these kinds of platforms.
EOPaaS is a Platform as a Service for Earth Observation, enabling its users to process data at scale. It is built on top of a Cloud-native microservices architecture that can run on top of a variety of public and private Cloud infrastructures, such as AWS, GCP, Azure, OVH, CREODIAS, ONDA DIAS. The main strengths and peculiarities of EOPaaS are:
- To be Cloud-agnostic: designed to rely on a well-defined set of standard Cloud APIs, EOPaaS can run on any Cloud provider exposing standard interfaces
- To be Data-agnostic, with the possibility to process and publish raster and vectorial data in different formats from different satellites (e.g., imagery and video)
- To be algorithm-agnostic, able to integrate any processor provided its container logic and APIs are available
These features have made the platform able to serve a continuously growing demand for processing capacity; thanks to the cloud-native microservices architecture, the platform also optimises costs and resource usage when demand slows down, leveraging well-known products like Argo Workflows as workflow orchestrator and the Cluster Autoscaler.
EOPaaS currently supports several ESA-funded initiatives addressing challenges that have been identified by different communities.
EOPaaS was initially developed for the Food Security TEP, which builds on the heritage of the Forestry TEP and enhances it with additional functions such as auto-scaling and the capability of performing orchestrated workflows for large data processing. It also provides further development of the user interface, in particular for users accessing via mobile devices. Recently the platform was enhanced by a federation with the AI platform "Hopsworks" to address AI challenges in the food security domain, and its data offer was enlarged with the ECOSTRESS dataset for the scientific community.
In the context of OGEO-REP (Oil and Gas Industry Earth Observation Response Portal), EOPaaS provides a new concept of event-driven processing, enabling bundles of processing services configured together with a few common parameters in response to an event, to support the needs of both onshore and offshore oil spill response users. EOPaaS was enhanced with a new component that provides end users with a common operating picture, jointly analyzing heterogeneous data sources coming from different services.
In the context of the Vantage (Video Exploitation Platform) project, the platform was offered for exchanging data, tools and algorithms around the exploitation and visualization of EO video data generated by Earth-I.
In the BULPP (Parallel Computing using GPUs) project, EOPaaS aims to demonstrate a bulk processing framework based on parallel computing, considering the scalability and portability of the parallel processing infrastructure, the computational cost (focusing on the most computationally demanding processing steps), and variety: different sensor types (optical, SAR) as well as different processing levels (low-level processing vs. complex analytics).
EOPaaS also supports projects for private companies in the Energy and Manufacturing sectors and keeps evolving towards the EO Exploitation Platform Common Architecture guidelines (https://eoepca.github.io/), already supporting standard OGC APIs (OpenSearch, WPS 2.0).
The objective of the presentation will be to describe the different challenges that the platform has faced during the different projects and how these have been addressed in order to achieve end-user goals.
Every two years the Satellite Needs Working Group (SNWG, an initiative of the U.S. Group on Earth Observations, USGEO, mandated by the White House’s Office of Management and Budget) performs a survey among the US federal agencies to identify the most needed remote sensing observations to support their highest-priority activities. NASA supports the SNWG by developing remote sensing products that address data gaps identified by the SNWG. In response to the requirements identified by the SNWG 2018 cycle, the JPL OPERA (Observational Products for End-Users from Remote Sensing Analysis) project has been funded to develop and implement three products: (1) a near-Global Dynamic Surface Water Extent Product from optical and Synthetic Aperture Radar (SAR) data; (2) a near-Global Land Surface Change Product from optical data; and (3) a North America Land Surface Deformation Product from SAR data. The source of the optical data is the harmonized Landsat-8 and Sentinel-2A/B (HLS) satellite products. The source of the radar data comprises Sentinel-1 A/B, NISAR, and SWOT data products. In addition to the three output products identified by SNWG, two intermediate products will also be produced: (1) a North America Land coregistered Single Look Complex (CSLC) stack product for all interferometric radar data and (2) a near-Global land surface Radiometric Terrain Corrected (RTC) product derived from the SAR data. OPERA’s current scope of work provides operational funding until the end of FY 2025, with the various products delivered to and distributed by three NASA Distributed Active Archive Centers (DAACs).
In this presentation we will discuss the planned characteristics of all OPERA products. We will also provide information on processing and product calibration/validation activities, introduce the OPERA Stakeholder Engagement Program, and summarize the timeline of product development, cal/val, and release. Special focus will be put on the optical water and optical disturbance products aimed to be released to the community by March 2023 and September 2023, respectively.
The CODE-DE (Copernicus Data and Exploitation Platform – Germany) cloud is designed specifically to fulfil the usage demands of public authorities. This means a well-structured web presence with a low-complexity entry level, high usability and user friendliness. It involves data browse and view services, but also pre-implemented, on-demand web-based data processing trees that registered users can trigger with a simple mouse click, such as Sentinel-1 interferometry or a biophysical data processing flow for Sentinel-2 (Leaf Area Index, Fractional Vegetation Cover and FAPAR) from SNAP. Other ready-to-use data products are available for the user’s convenience, such as monthly composites of Sentinel-1 and 2 imagery. All Copernicus data for Germany are available locally on the servers in Frankfurt (Germany), with global data access via the CreoDIAS data catalog, access to all Copernicus services and the Copernicus Contributing Missions (CCM).
CODE-DE is a hybrid cloud that allows web access to the data for viewing and download, and at the same time offers data processing facilities via virtual working environments and JupyterLab. Processing data locally in the cloud is a great benefit for a large number of CODE-DE users (“bringing the user to the data”), as there is a general lack of in-house computing power in a number of public authorities. The IT security standards ISO 27001 and C5 are met by CODE-DE, a prerequisite for this user group. The data services include two different data cube concepts, and lately a suite of Graphics Processing Units (GPUs) for Artificial Intelligence (AI) applications, deep learning and computer vision was implemented (Infrastructure-as-a-Service).
CODE-DE is free to use for national public authorities and part of the national Copernicus strategy. The intention of running this platform is also to increase the Copernicus data user uptake, foster the use of Copernicus data and implement downstream services. The usage of the processing facilities is quota based and also open to other users through ESA’s Network of Resources (NoR) via cloud elasticity and dynamic resource allocation.
CGI has recognized that Oil Spill Detection and Monitoring is an essential service for the Oil and Gas industry, and has worked with ESA to develop an easy-to-use cloud-based near-real-time service, with the capability of big EO data analysis.
In recent years, the need for real-time incident satellite imagery has grown within the oil and gas industry. Following the Gulf of Mexico (Deepwater Horizon) oil spill, it became clear that access to near-real-time satellite data, tasked within hours of an incident and used to inform critical decision-making, could have far-reaching impacts for oil & gas operators in their response to oil spill incidents.
However, effective use of satellite imagery to help drive response decisions has faced a unique challenge in the time it takes from collecting the first image of an area of interest, to a revisit pass of the same area of interest (16 days in the case of Landsat). This was found to be exacerbated in locations that are closer to the equator than to the poles.
The solution is therefore access to more satellites, resulting in non-reliance on a single provider to meet standard needs. This demand for better access to mission-critical Earth Observation (EO) data has led to the ‘Expand Demand Oil and Gas’ project, created by CGI and funded by the European Space Agency (ESA).
This project, which started with ESA in 2018, has had two key high-level requirements:
1. To meet specific operational requirements of the oil and gas industry, established by a steering board of leading Oil and Gas companies;
2. To establish generic EO capabilities within the oil and gas industry and showcase the capabilities of the Sentinel satellite missions and the European EO service industry.
The benefit of this project to its end-users is a service delivering relevant near-real-time EO data. This is provided by a dedicated portal, where information relating to a specific spill incident is gathered and presented in a clear and systematic way to provide a common operating picture to the different stakeholders and support the decision-making process. The information provided covers:
a) A timeline of available products from a range of providers.
b) Predictions of future availability of products.
c) Actual products wherever possible.
d) Derived services such as oil spill extent mapping.
To meet this challenge, the Oil and Gas Industry Earth Observation Response Platform (OGEO-REP) has been developed. This platform assists oil spill responses by gathering, processing and displaying a wide range of relevant EO data, including:
a) Satellite data products from a wide range of sources (free and commercial)
b) Predicted acquisitions relevant to the incident
c) Derived products, e.g. Oil spill extent mapping, processed as a hosted service
d) Contextual background information, such as asset locations
The platform solves a clear and present need by providing a one-stop shop for all EO data relevant to a spill incident. It provides access to satellite data products (including predictions of timelines for future acquisitions) from a range of providers via an online portal. This data can then be ordered from the platform as standalone images or with the attributed metadata.
The scalability of the platform allows it to process large amounts of data in a spill event, allowing for the inclusion of swath prediction (to identify potential acquisitions of interest), the mapping of a spill event, and running of oil spill drift models to forecast the behaviour of the spill. This is all presented via an intuitive graphical user interface (GUI) or via an API (based on OGC standards), for integration into customer business processes.
Framed within the effort to improve our understanding of aboveground terrestrial carbon dynamics from EO data, the development of the Multi-Mission Algorithm and Analysis Platform (MAAP) aims to foster scientific research. The MAAP provides a framework to facilitate the exploitation, analysis, sharing and visualization of massive Earth Observation (EO) datasets and high-level products.
ESA MAAP is an ESA funded project, built by Capgemini with Sistema and CGI Italy as sub-contractors.
The MAAP offers a common platform with computing capabilities co-located with data, as well as a set of tools and algorithms developed to support this specific field of research. In addition, the MAAP maximises the exploitation of EO data from different missions: the ESA BIOMASS mission and the NASA GEDI and NISAR missions. Supporting scientific research and collaboration, the MAAP addresses crucial issues related to increased data rates and reinforces open data policies.
The MAAP presents a set of functions to deal with EO sciences missions and meet their scientific community requirements. This platform:
• Facilitates the discovery, access, visualization and analysis of various sets of EO data, from both ESA and NASA, through a catalogue that offers centralized and standardized access to various EO datasets, such as ESA and NASA EO missions, in situ measurements or airborne campaign data, whether hosted on the platform or not. Data access is complemented by an advanced front end for visualizing and analyzing 1D, 2D and 3D datasets.
• Provides a communal code/algorithm development platform with processing resources for algorithm developers and scientists related to ESA and NASA. The MAAP provides registered users with a complete cloud-native Eclipse/Jupyter environment with a GitLab code repository and continuous integration capabilities. Working on the MAAP, users are provided with cloud storage and computing resources, allowing rapid benchmarking of processing algorithms, as well as the creation of standardized, customizable development environments, making it easy to set up a set of workspaces for a dedicated event such as a training course. The MAAP cloud-native solution is scalable and cost-effective, adapting to the number of users.
• Offers a processing function dedicated to computing at scale, with a fully automated data processing framework for product generation that is able to handle huge amounts of heterogeneous data. COPA, an open-source solution developed by Capgemini, is a generic platform allowing scientific and operational communities to easily integrate and run algorithm workflows with enough performance to carry out global and real-time studies. COPA manages sequential or parallel processing steps and orchestrates them in a distributed and scalable environment. Being an open and generic platform where processing chains can be easily exchanged, COPA’s flexibility allows public and private stakeholders to develop applications across a wide range of space observation topics: forest, biomass, agriculture, natural disasters, emissions, and more.
COPA integrates algorithms as “Docker Images”, thus making it independent from technologies used to implement algorithms.
• Federates scientific community by fostering a spirit of resource and knowledge sharing on common thematic thanks to a set of collaborative tools, such as a forum and a collaborative help section.
The MAAP will help improve science by exposing the official processing algorithms to every user and making them fully transparent. Relying on open data policies, the MAAP enables and eases data and algorithm sharing between platform users from both agencies and with external users.
• Enables interoperability of data and services between ESA and NASA, relying on OGC standards and innovations, with an ongoing roadmap.
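The Docker-based integration pattern used by COPA, as described above, typically amounts to a language-agnostic contract: the container reads inputs and parameters from mounted paths and writes results to an output directory, so the orchestrator can run it regardless of the technology used to implement the algorithm. A minimal sketch of such an entry point in Python (all names and the file layout are illustrative assumptions, not the actual COPA interface):

```python
import json
from pathlib import Path


def run_step(input_dir: str, output_dir: str, params_file: str) -> list:
    """Hypothetical container entry point: read every input file, apply a
    placeholder transformation, and write results using a params-driven
    suffix. A real algorithm container would do the EO processing here."""
    params = json.loads(Path(params_file).read_text())
    suffix = params.get("suffix", "out")
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for src in sorted(Path(input_dir).glob("*.txt")):
        dst = out / f"{src.stem}.{suffix}"
        dst.write_text(src.read_text().upper())  # stand-in for real processing
        written.append(str(dst))
    return written
```

Because the contract is just "mounted paths in, files out", the orchestrator only needs to bind volumes and pass a parameters file, which is what makes the platform independent of the algorithm's implementation language.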
The MAAP platform is based on opensource libraries and components and hosts data and algorithms with open policies.
The main functions which compose the MAAP are:
• Algorithm Development Environment which enables scientific community to develop the algorithms
• Catalogue for data discovery and access including visualization tool
• Processing platform for data processing at scale
• Collaboration and information sharing functions which deal with all aspects related to data, information sharing based on the access rights that could be defined at user or community level
• Monitoring of platform health, system resources, egress traffic, and user activity. Monitoring contributes to keeping a cost-effective approach for scalable ICT resources, capitalizing on economies of scale through infrastructure pooling.
Those functions rely on a dedicated scalable cloud infrastructure and are complemented by other functions such as interoperability of data and services between both NASA and ESA users, security that guarantees the data and services access to allowed users.
The MAAP architecture as depicted below fulfills the user’s requirements.
The MAAP architecture is composed of the following different levels:
• Client side or front end, which hosts the HMI (Human Machine Interface) and the data discovery and visualization system. This architectural layer handles presentation, giving access both to the microservices deployed on the service layer and to the algorithm development platform based on Eclipse Che and Jupyter.
• Back end, which hosts the MAAP Services, including the data access component. This architectural layer implements MAAP microservices exposed as REST APIs and consumed by the presentation layer.
• IaaS (Infrastructure as a Services), this layer provides a standardized way to use IT resources for data storage and computing.
• Security: this level integrates the User Management system, to manage access to the portlets according to the defined access rights, and the API management service.
• Governance: this level addresses tools and methods for MAAP operations including monitoring and supervision.
Based on open-source and cloud-native technologies, the MAAP is deployed on the Orange Public Cloud and could be deployed on any cloud infrastructure that provides a Kubernetes cluster. This choice enables platform auto-scaling. The following diagram presents the physical architecture of the MAAP.
The DYDAS project aimed at developing a collaborative platform for offering data, algorithms, processing and analysis services to a large number of users from different public and private user communities. The platform acts as an e-marketplace enabling transactions for accessing data and added-value services enabled by HPC and based on Big Data technologies, machine learning, AI and advanced data analytics, with the purpose of matching demand and offer between those who own intellectual property on data/methods and those who need or want to exploit them.
In line with the objectives of the CEF 2018 work programme and the CEF-T-5 call, the project contributes to the European data infrastructure by improving the sharing and re-use of public and private data. It enables the use of dynamic data sets such as Earth observation satellite data, in situ data from environmental monitoring networks and vehicle data; promotes HPC-based R&D through an integrated research laboratory and a scientific knowledge and collaboration system; and offers easy-to-use HPC-based services and tools through specialised interfaces designed to provide different user experiences to a wide range of users. A key and differentiating element of the project is the implementation of a geospatial data architecture connected with a dedicated data lake and an HPC processing framework. These components, through the adoption of a geospatial data model and interoperability rules, allow seamless integration and processing of extremely large data sets for innovative use modes. Furthermore, a large ensemble of dataset connectors is available to facilitate the machine-to-machine (M2M) acquisition of several datasets, such as Copernicus satellite data and Copernicus Services products, as well as other kinds of satellite data. In addition, DYDAS promotes the sharing and re-use of public and private data in a secure environment and through innovative monetisation mechanisms. This collaborative platform acts as an e-marketplace for data access but, as added value, is equipped with HPC-enabled services based on Big Data technologies, machine learning, AI and advanced analytics. The project has tested the data analysis capabilities of the platform through the integration and operation of various use cases, whose relevant results will be presented.
With the ongoing proliferation of open-access as well as commercial satellite imagery, from both optical and synthetic aperture radar (SAR) sensors, the number of downstream applications is rapidly growing. Developers of Earth Observation (EO)-based products and services, as well as expert and non-expert users of such tools, thus need access to a cloud computing infrastructure offering interoperable analysis functionality. Here, we present the versatility of such a cloud-based infrastructure called WASDI. WASDI, the web-advanced space development interface, is an online platform where EO experts can develop and deploy applications (apps) and end-users can employ them to process satellite images on demand to generate value-added products, services, and solutions.
The idea is very simple: turn EO data into actionable information for as many end-user segments as possible, while leaving the end-user in control of execution. This is implemented in WASDI by integrating a robust online cloud computing infrastructure, interoperable machine-to-machine use, and scientifically complex EO algorithms with an easy-to-use developer and user environment platform online.
This setup is powerful since it allows EO experts and EO application users to be close to each other by using the same platform but operate in an environment of their own choosing. On the one hand, experts can develop an EO application using their development environment of choice and the programming language they prefer, control the cloud behavior with their code, and then just simply drag and drop it onto WASDI to deploy it in the cloud for free, with the aim to scale up using the offered cloud computing and marketplace services. On the other hand, end-users can use these technically sophisticated applications to transform EO data into actionable products, services, and solutions, with the click of a button and in a few very simple steps.
The WASDI marketplace is directly comparable to the popular smartphone app stores and offers a growing body of free-to-use and paid applications, ranging from basic remote sensing indices, such as the Normalized Difference Vegetation Index (NDVI), to more complex applications, such as burnt area mapping or flood hazard mapping. The marketplace is being enriched with new applications to address as many end-user needs as possible, so that the true power of EO can be unlocked as quickly as possible and in a democratized manner.
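For reference, the simplest class of such marketplace applications, a band index like NDVI, reduces to per-pixel arithmetic on two bands. A minimal NumPy sketch (illustrative only, not WASDI's actual implementation):

```python
import numpy as np


def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red),
    returning 0 where both bands are zero to avoid division by zero."""
    nir = nir.astype("float64")
    red = red.astype("float64")
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out
```

In a platform setting, an app like this would be wrapped with data access and output publishing, so the end-user only selects an area, a date, and clicks run.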
INTRODUCTION
The digital revolution is expanding the added value of the Earth Observation applications. This is offering unprecedented opportunities to research, industry, and institutions to tackle diverse global and regional challenges.
Despite this, data preparation, processing, analysis, and visualization tasks are not free from challenges: they require time, expertise, and IT resources to manage. The EO community increasingly needs flexible EO solutions that relieve it of the complexity of building and maintaining its own data infrastructure, allowing researchers to focus on quality research and achieve their goals faster. In this regard, the ESA Research and Service Support (RSS) service [1] paved the way by successfully implementing the “bring user to data” paradigm and demonstrating the concepts of Virtual Lab for Education [2], Virtual Research Environment [3] and Data Valorisation Environment [4].
To overcome the more and more complex challenges emerging in the EO domain, we present EarthConsole® (www.earthconsole.eu), a cloud-based platform inspired by the strengths of the RSS model whose objective is to facilitate the exploitation of EO data. To achieve this goal, EarthConsole® provides a unified solution to access data, develop and test algorithms, run scalable processing campaigns and analyze their results.
EARTHCONSOLE®
EarthConsole® is a set of three complementary support services: G-BOX (Integrated Algorithm Development and Execution Environment), P-PRO (Parallel Processing Service) and I-APP (Application Integration Service).
G-BOX offers a cloud-deployed virtual machine (VM) suitable for algorithm development and testing, based on two Linux OS distribution templates. The VM allows fast access to the datasets offered by the Data and Information Access Services (DIAS), relieving users of the costly remote download of data.
The main goal of G-BOX is to provide EO data users with the needed resource flexibility to easily perform their own processing. Therefore, the cloud virtual machine can be accessed either via command line, through a remote desktop client, or via web browser to create and edit Jupyter Notebooks. The virtual machine comes with pre-installed packages and software supporting EO data exploitation to reduce the configuration burden on users: SNAP, QGIS, R, BRAT, and JupyterHub for quick data analysis and visualization. Additional software can be installed on request.
In addition, G-BOX offers a flexible amount of CPUs, RAM and dedicated storage tailored to users’ requirements, which can be upgraded within the constraints of the cloud infrastructure. The VMs are also available in a multi-user mode to share code and data in a common development environment while keeping individual workspaces. The VM infrastructure also enables the configuration of tens of virtual machines with the same settings, which makes it an ideal solution for training purposes.
P-PRO enables users to perform scalable processing campaigns on huge datasets. It offers a High-Performance Computing environment optimized for the execution of EO data-intensive applications, based on cloud computing and distributed systems technologies. Following a set of guidelines, custom EO algorithms can be integrated into the platform, which will automatically parallelize and distribute the application’s batch processing operations among the computing cluster resources.
P-PRO relies on a centralized orchestrator, the parallelizer engine, to partition the application input data and operations into smaller tasks and to distribute them over a set of computing nodes where they will be executed in parallel. Once all the tasks have been completed, the orchestrator gathers the results of the parallel computation and makes them available to the user. The technology adopted for the Computing Cluster management is based on the SLURM Workload Manager [5], which is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters.
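The scatter/gather pattern described above can be sketched in plain Python, using a local thread pool as a stand-in for SLURM job submission (function names and the chunking scheme are illustrative assumptions, not the P-PRO implementation):

```python
from concurrent.futures import ThreadPoolExecutor


def split_into_tasks(inputs, n_tasks):
    """Partition the input list into up to n_tasks roughly equal chunks."""
    k, r = divmod(len(inputs), n_tasks)
    chunks, start = [], 0
    for i in range(n_tasks):
        size = k + (1 if i < r else 0)
        if size:
            chunks.append(inputs[start:start + size])
        start += size
    return chunks


def process_chunk(chunk):
    """Placeholder worker: a real computing node would run the EO algorithm."""
    return [x * x for x in chunk]


def run_campaign(inputs, n_tasks=4):
    """Scatter chunks over workers, then gather results in input order."""
    with ThreadPoolExecutor(max_workers=n_tasks) as pool:
        results = list(pool.map(process_chunk, split_into_tasks(inputs, n_tasks)))
    return [y for chunk in results for y in chunk]
```

In the real service, each chunk would become a SLURM job on a cluster node rather than a local thread, but the orchestration logic (partition, submit, wait, gather) follows the same shape.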
The cluster resources are managed by a Cloud Scaling Engine capable of automatically adapting the amount of computing resources based on the workload of the cluster. In this way the platform ensures that the parallelization effort is always matched with an adequate amount of computing power.
The computing cluster can be deployed on any OpenStack-based cloud platform. To lower the barrier towards EO data access and storage, P-PRO resources are deployed on the DIAS infrastructure, which provides access to their EO data catalogue locally from their cloud resources.
With the parallelization of the computation, the flexible amount of cloud resources, the local access to EO data and the ease of custom application integration, P-PRO seamlessly brings its users close to the optimized computing power required to execute their processing operations and to the features needed to speed up their work. The benefits accrue both on the user side and on the operations side, since the automatic scalability of the cluster allows for optimal resource utilization, resulting in time and cost savings from the platform administration perspective as well [6].
Work is ongoing to finalize the implementation of the on-demand version of the P-PRO service. With this new flexible delivery model, users will be able to configure, launch and monitor small processing tasks through a user-friendly web interface. The P-PRO on-demand portal has already been opened to a selected group of beta testers and it is planned to be launched officially by the end of 2021.
The I-APP service offers expert support to users to integrate their EO application or a third-party application in the P-PRO environment for processing. EarthConsole® operators will support users who do not have the necessary time, expertise, or IT resources knowledge to adapt their applications for integration into the EarthConsole® Parallel Processing (P-PRO) environment.
Currently, several processors are already available in the P-PRO Catalogue of Processors: SARvatore for CryoSat-2, SARvatore for Sentinel-3, SARINvatore for CryoSat-2, ALES+ SAR Retracker (TUM), TUDaBO SAR-RDSAR (U. Bonn), Fully Focused SAR for CryoSat-2 (Aresys), Sentinel-1 Amplitude Change (SNAC), Coherence and Intensity change for Sentinel-1 (COIN), and Sen2Cor for Sentinel-2. Additional processors are being integrated and will be available shortly [7].
EarthConsole® is also available via the ESA EO Network of Resources (NoR) as a Platform Service: customised VMs (G-BOX) and ad-hoc processing services (P-PRO) are offered to scientific, educational, and pre-commercial users.
CONCLUSIONS AND FUTURE DEVELOPMENTS
The need for flexible EO data exploitation solutions, able to reduce the burden of infrastructure management on users, is a shared issue within the EO community today. Progressive Systems has implemented EarthConsole®, a cloud-based platform characterized by a high degree of flexibility, high-speed access to EO data, strong computing power and ready-to-use tools for EO data analysis and visualization, enabling users to shorten, and in some cases eliminate, the time dedicated to data preparation, processing, visualization, and infrastructure management.
Feedback for the improvement of EarthConsole® is currently being collected from selected groups of stakeholders to deliver services which are more and more centered on users’ needs.
In the frame of the Quality Assurance Framework for EO project, a user group is currently assessing the validity of EarthConsole® to fully exploit its potential in response to the Cal/Val community's needs.
P-PRO ON DEMAND is in beta testing phase by a group of stakeholders from the ESA altimetry community.
In addition, EarthConsole® operators are currently developing EarthConsole® Virtual Labs: virtual spaces designed around the needs of specific communities of EO data users, offering customized EarthConsole® services and tools to network and share information with colleagues working in the same domain, all from a single environment. The first of these labs will be dedicated to the altimetry community and is currently being developed under a programme of, and funded by, ESA.
The work presented is just the beginning of the evolution of EarthConsole®, whose services will be enriched over time based on customers’ emerging requirements.
REFERENCES
[1] P. G. Marchetti, G. Rivolta, S. D’Elia, J. Farres, G. Mason, N. Gobron (2012) “A Model for the Scientific Exploitation of Earth Observation Missions: The ESA Research and Service Support”, IEEE Geoscience and Remote Sensing (162): 10-18, 2012
[2] F.S. Marzano, M. Montopoli, S. Leonardi, G. Rivolta (2016) “Data Science Master’s Degree at Sapienza University of Rome: Tightening the Links to Space Science”, oral presentation, BiDS’16, Santa Cruz de Tenerife
[3] P. Sacramento, G. Rivolta, J. Van Bemmelen (2018) “ESA’s Research and Service Support as a Virtual Research Environment for Heritage Mission data valorisation”, PV2018: Proceedings of the conference
[4] P. Sacramento, G. Rivolta, J. Van Bemmelen (2019) “Towards a Heritage Mission Valorisation Environment”, Poster 26, BiDS’19, Munich
[5] https://slurm.schedmd.com
[6] R. Pascale, R. Cuccu, G. Sabatino, G. Rivolta, M. Iesué (2020). “P-PRO – The EarthConsole Parallel Processing Service” Poster presentation - Presented at the ESA EO Φ-WEEK 2020.
[7] M. Iesué, C. Orrù, G. Sabatino, R. Pascale, R. Cuccu, G. Rivolta (2021) “EarthConsole: a cloud-based platform for earth observation data analysis” Poster presentation - Presented at the ESA EO Φ-WEEK 2021.
Cloud solutions for Earth Observation are gaining adoption, successfully exploiting the scaling potential of architectures based on standards such as those of the Open Geospatial Consortium (OGC). These solutions exist as standalone and federated implementations. In federations, we now see growing efficiency in integrating data products shared by various data providers. Maturing solutions not only streamline daily routines but also allow secure data exchange for new experiments and scenarios. With more and more technologies moving to the cloud, access, replication, and handling of geospatial data reach a new level. Yet we are still at the beginning of this transition from heavy SOAP-based web services towards lightweight and cost-efficient modern service portfolios. Various research aspects need to be addressed to further enhance and establish interoperability within the growing landscape of data offerings and processing capacities. Collaborative research efforts are best suited to address these interoperability challenges, as they natively include a variety of players. The OGC Testbed series belongs to this type of research and development effort. Conducted yearly, these testbeds bring a broad community of geospatial experts together with sponsoring organizations to address current needs and to advance infrastructure developments that enable excellent thematic research and business opportunities. OGC Testbed-17, the latest Testbed, addressed several essential research topics in the context of EO data downstream and processing standardisation, including data security, new data formats, and unified data cube APIs that were exercised to propose technological advancements.
Firstly, organizations have invested significant resources in Geospatial Data Cubes (GDCs): infrastructure intended to support the storage and use of multidimensional geospatial data in a structured way. GDCs already address specific needs with the solutions built upon them. However, challenges remain in enabling broad access, limiting their ability to support widespread use. OGC Testbed-17 explored Web APIs that can expose GDCs in a uniform, standardized way from multiple environments (e.g., other GDCs, platforms, various file systems). Support for discovery, access, sharing, and use of GDC data should enable workflows involving distributed computing resources and algorithms.
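As an illustration of what such a uniform data cube Web API looks like from the client side, the sketch below builds a coverage subset request, loosely following the subsetting syntax explored in the OGC API family (the endpoint layout and parameter values are illustrative assumptions, not a normative interface):

```python
from urllib.parse import urlencode


def coverage_subset_url(base, collection, bbox, time_range):
    """Build a data-cube subset request URL: spatial trimming via a
    subset parameter and temporal trimming via a datetime interval."""
    lon_min, lat_min, lon_max, lat_max = bbox
    params = {
        "subset": f"Lat({lat_min}:{lat_max}),Lon({lon_min}:{lon_max})",
        "datetime": f"{time_range[0]}/{time_range[1]}",
    }
    return f"{base}/collections/{collection}/coverage?" + urlencode(params)
```

The point of standardizing such an interface is that the same client code can address any conformant GDC, whether it sits behind another cube, a platform, or a plain file store.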
Secondly, as organizations move to the cloud and data sharing between clouds and cloud services grows, it is essential to incorporate Data Centric Security (DCS) into the design of the emerging cloud infrastructures. This enables the use of cloud computing even for sensitive geospatial data sets. The applicability of Zero Trust through a DCS approach was exercised on vector and binary geospatial data sets (maps, tiles, GeoPackage containers) and OGC APIs. The results have shown the potential of the standards and their extensions.
To boost performance, data transfer reduction between processing hubs is essential. In this context, Testbed-17 explored the usage and value of Cloud Optimized GeoTIFF and Zarr for raster data. In the case of remote sensing big data, slicing the data and efficiently serving chunks is a key to efficiency. For the planned OGC standardisation process, the work focused on Zarr relevance for EO data, taking other OGC standards, business values, and synergy with easy-to-use remote sensing data catalogues such as SpatioTemporal Asset Catalogs (STAC, https://stacspec.org/) and OGC APIs (Features, Tiles, Maps) into account.
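The chunking idea behind both COG and Zarr can be sketched in a few lines: given a requested pixel window, a chunk-aware client only fetches the tiles or chunks the window intersects, however large the full scene is. The function below is an illustrative sketch; the 512-pixel chunk size is an arbitrary assumption, not a standard value.

```python
import math

def chunks_for_window(row0, col0, height, width, chunk=512):
    """Return (row, col) indices of the fixed-size chunks (COG tiles or
    Zarr chunks) that intersect a requested pixel window -- the only
    pieces a client actually has to fetch over the network."""
    rows = range(row0 // chunk, math.ceil((row0 + height) / chunk))
    cols = range(col0 // chunk, math.ceil((col0 + width) / chunk))
    return [(r, c) for r in rows for c in cols]

# A 600x600 window at the image origin touches only four chunks.
print(chunks_for_window(0, 0, 600, 600))  # [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Serving each chunk as an independent HTTP range request (COG) or object (Zarr) is what makes this slicing efficient.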
This presentation will provide an overview of key results from Testbed-17, show the paths towards cloud-native processing, and outline important experiences and lessons learned.
The Charter is a worldwide collaboration, through which satellite data are made available for the benefit of disaster management. By combining Earth observation assets from different space agencies, the Charter allows resources and expertise to be coordinated for rapid response to major disaster situations; thereby helping civil protection authorities and the international humanitarian community.
Terradue was selected to design and operate a new online service to visualize and manipulate satellite acquisitions at full resolution. After several months of development, a new portal named the ESA Charter Mapper was officially opened in September 2021 to support Charter operations, and in particular the product screening activities. Behind the portal, a cloud-native platform integrates the latest state-of-the-art technologies for seamless visualization and manipulation of satellite imagery directly from a web browser.
Product screening requires the production of high-resolution RGB composites from a constellation of some forty satellites from fifteen Space Agencies, with many different metadata and data formats. This first challenge is tackled by harmonizing metadata across all missions. During the development phase, a promising common metadata language, the SpatioTemporal Asset Catalog (STAC), was emerging, and the chance was taken to use it extensively. STAC provides an abstraction layer and reduces EO data heterogeneity by defining a synthetic interface to data built around the concept of assets, supporting cloud media types (e.g. GeoTIFF, TIFF, binary, JPEG 2000) in single- or multi-band enclosures. The Charter Mapper development contributed directly to the new standard by providing useful extensions to manage processing lineage and raster information.
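The asset-centric abstraction can be illustrated with a trimmed-down STAC Item; all identifiers, hrefs and property values below are hypothetical, not actual Charter Mapper entries.

```python
def summarize_assets(item: dict) -> dict:
    """Map asset keys to their media types, as a screening UI might do."""
    return {name: a["type"] for name, a in item["assets"].items()}

# Hypothetical, heavily trimmed STAC Item: one acquisition, two band assets.
item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "example-acquisition-001",
    "properties": {"datetime": "2021-09-01T10:30:00Z"},
    "assets": {
        "red": {"href": "s3://bucket/scene/B04.tif",
                "type": "image/tiff; application=geotiff; profile=cloud-optimized"},
        "nir": {"href": "s3://bucket/scene/B08.tif",
                "type": "image/tiff; application=geotiff; profile=cloud-optimized"},
    },
}
print(sorted(summarize_assets(item)))  # ['nir', 'red']
```

Whatever the source mission's native format, a client only needs to understand this one interface.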
At this stage, multi-mission product screening is the main use case of the Charter Mapper. It requires pre-processing the satellite acquisitions into Analysis Ready Datasets, allowing homogeneous visualization and ready-to-process data across the different satellite imagery, and enabling downstream value-adding services such as flood or burned area delineation and intensity, active fire detection, lava flow identification, ground change detection, or interferometry. Basically, each remote sensing scene is calibrated to a common processing level: optical datasets are transformed to top-of-atmosphere or surface reflectance values for each spectral band, and SAR datasets are converted to Sigma0 backscatter values in all available polarizations. The resulting rasters are then saved as Cloud Optimised GeoTIFFs (COGs), a format that eases remote access to data chunks. Combined with S3 object storage, COGs offer fully ranged data access, allowing COG-aware software to stream only the portion of data it needs, improving processing times and enabling real-time workflows previously not possible.
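The two calibration conventions can be sketched as follows. The 1/10000 quantification factor follows the Sentinel-2 L1C convention, and other missions apply their own gains and offsets, so treat this as an illustration rather than the Charter Mapper's actual processors.

```python
import numpy as np

def toa_reflectance(dn, quantification=10000.0):
    """Scale optical digital numbers to top-of-atmosphere reflectance.
    The divisor follows the Sentinel-2 L1C convention; other missions
    define their own radiometric gains and offsets."""
    return np.asarray(dn, dtype="float64") / quantification

def sigma0_db(sigma0_linear):
    """Convert SAR Sigma0 backscatter from linear power to decibels."""
    return 10.0 * np.log10(np.asarray(sigma0_linear, dtype="float64"))

print(toa_reflectance([0, 5000, 10000]))  # reflectances 0.0, 0.5, 1.0
print(sigma0_db([1.0, 0.1]))              # roughly 0 dB and -10 dB
```

Once every scene is expressed in these common physical units, a single visualization and processing chain can serve all forty missions.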
Disasters are unfortunately unpredictable, and Charter activations occur as natural hazards strike. Satellite acquisitions therefore flow into the system in a very fluctuating way, as does the usage of the service by project managers. All system components must be able to scale with the processing load at every stage: download, harvesting, pre-processing, and synthetic/systematic product generation. Every software unit is packaged as a container and deployed on a Kubernetes infrastructure managed by Helm charts and supervised by a continuous deployment tool. These combined technologies ensure reliable and efficient maintenance and operation of the whole system; indeed, software upgrades are performed without downtime and transparently for the user.
This project demonstrates the maturity of the previously listed technologies to implement a comprehensive cloud native platform with optimized data access and processing for an operational usage of satellite imagery. This is a fundamental basis for rapid response in the context of the Disasters Charter.
Abstract
Pursuing Copernicus data and service exploitation requires innovative application of ICT solutions to address the "Big Data" issues involved in processing both EO and non-EO data (such as meteorological data and data stemming from social media), characterized by the four Vs of Big Data: volume, velocity, variety, veracity.
The last five years have seen a rapid succession of technologies supporting the development of cloud-based solutions, numerous something-as-a-service offerings, and architectures to guide and order our approaches, with the goal of achieving consensus on how we implement platforms.
In this paper we present the evolution of the use of state-of-the-art architectures and technologies in the Automated Service Builder (ASB) framework, an application-agnostic infrastructure-as-a-service for implementing complex processing chains over globally distributed processing and data resources, designed to meet the EO paradigm change ("bring the user/algorithm to the data"). The evolution of ASB has been driven by three key projects: the Proba-V Mission Exploitation Platform - Third Party Services (MEP-TPS) project, completed in 2019; the H2020 project EOPEN (Open interoperable platform for unified access and analysis of Earth Observation data, https://eopen-project.eu/), completed in 2020; and the ongoing ESA project EOEPCA (EO Exploitation Platform Common Architecture).
The ASB framework (https://www.spaceapplications.com/products/automated-service-builder-asb/) provides a set of functionalities needed to develop a service supporting the systematic execution of complex processes. It is fully Web based, providing tools to import and containerize user-defined algorithms, graphically edit workflow definitions by integrating built-in and imported processes, execute workflows with user-defined parameters, and access the results in user personal datastores. Generic workflow tasks are available to ingest the data in various kinds of databases and services. The workflow editor uses state-of-the-art solutions such as a customizable, ontology-based validation of workflow definitions.
Generic and flexible orchestration
The platform's generic and flexible orchestration capabilities mean that workflow tasks are orchestrated by the built-in Workflow Engine independently of the location of the actual executable files and of the underlying programming languages and related technologies. Within a given workflow, each task may be deployed and executed on any of the platforms supported by ASB, which the user selects using the workflow editor.
This brings the convenience of executing specific processes on the platform where the data is located.
Collaborative working
In the MEP-TPS project, a need for collaborative working was introduced: the actions users may perform on their resources (including processes and workflows) are assigned through workspaces. A decentralised management concept was developed, allowing individual users to decide what they share and with whom. A shared process may be integrated by other users in their own workflows; a shared workflow may be selected and executed by users that have the appropriate role in one of the workflow's workspaces.
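A toy model of workspace-based sharing might look as follows; the class, role names and checks are illustrative assumptions, not the actual MEP-TPS implementation.

```python
class Workspace:
    """Toy workspace: owners share resources into it, and members hold roles."""

    def __init__(self):
        self.roles = {}      # user -> role ("viewer" or "executor")
        self.shared = set()  # resources individual owners chose to share here

    def grant(self, user, role):
        self.roles[user] = role

    def share(self, resource):
        self.shared.add(resource)

    def may_execute(self, user, resource):
        # Decentralised: the owner decided to share; execution additionally
        # requires the appropriate role in this workspace.
        return resource in self.shared and self.roles.get(user) == "executor"

ws = Workspace()
ws.grant("alice", "executor")
ws.grant("bob", "viewer")
ws.share("flood-mapping-workflow")
print(ws.may_execute("alice", "flood-mapping-workflow"))  # True
print(ws.may_execute("bob", "flood-mapping-workflow"))    # False
```

The point of the design is that no central administrator mediates sharing: each owner's `share` decision and each workspace's role assignments jointly determine access.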
Scalable solutions
The EOPEN project required a scalable solution exploiting back offices that provide services to downstream service providers and consultancy organizations performing Big Data analytics. EOPEN (https://eopen-project.eu/) includes an easy-to-use environment to implement user services and capabilities to fuse Sentinel data with multiple, heterogeneous and big data sources; additionally, the involvement of mature ICT solutions in the Earth Observation sector addresses major challenges in effectively handling and disseminating Copernicus-related information to the wider user community.
Cloud native features
In EOEPCA, use of the common building blocks requires cloud-native features and services such as S3 services, dynamic allocation of processing resources, and the ability to execute user-defined functions in an environment controlled by the infrastructure provider. Such services are typically conceived and realized by one cloud provider; if their popularity grows and they become key features, variations appear among competitors. The ideal situation for customers is when the specification of a service is made public and adopted by other providers, as this makes it possible to build portable applications that are not locked into a single environment.
A feature supported by only a single cloud provider will not be added to ASB unless there is a strong need for it. For EOEPCA, ASB now uses S3 buckets to provide both personal and shared user datastores, because this technology has been standardised and widely adopted.
Container orchestration tools
ASB had not been designed to be deployed in a Kubernetes cluster and thus does not have this dependency. However, because Kubernetes is now widespread, support for it is being added to the framework as an alternative to the originally selected technologies based on Mesos and Marathon. This illustrates the decisions and trade-offs needed when adopting new technologies: because Kubernetes is not able to deploy and execute processes in remote environments, it cannot fully replace Mesos and Marathon.
The container orchestration tools (Mesos with Marathon, and now Kubernetes) allow ASB, and thus also ASB-based platforms such as EOPEN, to seamlessly integrate in any cloud-based environment, including the DIAS platforms. These tools are aware of the available compute nodes and are thus able to automatically balance the load when new process executions are requested. They are also capable of automatically discovering changes in the processing environments, thus providing scalability to the platform. For example, as soon as a new compute node is detected, it is added to the pool of available resources and becomes available for running subsequent processes.
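The load-balancing and node-discovery behaviour described above can be sketched as a least-loaded scheduler; this is a deliberately simplified stand-in for what Mesos/Marathon or Kubernetes actually do, not ASB code.

```python
class Pool:
    """Toy compute pool: new executions go to the least-loaded node, and a
    newly discovered node immediately joins the pool of candidates."""

    def __init__(self):
        self.load = {}  # node -> number of running processes

    def add_node(self, node):
        self.load.setdefault(node, 0)

    def schedule(self, task):
        node = min(self.load, key=self.load.get)  # least-loaded node wins
        self.load[node] += 1
        return node

pool = Pool()
pool.add_node("node-1")
pool.add_node("node-2")
assert pool.schedule("t1") == "node-1"
assert pool.schedule("t2") == "node-2"
pool.add_node("node-3")                # newly detected compute node
assert pool.schedule("t3") == "node-3" # immediately receives work
```

Real orchestrators also weigh CPU, memory and placement constraints, but the scaling effect, idle capacity absorbing new executions as soon as it appears, is the same.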
Conclusion
A framework such as ASB designed to support the development of service platforms needs to stay abreast of technology developments. Existing and new technologies will continue to be investigated and assessed in order to keep the ASB framework at the forefront of state-of-the-art solutions.
A full demonstration of the ASB solution for EOPEN can be provided.
Keywords: Development Platform, Workflow Orchestration, Distributed Data Processing, Visual Analytics, Infrastructure-as-a-Service
Unleash the power of geospatial data with OneAtlas.
Airbus is committed to supporting value creation using satellite-based EO Data and Analytics.
OneAtlas platform helps innovators get started faster by giving them quick and easy access to premium imagery in both streaming and download formats. It is designed to lower risk and encourage experimentation through an intuitive experience.
Access to the latest Pléiades and SPOT satellite imagery offers accurate, up-to-date ‘true’ views of any ground activity across the globe, giving researchers and developers verifiable insights and enabling better-informed decisions.
OneAtlas Data and Analytics has been developed for the future. Today, it takes advantage of the latest imagery and technology available, and for tomorrow, the roadmap in place incorporates groundbreaking engineering, like new satellites, high altitude pseudo-satellites and drones.
Are you ready to unleash the power of geospatial data?
Fresh water is an essential resource that requires close monitoring and a constant preservation effort. The evolution of hydrological bodies’ water levels constitutes a key indicator of the available quantity of fresh water in a given region. The limited extent of the in-situ networks currently deployed has generated a growing interest in using space-borne altimetry, originally designed to precisely track ocean elevations, as a complementary data source to increase the coverage of emerged fresh water stocks and ensure a more global and continuous monitoring of their water surface height (WSH). A great effort has therefore been made over the past decade to improve altimeters’ capability to acquire quality measurements over inland waters at global scale (Biancamaria et al., 2017).
The Open Loop Tracking Command (OLTC) mode, which consists in calibrating the altimeter signal acquisition window with prior information on the overflown hydrological surface height, represents a major evolution of the tracking function. The accuracy of the command directly determines the quality of the received waveforms. This tracking mode is so efficient that it is now the operational mode for the current Sentinel-3 (S3) and Jason-3 (J3) missions as well as for the recently launched Sentinel-6A (S6A) mission.
Over continental surfaces, the commands are derived from a worldwide database of hydrological targets overflown by the altimeters. To ensure a smooth signal acquisition, a 10 m command precision is needed. Therefore, target location and elevation data from hydrology users are key to improving the on-board elevation values and consequently optimizing the altimeters' performance over continental surfaces (Le Gac et al., 2019). The higher the number of precisely defined targets, the more global and efficient the monitoring of worldwide emerged fresh water stocks via altimeters will be.
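As an illustration of how contributed elevations interact with the 10 m command precision, a contribution only warrants changing the on-board table when it moves the elevation beyond that tolerance; the threshold rule below is a hypothetical sketch, not the operational OLTC update criterion.

```python
def needs_update(onboard_elevation_m, contributed_elevation_m, precision_m=10.0):
    """Hypothetical rule: flag an OLTC table update only when a contributed
    elevation differs from the on-board value by more than the command
    precision, otherwise the acquisition window is already well placed."""
    return abs(onboard_elevation_m - contributed_elevation_m) > precision_m

print(needs_update(315.0, 318.5))  # False: within the ~10 m command precision
print(needs_update(315.0, 340.0))  # True: acquisition window likely misplaced
```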
In this context, ESA, CNES and NOVELTIS jointly developed the https://www.altimetry-hydro.eu/ web portal to further optimize the exploitation of altimeters for hydrology applications. This free online platform has three main goals: communicate the OLTC capabilities, share its current contents with hydrology community users, and offer users a convenient way to submit improvement requests. The web portal lets visitors explore directly, smoothly and interactively the on-board tracking command elevation values of the operational Sentinel-3A&B and forthcoming Sentinel-3C&D missions. Two categories of users may access the platform: visitors and contributors. Visitors may display the current elevation commands and find relevant information on the OLTC. They can set the display configuration that best fits their needs: OLTC version for a given altimetry mission, vertical reference of the data (ellipsoid or geoid), and ground track (number and direction) characteristics. Figure 1 shows the multiple layers a visitor can activate (zoomed over western France).
Users may choose to register on the platform and eventually become active contributors to the database by submitting their own target elevation and location information under the satellite ground tracks. These inputs may either update existing targets or create new ones. Contributors can choose to submit their data interactively on the map or to send a CSV file. Once their data are submitted, an email lets them know that the request was correctly processed and will be further analysed before operational integration in the next OLTC tables update.
The objective of this service is to reach the largest possible audience in order to collect accurate data from worldwide users working on different hydrological bodies. Since its creation in February 2018, more than 750 visitors and 80 contributors have registered, and the number of registrations has been rapidly increasing in the past few months. Contributions are still scarce, but updating an existing target or requesting the creation of a new one significantly impacts the altimeters' acquisitions and contributes to the altimetry products value chain for continental hydrology, which ultimately ensures better management of fresh water stocks.
Finally, this presentation will include a demo of the website, and we will present some of the expected future evolutions of this web service.
References:
Biancamaria, S., F. Frappart, A.-S. Leleu, V. Marieu, D. Blumstein, J.-D. Desjonquères, F. Boy, A. Sottolichio, A. Valle-Levinson, Satellite radar altimetry water elevation performance over a 200m wide river: Evaluation over the Garonne river, Adv. Sp. Res. (59), 128–146, January 2017. https://doi.org/10.1016/j.asr.2016.10.008
Le Gac, S., et al., Benefits of the Open-Loop Tracking Command (OLTC): Extending conventional nadir altimetry to inland waters monitoring, Advances in Space Research, 2019, In Press, https://doi.org/10.1016/j.asr.2019.10.031
Australia’s national science and geoscience agencies, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) and Geoscience Australia, are supporting the growth and implementation of Earth observation-based products and services in South-East Asia.
The ‘Earth Observation for Climate Smart Innovation’ initiative (EOCSI) has built a new regional Earth observation analysis platform powered by CSIRO’s Earth Analytics Science and Innovation hub and Open Data Cube technology. The platform leverages the wealth of open-access Earth observation data and Amazon Web Services Singapore cloud infrastructure. Access is via Jupyter notebooks, with users having their own workspace where they can develop, save and share notebooks and create their own computing clusters with Dask (a Python library for distributed processing) for scalable, flexible and cost-effective analysis, from local to regional scales. Users can easily access pre-indexed data, allowing them to focus on the analysis instead of sourcing and assembling Earth observation data (semi-)manually. The platform is also pre-loaded with data applications developed by Australian scientists and tailored for South-East Asian environments, including, for instance, inland and coastal water quality assessments, land cover classification, and water body mapping. These applications build upon the tools available via Geoscience Australia’s Digital Earth Australia platform.
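The Dask-based analysis pattern can be sketched as follows: a lazy, chunked array stands in for a stack of scenes, and the reduction is split over whatever cluster the user has created (here Dask's default local scheduler). The array contents and sizes are synthetic, not platform data.

```python
import dask.array as da

# Synthetic stand-in for a time stack of satellite scenes: (time, y, x),
# split into chunks that Dask can process in parallel across workers.
stack = da.random.random((20, 1024, 1024), chunks=(5, 256, 256))

mean_map = stack.mean(axis=0)       # lazy: only builds a task graph
patch = mean_map[:4, :4].compute()  # triggers (possibly distributed) execution
print(patch.shape)  # (4, 4)
```

Pointing the same code at a `dask.distributed` cluster instead of the local scheduler is what lets users scale from a notebook to regional analyses without rewriting anything.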
Regional Earth observation collaboration allows us to share infrastructure, data, knowledge, expertise and ideas to address shared challenges. This new platform is being used to engage local government, business and education institutions throughout South-East Asia to take advantage of Earth observation for the development of ‘climate smart’ applications. Through training and business opportunities, we are building new and closer relationships between Australian Earth observation practitioners and South-East Asian counterparts to strengthen regional science relations, support climate resilience, and promote sustainable growth and development.
This presentation will showcase the platform, early adopters and their case studies, plus provide information on how others can engage with this initiative.
Geo Engine is a cloud-ready geospatial analysis platform that provides easy access to geospatial data, processing, interfaces, and visualization. Users can perform interactive analyses in Geo Engine, accessing the system via a browser-based user interface as well as with Jupyter notebooks in Python. In this presentation, we will show the fundamentals of the system and its characteristics, illustrated with examples from previous scientific projects. In addition, we offer an outlook on the integration of Deep Learning into Geo Engine, which utilizes its preprocessing capabilities for remote sensing data, and show possible applications for a wide range of projects.
Geo Engine GmbH is a start-up of computer science and geography researchers at the University of Marburg. They develop Geo Engine as an open-source project that is well-suited for research projects. In addition, several advanced features are provided under a commercial license. Geo Engine is used in various research and infrastructure projects like DFG’s NFDI4BioDiversity, and is already running in production in commercial applications.
The recent hype around data cube access has one underlying goal: harmonized access to remote sensing data of various kinds and from various sources. Researchers want to incorporate multiple datasets in their analyses because such combinations give a broader view on real-world phenomena, and thus lead to new insights that would otherwise not be revealed. The major challenge here is that different data formats, sensor resolutions, coordinate reference systems, and data types make it hard to focus on the actual task without re-engineering data access and data harmonization. Moreover, those engineering problems arise again for every single task. Data cubes solve this by converting all data into a harmonized format and resolution, which is similar to creating data marts in the domain of data warehousing. However, the downside of this approach is that the created format is only well-suited for a specific set of use cases. For instance, a cube specified with a 10 m resolution would downsample data from new sensors with a 1 m resolution, and new use-case scenarios could not benefit from the finer data without changing the cube.
Our approach is to harmonize the data ad-hoc within the analysis workflow rather than upfront. The advantage is that we can address data in their original form, i.e. without losing information or precision. At the same time, Geo Engine automatically harmonizes multiple data sources if required. Moreover, users work with temporal datasets rather than files. In more detail, temporal datasets are an abstraction of the individual files and their location. Geo Engine takes care of loading the correct data for individual points in time and space. Thus, users define workflows by incorporating geospatial time series. For instance, when querying an infrared satellite imagery band in 100 m resolution in North America in 2019, the system knows which files to load or which S3 bucket to connect to. Data harmonization takes place whenever multiple datasets are included in a workflow. A possible downside of this approach is that preparing and indexing data combinations beforehand usually exhibits better performance afterward. We tackle this by employing caching strategies and reusing partial computation results. In practice, this provides performance similar to preprocessed data for subsequent queries or interactive workflows while offering much more flexibility.
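The ad-hoc harmonization idea can be sketched with a toy nearest-neighbour resampler that brings two co-registered rasters onto the finer of their grids only when a workflow actually combines them. Geo Engine's real operators are far more general; shapes and factors here are illustrative.

```python
import numpy as np

def harmonize(a, b):
    """Resample two single-band rasters covering the same extent onto the
    finer of their two grids (nearest neighbour), so neither loses
    precision upfront -- a toy version of on-the-fly harmonization."""
    target = max(a.shape[0], b.shape[0])
    def resample(arr):
        factor = target // arr.shape[0]  # assumes exact integer ratio
        return np.repeat(np.repeat(arr, factor, axis=0), factor, axis=1)
    return resample(a), resample(b)

coarse = np.array([[1, 2], [3, 4]])  # e.g. an existing 10 m product
fine = np.arange(16).reshape(4, 4)   # e.g. a new 1 m sensor, kept at full detail
a, b = harmonize(coarse, fine)
print(a.shape, b.shape)  # (4, 4) (4, 4)
```

Because harmonization happens per workflow rather than at ingestion, the finer dataset keeps its native resolution for every other use case.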
Geo Engine provides more than pure data access. First, we implemented a data provider API that allows adding data from either local or remote sources. Second, we provide an extensible processing toolbox that, for instance, contains means for filtering and data combination. Third, Geo Engine supports parametrized and reusable workflows. Once a workflow is defined, it can be reused for different spatial regions or different points in time. In more detail, queries to a workflow define a spatial as well as a temporal extent as a first-class citizen. Workflows are suited for short- and long-running tasks alike and are defined independently of the data size. For realizing this, Geo Engine employs asynchronous and chunk-based computations, e.g., on a tile basis for raster data, to process small as well as huge datasets inside workflows. This, for instance, makes AI workflows possible where the data flows from preprocessing into model training. The resulting machine learning models are afterwards reused as Geo Engine operators. Finally, all this functionality can be accessed via a UI for exploratory, visual data handling or via Python for fine-grained, programmatic control. By providing the described functionality via OGC-compliant interfaces, Geo Engine is a well-suited service for operating as part of any geo-related process.
Datacubes are acknowledged as a cornerstone for analysis-ready data. Following the pioneering work of the rasdaman team in coining Actionable Datacubes, a series of epigones is emerging, with varying degrees of functionality, performance, and standards support, as reviews like [7] and [8] show in detail. However, each of these gives access to only a single service, whereas many services with different offerings are available; hence, accessing these, e.g., for combining data from different services, again is the burden of the user, requiring download, homogenization, and effectively local Big Data processing.
The EarthServer initiative is working towards the vision of a single integrated, homogenized, location-transparent datacube pool. In analogy to the term "server-less", such a federation might be called "data-center-less", as users no longer need to know the concrete data location.
Meanwhile, EarthServer has established the first such datacube federation of Earth data centers [5]. Users are provided with a single, uniform information space acting like a local data holding, thereby establishing full location transparency. Underneath, EarthServer uses Array DBMS technology for its datacube services. We present the federation and show a broad range of real-life distributed data fusion examples.
• All raster data uniformly appear as ISO/OGC/INSPIRE coverages [4], regardless of their storage representation.
• All datacubes uniformly are offered through the OGC/INSPIRE Web Coverage Service (WCS) suite, including the Web Coverage Processing Service (WCPS) datacube analytics language [2]. These services can be extended server-side with arbitrary custom code for bespoke functionality.
• All datacubes can be combined in any data fusion, regardless of the participating datacube’s location; effectively, this establishes transparent distributed data fusion.
• Functionality is available through a wide spectrum of clients, ranging from zero-coding point-and-click clients and WCPS queries to Python and R access. For example, processing results can be returned directly as Python xarray or numpy arrays. Likewise, OpenLayers, Leaflet, NASA WebWorldWind, Cesium, QGIS, and others are supported.
• There is no single point of failure in the federation; it is strictly peer-to-peer.
• The service infrastructure allows a seamless offering of both free and paid data and services, thereby integrating public data like DIASs with commercial offerings.
• Maintenance and continuous update of datacubes is done administrator-less, allowing data centers to join the service without assigning staff resources.
• Fine-grain access control allows data centers to decide what data to offer, and to whom. Not only can complete datacubes be protected this way, but also regions within a datacube down to single-pixel granularity.
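A WCPS interaction can be sketched as an ordinary HTTP request. The endpoint and coverage name below are placeholders, not an actual federation member; the query asks the server to encode one time slice, so only the derived result travels to the client.

```python
from urllib.parse import urlencode

# Hypothetical service endpoint and coverage name, for illustration only.
endpoint = "https://example-datacube-host/rasdaman/ows"
query = (
    'for $c in (AverageTemperature) '
    'return encode($c[ansi("2019-07-01")], "image/png")'
)

# A WCPS query is shipped inside an OGC WCS ProcessCoverages request.
params = {
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": query,
}
url = endpoint + "?" + urlencode(params)
print("request=ProcessCoverages" in url)  # True
```

Because the subsetting and encoding run server-side, the same one-liner works whether the coverage is local or served by another federation member.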
The large, continuously growing EarthServer federation is boosted by rasdaman, the pioneer datacube engine [1] and de-facto gold standard for datacube services. For example, in 2019 the rasdaman query language was adopted in the SQL datacube extension [3]. Implemented in fast C++, it offers high performance and scalability, including distributed query processing. Metadata can be added, maintained, and retrieved freely, in particular INSPIRE metadata. In fact, rasdaman constitutes the acknowledged INSPIRE Coverages Good Practice [6]. This effectively integrates Copernicus and INSPIRE data seamlessly. Its concept of Virtual Coverages allows users to see single datacubes even where the underlying data are heterogeneous; an example is the Sentinel data coming in different UTM zones: datacube queries access a single virtual coverage, where the server internally performs all necessary mapping to the base data and their coordinate systems.
When ingesting data, they can be stored in a number of formats through an ETL layer based on the OGC WCS-T standard, which homogenizes data and metadata, provides defaults, and applies the target tiling strategy. Further tuning parameters include compression, indexing, cache sizing, etc. The resulting OGC-compliant coverages represent analysis-ready space-time EO objects.
As of today, EarthServer offers a critical mass of dozens of Petabytes of multi-dimensional raster data, including 2-D DEMs, 3-D satellite image timeseries, and 4-D atmospheric data. Members include several DIAS European Copernicus archives, leading supercomputing research centers, as well as a series of specialized services offering high-level marine, land use, and atmospheric products. All these data are accessible with zero coding, in particular without the need to know Python, and strictly standards compliant.
Aside from continuously advancing rasdaman technically, aggressive growth of the EarthServer federation is ongoing; a line-up of data centers has expressed interest, and the charter for governance is being finalized.
ACKNOWLEDGEMENT
Research supported by EU EarthServer-1/-2, LandSupport, CopHub.AC, PARSEC, CENTURION.
REFERENCES
[1] P. Baumann: Language Support for Raster Image Manipulation in Databases. Intl. Workshop on Graphics Modeling, Visualization in Science & Technology, Darmstadt/Germany 1992, pp. 236–245
[2] P. Baumann: The OGC Web Coverage Processing Service (WCPS) Standard. Geoinformatica, 14(4)2010, pp. 447–479
[3] ISO: 9075-15:2019 SQL/MDA (Multi-Dimensional Arrays). https://www.iso.org/standard/67382.html
[4] OGC: OGC Spatio-Temporal Coverage / Datacube Standards. http://myogc.org/go/coveragesDWG
[5] n.n.: The EarthServer Datacube Federation. https://earthserver.eu
[6] n.n.: INSPIRE Coverage Good Practice. https://inspire-wcs.eu
[7] P. Baumann, D. Misev, V. Merticariu, B.H. Pham: Array databases: concepts, standards, implementations. Springer Journal Big Data 8, 28 (2021). https://doi.org/10.1186/s40537-020-00399-2
[8] H. Kristen: Comparison of Rasdaman CE & AGDCv2. https://gitlab.inf.unibz.it/SInCohMap/datacubes/-/blob/master/datacube_comparison/datacube_comparison.md
The success of tools like Google Earth Engine demonstrates the power of readily available data. Unlike the `traditional' route of manual scene selection, downloading and pre-processing, such services make all data directly available for the user to work with. In addition, the data archive and computational facilities are conveniently co-located and taken care of. What was revolutionary is this combination of (i) direct access to global, full time series of satellite remote sensing data; (ii) co-location of data and computational resources; and (iii) fast large-scale analysis via an image pyramid.
However, the use of external tools is not always favourable. For example in an educational context, where novice users are introduced to satellite data analysis in a simplified environment. Furthermore, there could be legal constraints on the data or algorithms used and external tools may not support non-standard variables, such as complex radar imagery, local coordinate systems, or regional analysis at full resolution. In these scenarios, a custom data cube could be a welcome alternative.
Like the aforementioned services, data cubes provide users with standardised access to vast amounts of data and are well suited to the spatio-temporal properties of satellite remote sensing data. They offer seamless access to the data in space and time. Time series analysis especially is simplified, as time series were previously typically divided over different data products. However, pre-processing is required to generate a data cube from the individual products downloaded from a space agency such as ESA.
We developed a simple, yet effective, data cube generation script for Sentinel-2 imagery, based on Python and the Zarr storage format. Generating the data cube is a three-step process. First, the imagery of the selected granule (tile) and orbit is downloaded in bulk. Second, atmospheric corrections are applied via `sen2cor' where necessary. Third, the imagery is reprojected to the desired coordinate reference system and the cube is filled or updated.
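The fill-or-update step can be sketched as follows. For clarity the snippet uses an in-memory dict keyed by acquisition date where the actual script writes to a chunked Zarr store, and the band names and array shapes are illustrative.

```python
import numpy as np

def update_cube(cube, date, bands):
    """Insert a (reprojected) acquisition -- a dict of band -> 2-D array --
    into the cube at `date`, replacing any previous slice for that date.
    A real implementation writes these slices as chunks of a Zarr store."""
    cube[date] = {name: np.asarray(arr) for name, arr in bands.items()}
    return cube

cube = {}
update_cube(cube, "2021-06-01", {"B04": np.zeros((4, 4)), "B08": np.ones((4, 4))})
update_cube(cube, "2021-06-11", {"B04": np.ones((4, 4)), "B08": np.ones((4, 4))})
print(sorted(cube))                # ['2021-06-01', '2021-06-11']
print(sorted(cube["2021-06-01"]))  # ['B04', 'B08']
```

Re-running the script for a new acquisition only touches the chunks of the new time slice, which keeps incremental updates cheap.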
We focused on usability under standard office conditions in educational or development settings, rather than on factors relevant to production systems such as efficient storage or bandwidth cost. These data cubes should fit into storage structures typically found in office environments, and should not require complex (cloud) computing infrastructure, but may still be published on any simple web server. We demonstrate our concept on Sentinel-2 data over the Netherlands. Our data cube covers 300×340 km in 400 time steps with all bands at full (10 m) resolution and occupies around 4 TB of storage per orbit for all acquisitions until the end of 2021. The data cube is publicly available (https://geotiles.nl), but may also be used offline as it fits on a larger external hard drive.
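The quoted size can be sanity-checked with back-of-envelope arithmetic. The 13-band count and 2 bytes per sample below are assumptions, and the figure is for uncompressed data; the gap to the quoted 4 TB suggests the compression applied in the actual cube.

```python
# Cube extent from the abstract: 300×340 km at 10 m, 400 time steps.
width_px  = 300_000 // 10            # 30000 pixels
height_px = 340_000 // 10            # 34000 pixels
bands, steps, bytes_per = 13, 400, 2  # assumed band count and sample size

raw_bytes = width_px * height_px * bands * steps * bytes_per
raw_tb = raw_bytes / 1e12
print(f"uncompressed: {raw_tb:.1f} TB")            # uncompressed: 10.6 TB
print(f"implied compression vs 4 TB: {raw_tb/4:.1f}x")
```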
The authors would like to acknowledge the support of the Netherlands Centre for Geodesy and Geoinformatics (NCG) via a NCG Talent Program grant.
A semantic Earth observation (EO) data cube refers to a spatio-temporal EO data cube, where for each observation at least one nominal (i.e. categorical) interpretation is available and can be queried in the same instance (Augustin et al., 2019). Until now, Advanced Very High Resolution Radiometer (AVHRR) imagery and derived information products have only been accessible via file-based access, requiring a significant time investment and expert knowledge to find relevant data for analysis. The 2-year project, SemantiX, has implemented a semantic EO data cube using AVHRR imagery, Copernicus Sentinel-3 imagery and derived information, complementing and expanding the heritage AVHRR time-series. This project is a collaboration between academia and the private sector with the Austrian companies Spatial Services and SPOTTERON. The geographic focus of this prototypical implementation is on the European Alpine region (ca. COSMO-1 extent), the AVHRR time-series spans ~40 years from both NOAA and Metop satellites, and all imagery is calibrated to top-of-atmosphere reflectance. Curated analysis results (e.g. maps, time-series curves, single values) based on this implementation are integrated into the existing citizen science application, Naturkalender, by the Viennese company SPOTTERON, opening insights to an already engaged and interested public audience. To the best of our knowledge, this work has established the first EO data cube based on semantically-enriched AVHRR and Sentinel-3 imagery and is able to share these archives and derived information beyond the scientific domain (https://www.semantixcube.net).
Two categories of information are derived from AVHRR and Sentinel-3 imagery and provided in the semantic EO data cube implementation: three essential climate variables (ECVs) and a sub-symbolic semantic enrichment. ECVs contribute critically to characterising the state, interactions and development of Earth’s climate system. Remote sensing scientists at the University of Bern derived vegetation dynamics using NDVI, snow cover extent and lake surface water temperature, and integrated them into the data cube, resulting in three climate-relevant time-series. In addition to the ECVs, automated semantic enrichment has been applied to all imagery, resulting in generic, pixel-based spectral categories. These multi-spectral “colors” (i.e. stable, sensor-independent regions of a multi-spectral feature space) are not land cover classes, but can be considered a property of an object or land cover type. Paired with the temporal analysis that data cubes make possible, this generic semantic enrichment can be used in a convergence-of-evidence approach as the basis for building a diversity of land cover classes, because the categories are independent of any defined ontology, application or sensor.
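The vegetation-dynamics ECV mentioned above relies on the standard NDVI formulation; a minimal sketch, assuming calibrated red and near-infrared reflectances (the channel values below are illustrative, not from the project):

```python
def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and NIR reflectance."""
    return (nir - red) / (nir + red)

# Dense vegetation reflects strongly in the NIR and weakly in the red,
# so NDVI approaches 1; bare soil and water sit near or below 0.
print(ndvi(red=0.05, nir=0.45))  # close to 0.8
```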
Multiple technologies and research developments were leveraged in order to build a semantic EO data cube using AVHRR and Sentinel-3 imagery. The semantic EO data cube implementation is based on a dockerised architecture, uses the Open Data Cube software, and serves as a single access point with several interfaces to facilitate various services. An existing Web-based front-end developed in a previous project, Sen2Cube.at, provides a GUI-based interface for semantic querying and analysis geared towards non-expert EO users. Jupyter notebook instances provide an interactive programmatic interface for analysis. The company SPOTTERON utilises their own citizen science application framework for showcasing climate-relevant results over ~40 years to give historical context to the observations app users are recording, particularly related to vegetation dynamics and snow cover extent.
This contribution results from the 2-year research project, SemantiX, which aims to help close the gap between scientific inquiry into one of the longest imagery time-series of Europe and public understanding of the information it contains about our climate.
EUMETSAT is charged to support users in climate services, academia, and elsewhere, including the provision of information and training, and to operate and manage its Data Centre (the historical archive of EUMETSAT’s satellite data).
EUMETSAT and the network of Satellite Application Facilities (SAFs) provide time series of satellite-derived geophysical variables relevant for atmospheric monitoring. However, currently the data formats and dissemination of these data are not homogeneous, which probably presents a barrier for potential users.
A prototype Data Cube containing time series of several geophysical variables in a homogeneous format could help reduce such barriers, and allow exploring the interest in analysis-ready data cubes among EUMETSAT users. Tools to generate analysis-ready data are also at the heart of the user needs identified within the European Copernicus programme.
We present a prototype, with a live demonstration of the generation of a Data Cube, that addresses satellite datasets for air quality and atmospheric composition applications, with 15 products from 4 missions and 5 sensors. Input datasets are served by different providers, and the demonstration focuses on processing the longest possible data series with daily to monthly resolution.
The solution for the generation of the Data Cube enables users to select only the geographic region of interest and bands they are interested in, and retrieves the products on-the-fly from the providers to generate the Cube as a single NetCDF4 file, conformant to a common data model. The principle is inspired by the Earth Observation “Data Cube on Demand” (Giuliani et al. 2019). The solution is intended to tackle the rapid evolution of the datasets composing the Cube, all of which are updated frequently (e.g. daily to monthly) by providers. Moreover, a stepwise growth of data volume is expected with the release of new datasets and products.
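The on-demand principle described above can be sketched as a catalogue-filtering step: only products matching the user's region and variable selection are fetched from the providers before being merged into the single output file. The `Product` class, product names and bounding boxes below are hypothetical stand-ins for the provider interfaces.

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    bbox: tuple  # (lon_min, lat_min, lon_max, lat_max)

def intersects(a, b):
    """True if two (lon_min, lat_min, lon_max, lat_max) boxes overlap."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

# Hypothetical provider catalogue.
catalogue = [
    Product("NO2_daily", (-10, 35, 30, 60)),
    Product("O3_monthly", (100, -10, 140, 20)),
]

roi = (0, 40, 20, 55)      # user's region of interest (here: central Europe)
wanted = {"NO2_daily"}     # variables/bands the user selected

# Only matching products would be retrieved and written to the NetCDF4 cube.
selection = [p for p in catalogue if p.name in wanted and intersects(p.bbox, roi)]
print([p.name for p in selection])  # ['NO2_daily']
```

Because the cube is rebuilt from the providers on each request, frequently updated datasets are always current without maintaining a mirrored archive.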
All software used to generate the Data Cube on demand is part of the EUMETSAT Data Tailor, and is delivered both with the Data Tailor for deployment on users’ personal platforms and as a demonstrator on cloud-based solutions and virtual machines.
The Earth is undergoing changes that humans have never seen before. At the same time, open-source software is making data availability and analysis accessible to an increasingly large audience, including researchers and citizen scientists. The Open Data Cube (ODC) is an open-source software platform designed to be highly adaptable to user needs in a variety of scenarios. This paper investigates the use of ODC as a tool for managing data and analyzing large-scale phenomena at multiple resolutions, ranging from space-based to microscopic. We present an open-source architecture for multiple-scale data analysis and examine the use case of investigating Harmful Algal Blooms (HABs). Specifically, we present an architecture designed to handle data at different scales: Earth Observation data from satellites (Landsat 8 and Sentinel 2), high resolution data from Unmanned Aerial Vehicle (UAS) systems, Internet of Things (IoT) data from ground-based environmental sensors and water-deployable buoys, as well as data from buoy-mounted high-throughput microscopy systems designed to image and identify individual algal cells.
Multi-scale data has been increasingly discussed and utilized for calibrating remote sensor data, performing data fusion for increased spatial and temporal resolution, and enabling automated data collection, processing and interpretation. The Open Data Cube community has been seeking such resources, and the work presented seeks to establish an open-source pipeline for processing data at multiple scales.
The efforts presented in this work encompass data collected from a range of sensors deployed by the authors. For ground-based measurements, IoT sensor systems crafted in-house include land-based measurements of barometric pressure, temperature, and humidity, as well as water-based buoys that integrate a range of sensors measuring incident solar activity, water temperature, GPS location, turbidity, and chlorophyll fluorescence, plus an automated onboard microfluidic microscope for counting and classifying plankton. Data is also collected by RGB-camera-equipped UASs, which are deployed on an as-needed basis. This data can be fused with available satellite data from platforms such as Landsat 8 and Sentinel 2, as well as from higher resolution satellite providers.
We also present ODC based software tools to enable the indexing of imagery and correlation of geospatially tagged data from both satellite and UAS sources. Specifically, we demonstrate a containerized server for storing geospatially tagged environmental data that can be queried by the ODC, as well as open source reference designs for hardware which collect ground based environmental parameters.
Coastal areas are increasingly becoming more vulnerable due to economic overexploitation and pollution. The Italian Space Agency (ASI) supports the research and development of technologies aimed at the use of multi-mission EO data, in particular of the national COSMO-SkyMed Synthetic Aperture Radar and PRISMA hyperspectral missions, as well as Copernicus Sentinels, through the development of algorithms and processing methodologies in order to generate products and services for coastal risk management.
In this context, ASI has promoted the development of the thematic platform costeLAB as a tool dedicated to monitoring, management and study of coastal areas (sea and land). This platform was developed in the frame of the “Progetto Premiale Rischi Naturali Indotti dalle Attività Umana - COSTE", n. 2017-I-E.0 (https://www.costelab.it/en/homepage-en/), funded by the Italian Ministry of University and Research (MUR), coordinated by ASI and developed by e-GEOS and Planetek Italia with the participation of the National Research Council of Italy (CNR), Meteorological Environmental Earth Observation (MEEO) and Geophysical Applications Processing (G.A.P.) s.r.l. The aim of the project was to define, develop and run, in a pre-operational context, an integrated system that exploits Earth Observation data to support the management of coastal area environmental processes and risks. The platform is addressed to institutional, scientific and industrial users and allows the study, experimentation and demonstration of new downstream pre-operational services for the monitoring of the coastal area environment.
To address the main scope of the ESA Living Planet Session “C5.05 Earth System & EO Data Cube Services and Tools for Scientific Exploitation”, in this paper we focus on the Researcher User, and how the costeLAB platform and its collaborative virtual environment allow scientific exploitation of EO data for coastal studies and downstream applications, and the scientific output to be maximized on real use cases.
The costeLAB platform provides a common entry point for several web-based EO data processing services in the field of coastal zone monitoring and emergency management, to generate and visualize products by means of consolidated algorithms that users can utilize for their operational tasks. In addition, costeLAB embeds a “collaborative virtual laboratory” (the “Virtual Lab”) for researchers and developers to share, test and demonstrate innovative algorithms in order to build new processing chains.
With regard to the “entry-point” function, the platform relies on an architecture that is based, as much as possible, on free-of-charge and open source solutions and standard protocols (Liferay portal CE 6.x, APEREO CAS 5.x, JupyterLab 1.x, YAWL, Geonetwork /Geoserver, Python, Java Spring Boot; Pellegrino et al., 2021), integrates a large set of processors and algorithms, and allows sourcing of multi-mission and multi-sensor EO data (ESA Sentinels, ASI’s COSMO-SkyMed and, in future, PRISMA) from image catalogues. The rationale is to “keep applications close to the data”, i.e. allowing users to access huge amounts of EO data while relieving them of the demanding tasks of downloading and processing big data on local computers. Users are therefore able to generate reliable products by means of validated algorithms with reduced processing times. An exhaustive overview of the costeLAB consolidated products is provided by Candela et al. (2021) and at the project website (https://www.costelab.it/en/products/). Acting as a single interface for a wide spectrum of data sources and products, costeLAB enables the integration of different processing routines and computing technologies, and aims to maximize the cost-benefit ratio through scalable cloud systems.
The user can interact with the platform in several ways. Through the product request interface and under several operational scenarios (see Pellegrino et al., 2021, for further details), the expert user may access the list of processors available in costeLAB and select the desired product for generation, the operational scenario and the input parameters. Upon completion of the generation process, the product is added to the catalogue and is made available to the users for reference and analysis. Once the product is in the catalogue, it can be searched and displayed at any time by the authorized user. This means that any product generated in costeLAB can also be accessed by “Researcher Users” with skills in data processing, algorithm and product development, who exploit the platform to test their own algorithms and codes.
This is indeed one of the main functionalities of the other facility provided by the platform for scientific exploitation, i.e. the “Virtual Lab”. This virtual environment runs on Docker containers, provides a web interface based on the Jupyter Notebook, includes an IPython development environment, and allows the use of Python, R and Fortran as programming languages. Therein, researchers can access satellite data, exploit computing resources, run predefined image processing routines, and share or develop their own code (using open source packages such as GDAL or ESA SNAP), e.g. to search, download and process Sentinel-2 data. In practice, using specific custom-developed notebooks, researchers can access the Sentinel archive of the DHuS directly from the Virtual Lab. Users can therefore launch operations, scripts and routines in the cloud, maintaining the concept of proximity of data to the processors, in the same way as it is provided for consolidated products.
To share results and ideas among different actors of the scientific community according to their role on the platform, full integration with the central authentication system (CAS) is achieved through the standard OpenID protocol. An example of what a researcher user can do with SaaS (Software as a Service) tools and resources made available in the Virtual Lab is discussed with regard to the use of Sentinel-2 images to generate a map of the morphological evolution of terrestrial coastal ecosystems in different years. In particular, Sentinel-2 near-infrared red-green-blue (NIR RGB) images collected over the Venice Lagoon were processed into the matching Water Adjusted Vegetation Index (WAVI) map in a Jupyter Notebook of the costeLAB Virtual Lab, wherein SEN2COR was used to obtain the Sentinel-2 L2A product (Villa et al., 2021). This experience demonstrates how codes developed by researchers can be run in the platform to generate new products and, in future, be transformed into consolidated processors.
During the costeLAB project, a wide portfolio of research activities was carried out (https://www.costelab.it/en/the-scientific-research/ and related references). These activities focused on the various components of the marine-coastal environment (land-sea interface), as follows:
• coastal erosion vulnerability (Bresciani et al.)
• beach & dunes volume changes (Fornaro et al.)
• extension and characterization of riverine and coastal plumes (Falcini et al.)
• morphological evolution of terrestrial coastal ecosystems (Villa et al.)
• algorithms and products for coastal area dynamics (Braga et al.)
• estimation and characterization of beaching in oil spills (Santini et al.)
• algorithms and products for land use/land cover changes (Pasquariello)
• algorithms and products for extracting weather marine forcing from EO data in near coastal water (Zecchetto)
• use of EO data for testing numerical models of sea state forecasts (De Carolis et al.).
Examples will be shown during the talk in order to demonstrate the breadth of novel approaches that were developed.
References
Candela, L., Coletta, A., Daraio, M.G., Guarini, R., Lopinto, E., Tapete, D., Palandri, M., Pellegrino, D., Zavagli, M., Amodio, A., Ceriola, A., Vecoli, A., Mantovani, S., Nutricato, R., Giardino, G. (2021) The Italian Thematic Platform costeLAB: from Earth Observation Big Data to Products in support to Coastal Applications and Downstream. Proceedings of the 2021 conference on Big Data from Space, Soille, P., Loekken, S. and Albani, S., eds., EUR 30697 EN, Publications Office of the European Union, Luxembourg, 2021, ISBN 978-92-76-37661-3, doi:10.2760/125905, JRC125131.
Pellegrino, D., Palandri, M., Zavagli, M., Avolio, C., Di Donna, M., Falco, S., Candela, L., Daraio, M.G., Tapete, D., Lopinto, E., Coletta, A., Amodio, A. (2021) costeLAB: a cloud platform for monitoring activities and elements of coastal zones using satellite data. Proc. SPIE 11863, Earth Resources and Environmental Remote Sensing/GIS Applications XII, 118630Z (12 September 2021); https://doi.org/10.1117/12.2599612
Villa, P., Giardino, C., Mantovani, S., Tapete, D., Vecoli, A., and Braga, F. (2021) Mapping coastal and wetland vegetation communities using multi-temporal Sentinel-2 data. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLIII-B3-2021, 639–644, https://doi.org/10.5194/isprs-archives-XLIII-B3-2021-639-2021
Danube Data Cube (DDC) is a regional data exploitation platform built on and follows the logic of the Euro Data Cube (EDC) infrastructure, a computational environment reflecting the Digital Twin Earth concept of the European Space Agency to support sustainable development.
DDC is a cloud-based platform with data and analysis tools focusing on the Danube Basin. As a regional platform service, it demonstrates the data cube technology's data storage and analysis capabilities, maximizing the benefit of the synergy of satellite and ancillary data with dedicated analysis tools.
The DDC concept includes extensive Machine Learning capabilities, including analytical tasks and decision support algorithms. One of the key themes of the platform is water management, from regional strategy and public information to field-level irrigation management.
Currently, DDC works on a regional and a local (field-level) showcase. Both are related to water management.
Water scarcity is an increasing problem globally, yet the efficiency of irrigation water usage is around 35%. With increasing uncertainties in weather conditions, irrigation strategies must be flexible. Companies must be prepared for many scenarios while increasing resource efficiency to maintain production and contribute to food security and sustainable development.
The regional showcase of DDC is designed to create a shared understanding and facilitate cooperation between authorities and research institutions about the region's hydrological and water management issues.
For research purposes, analysis-ready data is provided in datacube format for the whole region. The datacubes contain satellite data together with meteorological and soil data, with the possibility to enhance the content even further with proprietary user data. A library of algorithms for DDC regional analysis is already available on the platform.
The platform supports JupyterLab, which means that proprietary algorithms can also be implemented and tested easily in the cloud.
The local showcase aims to improve irrigation efficiency in agricultural fields significantly.
DDC offers a sandbox with a graphical user interface, where users can try out different irrigation strategies under specific weather conditions. Irrigation strategies can be tested on historical and simulated data and even on real-time forecasts. Given the increasing unpredictability of weather conditions and climate in general, such a tool significantly impacts water usage efficiency.
While this online tool is already making an impact, the offerings of DDC do not stop at manual experimentation. A training environment has been created for AI agents to find the best irrigation strategies and even carry out the appropriate strategy (making optimal decisions under uncertain conditions) in real time.
For research purposes, DDC provides:
• direct access to datacube service, which contains satellite data with meteorological and soil data in an analysis-ready format
• access to the irrigation sandbox for interactive experimentation
• a training environment for AI agents
• already trained AI agents
Title: Application of PRISMA satellite hyperspectral imagery for man-made materials classification in urban areas: a case study in Tuscany Region (Italy)
Keywords: PRISMA satellite imagery, spectral data acquisition, fieldwork, radiometric corrections, urban artificial areas classification
The Italian Space Agency (ASI) launched the new PRISMA mission (Hyperspectral Precursor of the Application Mission) in 2019, which integrates the hyperspectral sensor with an additional sensor capable of acquiring not only panchromatic images, but also VNIR (Visible and Near-InfraRed) and SWIR (Short-Wave InfraRed) data [1]. A possible application of such data is urban area classification, using spectral data from fieldwork acquisitions to train the algorithms and to validate the results.
Considering the novelty of this mission and the data collection carried out by the PRISMA sensors, this research focused on the comparison between spectral data taken with a portable spectroradiometer and that obtained from PRISMA satellite reflectance imagery. The main purpose of this analysis is to classify the hyperspectral imagery in order to evaluate the reliability of spectral data from the PRISMA mission for such a purpose.
The pilot area considered for the collection of hyperspectral data is mainly represented by the city of Prato and surrounding areas (Montemurlo, PT; Calenzano, FI; Campi Bisenzio, FI).
Materials chosen to be part of the samples list are common man-made objects used for roof covering and for paving public and private buildings or properties. The materials that were studied during the spectral data collection missions were solar cells, bitumen, asphalt (parking lots and highway), plastic (air-supported structures), metal roof covering, wood paving, clay roof tiles, clay paving and concrete (paving and roof tiles). The test site locations were defined considering various elements: areas with large covering, owners’ availability, security conditions and ease of access, material status and quality, presence of different materials within the same site when possible.
During the data collection, spectral signatures of several man-made materials in different locations were sampled. The collection was acquired using a portable spectroradiometer, namely an ASD FieldSpec® 3, and then post-processed with the ViewSpec® Pro software.
At each site, a white reference sample was measured in order to compute reflectance as the ratio of the raw DN data collected from the man-made materials to the reference measurement.
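The white-reference ratio described above is a one-line computation per wavelength; a minimal sketch with illustrative DN values (not actual field measurements):

```python
def reflectance(dn_target, dn_white):
    """Per-wavelength reflectance: target DN over white-reference DN."""
    return [t / w for t, w in zip(dn_target, dn_white)]

# Two illustrative wavelengths: raw DNs for a material and for the white panel.
print(reflectance(dn_target=[120, 300], dn_white=[400, 500]))  # [0.3, 0.6]
```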
In order to compare this fieldwork data with that from the PRISMA mission, Erdas® Imagine 2020, L3Harris Technologies ENVI®, Google® Earth Engine and Esri® ArcMap were used to process the satellite imagery in terms of radiometric and geometric corrections. The Empirical Line Correction method was used to calibrate the PRISMA reflectance imagery while, in parallel, a pure translation shift was applied to the panchromatic, VNIR and SWIR images in order to obtain a satisfactory georeferencing. Then, two pansharpened VNIR and SWIR images, with a spatial resolution of 5 m, were produced by fusion with the panchromatic image.
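The Empirical Line Correction fits, per band, a linear gain and offset between at-sensor values and field-measured reflectance over calibration targets, and then applies that line to the whole band. A minimal sketch with two hypothetical bright/dark targets (the numbers are illustrative, not from this study):

```python
def fit_line(xs, ys):
    """Least-squares gain and offset for y = gain * x + offset."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    gain = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)
    return gain, my - gain * mx

# Calibration pairs: (at-sensor value, field reflectance) for dark and bright targets.
gain, offset = fit_line([100.0, 400.0], [0.10, 0.55])

# Apply the fitted line to any pixel value in the band.
print(round(gain * 250.0 + offset, 3))  # 0.325
```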
The available PRISMA image was classified, and the materials of the urban area were mapped, making it possible to differentiate the roofs and the paving characterised by asphalt, concrete, clay tiles, bitumen, plastic, metal, solar cells and wood. The accuracy of the classification was also assessed through ground truth activities and photointerpretation of the imagery available on Google® Earth Pro.
Reference
[1] https://www.asi.it/en/earth-science/prisma/
DLR’s Earth Observation Center (EOC) is operating a burnt area monitoring service for Europe. It is based on mid-resolution Sentinel-3 OLCI (Ocean and Land Color Instrument) satellite imagery, research of methodologies and developments of processing chains (Nolde et al. 2020) and provides burnt area information twice a day in near-real time.
The service is fully automated and targeted at supporting both rapid mapping activities and timely post-fire damage assessment. It is designed incrementally, in a way that generated results are refined and optimized as soon as new satellite data becomes available. Besides the burn perimeter and detection date, the output data also contains detailed information regarding the burn severity of each detected burnt area.
While the service is primarily intended for continental-scale monitoring of wildfire occurrence, the accumulated results also allow the analysis of multi-year development trends in the mentioned parameters.
This study firstly demonstrates the capabilities of the wildfire monitoring service, and secondly analyses trends in fire extent, seasonality, and burn severity for Europe over recent years. The results are compared with findings derived for study areas outside Europe, namely California (USA) and New South Wales (Australia).
The study focuses on fire severity, since this information is absent from most common large-scale burnt area datasets. Yet fire severity is a critical aspect of fire regimes, determining fire impacts on ecosystem attributes and associated post-fire recovery.
In addition to the analysis of large-scale wildfire activity, the results of the burnt area monitoring service can be utilized to monitor the spatio-temporal evolution of large lava flow events in near-real time, as for example the 2018 Lower East Rift Zone eruption at Kīlauea Volcano, Hawaiʻi or the 2021 eruption on La Palma.
Reference:
Nolde, M., Plank, S., & Riedlinger, T. (2020). An Adaptive and Extensible System for Satellite-Based, Large Scale Burnt Area Monitoring in Near-Real Time. Remote Sensing, 12(13), 2162.
In the spring of 2014, armed conflict broke out in the Luhansk and Donetsk provinces of Ukraine. The use of ballistic and rocket artillery in this conflict has inflicted severe damage on wide areas of the provinces, with far-reaching impacts on migration, public health, the economy, agriculture, and the environment of the region. The accurate mapping of artillery craters in the conflict is a crucial step in addressing these impacts, as it can provide estimations of potential unexploded-ordnance hot-spots on a conflict-wide scale. In turn, these estimations provide a valuable tool for post-conflict policies of restoration of the natural and human landscapes. This includes the delineation of safe and unsafe zones, the production of information for returning civilian populations, and the establishment of policies of de-mining and aid distribution. The problem remains, however: how accurately can the effects of artillery - specifically its connection to the explosive remnants of war - be detected and mapped?
Utilizing very high resolution (VHR) multispectral satellite imagery combined with artificial intelligence, an automated artillery and rocket crater detection methodology was produced. The UNet semantic segmentation convolutional neural network was chosen as the classifier, as it has demonstrated a robust ability to detect objects in medical applications and, more recently, in remote sensing tasks such as celestial crater detection and the mapping of terrestrial objects such as trees. In this project, we assessed the UNet CNN’s ability to detect contemporary artillery craters from VHR multispectral imagery. The UNet CNN is trained on rocket and artillery craters from the 2014 conflict. Success in detecting artillery and rocket craters was assessed using geographically independent model application and a stratified random sampling technique to obtain the binary machine learning metrics of sensitivity, precision, and F1 score. Size characteristics were also assessed to ascertain changes in CNN classification proficiency and sensitivity for different crater sizes from varying weapon sources. The trained CNN model developed for this dissertation was able to find 89% of craters when compared with a human marker, indicating its initial proficiency at the task of crater detection. Crater size was found to have a positive correlation with all performance metrics, indicating that the model improved at crater detection as crater size increased.
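The reported scores follow the standard binary-classification definitions; the sketch below uses illustrative true-positive, false-positive and false-negative counts chosen so that sensitivity matches the reported 89% detection rate (the false-positive count is an assumption, not a figure from the study):

```python
def metrics(tp, fp, fn):
    """Sensitivity (recall), precision, and F1 from confusion counts."""
    sensitivity = tp / (tp + fn)   # share of true craters the model found
    precision = tp / (tp + fp)     # share of detections that are real craters
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, precision, f1

sens, prec, f1 = metrics(tp=89, fp=20, fn=11)
print(round(sens, 2), round(prec, 2), round(f1, 2))  # 0.89 0.82 0.85
```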
On a global basis, disasters strike regularly, in both developed and developing countries. Occasionally however, disasters take on catastrophic proportions, either because of particularly vulnerable populations, a dramatic natural event, or exceptionally unfortunate circumstances. Hurricane Katrina, the Haiti 2010 Earthquake, Typhoon Haiyan or the Great East Japan tsunami are examples of catastrophes that hold a special place in our collective memories as mega-disasters from which populations and governments take years to recover and rebuild. Since 2014, the Committee on Earth Observation Satellites (CEOS) has been working on means to increase the contribution of satellite data to recovery from such major events.
These efforts led to the creation of an ad hoc team on the use of satellites for recovery, co-chaired by the French Space Agency CNES and the World Bank/GFDRR, which has published an Advocacy Paper on the topic, as well as a four-year pilot following Hurricane Matthew which struck Haiti in 2016, causing catastrophic and long-lasting damage.
Following the successful demonstration of the technical merits of the Recovery Observatory (RO) concept, CEOS, together with the World Bank, UNDP, and the European Union, created an RO Demonstrator Team and approved in 2020 a three-year demonstrator which aims to create a series of 3 to 6 ROs after major events between now and late 2023. The Demonstrator works on a best-efforts basis, with partners (satellite agencies, value-adding companies, universities, government agencies) providing data, products, and services on a no-exchange-of-funds basis.
A first small-scale test case was implemented in late 2020 after the Beirut Explosion of August 2020. A first full-scale RO activation was undertaken after Hurricanes Eta and Iota of October and November 2020, covering areas in Honduras, Guatemala, Nicaragua and El Salvador, and a second RO activation took place in the days following the August 14th 2021 earthquake in Haiti.
As the first activations wrap up and new activations are considered, some success is already evident. Satellite-based products have been used to support efforts such as Post-Disaster Needs Assessment (PDNA) reporting, providing faster and more accurate overall impact assessments for key sectors such as housing, infrastructure, agriculture and the environment. A few conclusions are already evident:
• Satellite data is a useful resource for many stakeholders including those in charge of rebuilding after major disasters.
• Satellites offer unmatched range and reach, often speeding up and complementing field surveys over large areas.
• Satellites can provide regular monitoring of reconstruction and rehabilitation progress that is less time-consuming than field surveys.
• The use of satellite data in post-disaster assessments is not yet well understood in the reconstruction and recovery world.
• More capacity development is required for local and international users to better understand which products could be useful.
The RO Demonstrator will continue to generate Recovery Observatories on a best effort basis between now and late 2023, before reporting back to CEOS and other partners on recommendations for increasing the use of satellite data for recovery.
Mt. Etna is one of the most active volcanoes on Earth and has erupted virtually every year over the past few decades. It has been designated a Permanent Supersite since 2014, and its status has been renewed every biennium by the Scientific Advisory Committee of the GEO Geohazard Supersites and Natural Laboratories (GSNL) initiative and the Committee on Earth Observation Satellites (CEOS) Disasters Working Group. The Mt. Etna Supersite is managed by the INGV Catania Section - Etna Observatory. Its implementation was largely based on the results of the EC FP7 MED-SUV (MEDiterranean SUpersite Volcanoes) project, which supported the implementation of the Supersite concept on both Mt. Etna and the Vesuvius / Campi Flegrei volcanoes. The MED-SUV Data Portal, the main operational result of the project, shares a large catalogue of data and products. The Portal is currently being migrated to a new e-infrastructure in the framework of the European Plate Observing System European Research Infrastructure Consortium (EPOS-ERIC) in order to guarantee compliance across different domains within the European scientific community. This contribution presents details of the new infrastructure, in particular how it will approach and manage the achievement of the FAIR data principles for access to the Supersite data and products.
The recent volcanic activity of Mt. Etna offered the opportunity to test, improve and implement new data and products associated with the Mt. Etna Supersite activity. Indeed, since the end of 2020, Mt. Etna has been in a new period of activity with frequent effusive and explosive episodes at the summit craters. At the time of writing, more than fifty episodes have been counted in less than one year, each characterized by strong strombolian explosions evolving into lava fountains, accompanied by lava flow emission.
The main outcomes related to this recent volcanic activity concern (i) the upgrade of a WEB-GIS service for the interactive visualization of the mean LOS (Line Of Sight) velocity maps and the related time series, obtained by processing Sentinel-1A/1B SAR data, and (ii) the exploitation of Pléiades imagery to monitor the recent volcanic activity.
The implemented WEB-GIS service is built on a client-server architecture and was designed to offer quick, simplified access to ground deformation analysis without requiring a desktop GIS application.
The WEB-GIS interface also supports base maps and auxiliary layers (for example, the Mt. Etna geological structures in vector format) that can be toggled on and off.
The interface provides basic options for querying and interactively displaying the time series; a tool for analyzing and comparing time series is also available.
The processing of Pléiades imagery, acquired in stereo or tri-stereo configuration, allowed for the calculation of 1-m spatial resolution Digital Surface Models (DSMs) of the volcano. Moreover, by differencing successive DSMs obtained from the Pléiades data, the emplaced lava flow fields were mapped and the volume of the lava flow fields formed during the 2021 eruptive activity was estimated, together with changes in the morphology of the summit craters, including the growth of the South East Crater, which became the new top of the volcano.
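The volume estimation by DSM differencing can be illustrated with a minimal sketch; the noise floor and the assumption of co-registered, gap-free DSM grids are simplifications, not details given in the abstract:

```python
import numpy as np

def lava_volume_from_dsms(dsm_before, dsm_after, pixel_size_m=1.0, noise_floor_m=0.5):
    """Estimate emplaced lava volume by differencing two co-registered DSMs.

    dsm_before / dsm_after: 2-D elevation arrays (m) on the same grid.
    pixel_size_m: ground sampling distance (1 m for the Pléiades DSMs).
    noise_floor_m: elevation changes below this value are treated as DSM
    noise (an assumed placeholder, not the study's actual threshold).
    """
    dh = dsm_after - dsm_before                   # elevation change per pixel (m)
    gain = np.where(dh > noise_floor_m, dh, 0.0)  # keep only significant thickening
    volume_m3 = gain.sum() * pixel_size_m ** 2    # thickness times pixel area
    flow_mask = gain > 0                          # mapped lava flow field
    return volume_m3, flow_mask
```

The same positive-change mask doubles as the lava flow field outline, while negative changes (e.g. crater collapse) would be analysed separately.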
Both SAR- and optical-based products were integrated into the multidisciplinary observing system managed by INGV to monitor this intense period of volcanic activity and to support Disaster Risk Managers (DRM), e.g. Civil Protection and local authorities, in their activities.
Figure Caption: (A) (upper part) Single Time Series Visualization; (lower part) Snapshot of Comparing Time Series tool. (B) (upper part) Triplet of Pléiades data processed to obtain the 2021 Mt. Etna DSM (lower part).
Between 12 and 15 July 2021, heavy rain brought by low-pressure system Bernd caused many catastrophic floods in Western Europe. The affected countries included Germany, the United Kingdom, Austria, Belgium, Croatia, Italy, Luxembourg, the Netherlands, and Switzerland. The total reinsurance loss could reach $3 billion, and the economic damage was expected to be around $6 billion. More than 240 people died, 196 of them in Germany. On 16 July, EFTAS was contracted by the state of Nordrhein-Westfalen (NRW) to detect the flooded areas using remote sensing.
In this presentation, we propose an in-time flood monitoring system based on SAR data for future flood disaster management, and present our flood detection work in NRW as a case study. SAR plays the major role in our monitoring system for two reasons. First, as an active imaging sensor, SAR is independent of illumination and only slightly affected by weather. Second, the techniques for flood detection from SAR have long been refined and standardized at national and international institutes. The whole process can be automated and accelerated by optimizing algorithms, enhancing software and hardware, and simplifying manual operation. However, the temporal gap between event occurrence and image acquisition varies opportunistically, from minutes to more than one day for Sentinel-1. This uncertainty hampers use in rescue operations or in a monitoring service, e.g., the Copernicus Emergency Management Service and the DLR Flood Service.
Based on our experience and knowledge, we believe a (near) real-time satellite-based flood monitoring service does not yet exist for civilian use. True real-time monitoring would only be possible with a geosynchronous SAR standing by 24/7, like a reconnaissance satellite. Our aim is to launch an "in-time" monitoring service in cooperation with the Capella Space constellation. Capella can deliver VHR SAR imagery over a venue on average 6 hours after an order request is placed. This means an average of 4 images in 24 hours is available for flood monitoring, which is so far unprecedented. The delivery time will be shortened further as processing capabilities improve and the satellite constellation expands. In the near future, this advantage will be exclusive to our service on the international market. The key is to integrate and automate the image supply and the flood detection procedure into an efficient, customer-oriented service. Last but not least, we also propose strategies to tackle the difficulty of flood detection in built-up and vegetated areas.
The need to optimise productivity in agricultural and forestry resources has led to a progressive increase in the development, evolution and uptake of EO-based products by the agriculture and forestry sectors. These efforts are key to achieving the objectives set out by several of the UN Sustainable Development Goals (SDGs): SDG2 - Zero Hunger; SDG12 - Responsible Consumption and Production; SDG6 - Clean Water and Sanitation; SDG13 - Climate Action; and SDG15 - Life on Land.
In NextLand, an attempt has been made to develop a wide set of operational midstream agriculture and forestry EO-based services under a common service delivery platform, leveraging GEOSS and Copernicus data and products, which can be complemented by the assimilation of other very-high-resolution EO and in situ data streams. This presentation focuses on the forestry products of the NextLand project, which include forest change detection (deforestation and single tree cut), forest fire burn scar, forest density and statistics, tree health indices, and forest classification. An overview of the products will be presented to give users a good overall idea of their robustness.
The Forest Change Detection product serves to calculate the area of forest loss, which is very useful to governmental, inter-governmental, private and non-governmental sectors for better decision making. Large-scale deforestation is an extremely harmful practice because of its direct impacts on local biodiversity and terrestrial climate. This activity is very common in underdeveloped countries with large forested areas, causing damaging consequences such as increasing atmospheric carbon, droughts, and the extinction of important native plant and animal species. Forest Burn Scar refers to areas destroyed by forest fire, one of the most severe natural hazards in the forestry sector. Fire impacts ecological structure and atmospheric systems, and has detrimental effects on the living environment. Detecting and assessing the spatial extent and distribution of burn scars helps forest managers carry out efficient vegetation recovery and post-fire management. Forest Density and Statistics comprises several products on a monitoring platform for tree growth trends, providing useful information for forest managers and the wider public about tree crown density and the extent or sparsity of trees. Tree Health Indices cover the Normalized Difference Vegetation Index (NDVI), the Fraction of Photosynthetically Active Radiation (fPAR), the fraction of green vegetation cover (fCOVER), the Leaf Area Index (LAI) and the Canopy Chlorophyll Content (CCC); they are mainly used to support decisions on tree and forest health. The Forest Classification product provides information about the location of selected tree species and is used in forest management to estimate growth and to monitor forest health.
With the exception of the Forest Density and Statistics product, all of the products described above are generated from Sentinel-2 data. The use of data provided free of charge reduces the overall cost of the service. At the same time, reliance on data from the satellites of the European Copernicus programme ensures continuity and regularity in the provision of source data. Various methods of satellite data processing were used in the development of the products, from simple index calculations, e.g. for Forest Fire Burn Scar and NDVI, to advanced machine learning models in the case of Forest Classification.
In Germany and many other industrial nations, it is a political goal to reduce area consumption and land take. Decision-making and area statistics in this context are mainly based on official cadastral data. In Germany, this source of data has two main drawbacks: First, it is produced and updated at different temporal intervals in the federal states, such that a Germany-wide dataset never depicts one single reference year. Second, it mainly holds information on land use rather than land cover. This means that changes between years may occur in the data even if the physical properties of an area have not changed. Because of this, the “incora” project investigated the potential of Copernicus Sentinel-2 data to provide annual land cover and imperviousness maps from which spatial indicators of land take, urbanisation, and settlement and infrastructure can be derived. The overall project and the geospatial models for indicator calculation are presented in a dedicated companion contribution, while this poster presentation will serve as a complement to highlight in-depth the classification and imperviousness mapping approach.
The classification approach can be summarized as follows:
To minimise the need for preprocessing, we made use of Sentinel-2 Level-3A WASP data provided by DLR. These data represent atmospherically corrected monthly cloud-free temporal mosaics of standard Sentinel-2 tiles. As cloud coverage prevents truly cloud-free mosaics for every month (especially during winter), suitable months were preselected and remaining clouds were removed. Spectral indices were calculated from the time series, and temporal index statistics (minimum, maximum, median, range) were derived. Next, an automatic training data generation approach was implemented: for each of the six target classes (high vegetation, low vegetation, water, built-up, bare soil, and agriculture), a set of rules was applied based on auxiliary datasets (OpenStreetMap, Copernicus High Resolution Layers, the S2GLC Land Cover Map of Europe) as well as the spectral index statistics of the Sentinel-2 input data itself. From the resulting potential training areas, 50,000 pixels were sampled randomly to serve as training input for a Random Forest classifier.
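The sampling and training step can be sketched as follows with scikit-learn; the feature layout, class encoding and forest size are assumptions for illustration, not the project's exact configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_landcover_rf(index_stats, label_mask, n_samples=50_000, seed=0):
    """Train a Random Forest on per-pixel temporal index statistics.

    index_stats: (H, W, F) feature cube, e.g. min/max/median/range of the
    spectral indices over the monthly mosaics.
    label_mask: (H, W) integer array with class ids 1..6 inside the
    rule-based training areas and 0 elsewhere.
    """
    rng = np.random.default_rng(seed)
    rows, cols = np.nonzero(label_mask)                     # candidate training pixels
    take = rng.choice(rows.size, size=min(n_samples, rows.size), replace=False)
    X = index_stats[rows[take], cols[take]]
    y = label_mask[rows[take], cols[take]]
    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=seed)
    clf.fit(X, y)
    return clf

def classify(clf, index_stats):
    """Apply the trained model to the full feature cube."""
    h, w, f = index_stats.shape
    return clf.predict(index_stats.reshape(-1, f)).reshape(h, w)
```

Random sampling from the rule-based masks keeps the training set balanced against the very different extents of the six classes.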
The final land cover classification maps were validated for the federal state of North Rhine-Westphalia, as its open data policy allowed for direct access to official data to serve as reference. We found overall accuracies of 88.4%-92% across years with high accuracies for the class “built-up” (89.8% - 99.3%) which is the most relevant for the analysis of settlement and infrastructure.
Parallel to the land cover classification approach, we also carried out an imperviousness mapping based on a spectral unmixing algorithm. The imperviousness products estimate the soil sealing per pixel and are mapped as the degree of imperviousness in the range of 0-100%. As built-up areas feature semi- or fully sealed surfaces, we used the imperviousness layer to represent built-up land. Imperviousness change layers were then generated to detect built-up land change between years, which is represented as the degree of imperviousness change above an empirically derived threshold. One key advantage of this approach is that it is not prone to misclassifications that might be present in the annual classification products due to the discretization of spectral information. The main disadvantage is that this change product does not hold information on other land cover types.
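The change layer described above amounts to differencing the two per-pixel sealing maps and keeping increases above a threshold; a minimal sketch, where the 30-point threshold is a placeholder for the empirically derived value that the abstract does not state:

```python
import numpy as np

def builtup_change(imperv_t0, imperv_t1, min_change=30.0):
    """Derive a built-up change layer from two imperviousness maps (0-100 %).

    Pixels whose degree of soil sealing increased by more than `min_change`
    percentage points are flagged as new built-up land; strong decreases
    (e.g. demolition) are flagged separately.
    """
    delta = imperv_t1.astype(float) - imperv_t0.astype(float)
    new_builtup = delta > min_change    # sealing increased significantly
    removed = delta < -min_change       # de-sealing between the two years
    return delta, new_builtup, removed
```

Working on the continuous imperviousness degree rather than discrete classes is what avoids the classification-induced false changes mentioned above.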
Both classification and imperviousness change products complement each other regarding information content and could be further used for the calculation of static and dynamic spatial indicators of area consumption and land take.
The 2030 Agenda for Sustainable Development, including the global Sustainable Development Goals (SDGs), was adopted by heads of state and government at the UN Sustainability Summit in New York in September 2015. The global SDGs set the course for the global community and will contribute to sustainable development by 2030. Quantifiable indicators are used to evaluate and measure the progress of the 169 SDG targets within and across countries. The SDG framework currently consists of 231 indicators, which are based on demographic and statistical data or on data from models or surveys. While some countries have the means to measure these indicators, others lack the data, methods or relevant actors/stakeholders responsible for specific indicators, which challenges the development of consistent and comparable information. The Inter-Agency and Expert Group on SDG Indicators (IAEG-SDGs) developed the SDG Global Indicator Framework, a tier-based classification system that categorizes indicators into three tier classes based on the level of data availability and methodological development. The UN recently encouraged the use of Earth Observation (EO) data as an alternative data source for monitoring and supporting the implementation of the SDGs. The current fleet of available EO satellites, particularly those of the EU's Copernicus programme, provides freely available data from which timely statistical results can be derived, offering a consistent means of reporting and measuring the SDGs.
The Cop4SDGs (Copernicus for SDGs) project was launched by the Federal Agency for Cartography and Geodesy (BKG) and the German Environment Agency (UBA) and is funded by the German Federal Ministry for the Environment, Nature Conservation and Nuclear Safety. The aim of the project is to examine the extent to which the SDGs can be verified and reported using Copernicus data and products. In addition, data and indicator gaps in the national reporting process are to be closed.
In the first phase of the project, a systematic overview of the current state of the art and knowledge on satellite-based monitoring of sustainability indicators was developed. On the basis of global and national sustainability indicators, a comprehensive review was carried out for selected indicator areas. As a result, 14 indicators were identified that can be measured directly or indirectly using EO data. For two of these indicators (6.6.1, Change in the extent of water-related ecosystems over time; 3.9.1, Mortality rate attributed to household and ambient air pollution), initial feasibility analyses were carried out as a starting point for further discussions with those responsible for reporting. In addition, further indicators related to Goal 15 are being analyzed. Besides the Sentinel data, Copernicus Land Monitoring Service products such as CORINE Land Cover, the High Resolution Layer Water and Wetness and the High Resolution Vegetation Phenology and Productivity have been explored for calculating the different indicators. Methods and storymaps will be developed to help with the future calculation of these indicators. Subsequently, the potential of transferring the results to other environmental policy measures will be examined and policy recommendations will be developed.
Solid waste management is an essential utility for sustainable urban living, yet it long remains a governance challenge that requires holistic solutions, particularly in the Global South where population density is very high (Ferronato & Torretta, 2019). The objective of this study is to develop a new approach using Landsat-8 and Sentinel-2 time series to investigate the spatial distribution of open waste dumps in Vietnam. There is clear scientific evidence of water and soil contamination caused by open waste dumping (Eguchi et al., 2013; Sharma, Gupta & Ganguly, 2018), which is a dominant waste disposal method in many South Asian countries. Household waste in those places often contains a high proportion of organic waste (Pfaff-Simoneit et al., 2021) and is frequently disposed of together with certain waste types such as waste from Electrical and Electronic Equipment (WEEE) and Construction and Demolition (C&D) waste. Not only do open dumps pollute surface and groundwater sources via leachate migration carrying heavy metals, PBDEs, HBCDs and other hazardous substances (Alam et al., 2017); they also release harmful gases into the atmosphere, even more so than controlled landfills, including methane, whose global warming potential is 25 times higher than that of CO2 (Ferronato & Torretta, 2019). Besides, open waste dumping is one of the major sources of marine plastic pollution: it is estimated that up to one million tons of plastic waste are released into the waters off Vietnam's coast via rivers every year (Meijer et al., 2021).
Although there is a significant knowledge gap on the spatial distribution of open dumps owing to insufficient data, very limited research has exploited the potential of open remote sensing data to trace open dumping activities both spatially and temporally, and most of the relevant research focuses on risk assessment of landfills at known locations. The present study assesses the feasibility of open dump detection using Landsat-8 and Sentinel-2, using a thermal anomaly and a methane proxy for thresholding, which reduces the dependency on labelled data for training machine learning models. It aims to detect dumping activities in a scalable manner via hierarchical classification, retrieving thermal radiation and methane columns from the time series, and explores the potential of cloud computing in Google Earth Engine for time-wise spatial analysis. First, potential open dumping sites, namely bare soil, are extracted from Sentinel-2 imagery with a random forest classifier trained on labelled land-use data over the region. Then, thermal radiation and methane columns are derived monthly from 2019 to 2020 using band 10 of Landsat-8 and bands 11 and 12 of Sentinel-2. The methane indicators are developed using the multi-band-multi-pass (MBMP) retrieval, which was proposed to monitor methane point sources (Varon et al., 2021). The output time series, together with texture measures of the satellite imagery, is used for thresholding the potential sites to extract probable open dumping sites. The results indicate that the present model outperforms conventional multi-class classification, which is strongly subject to training data sufficiency and class balance, overcoming the hurdles of limited training data and the heterogeneity of open dumping sites.
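The MBMP methane proxy compares Sentinel-2 band 12 (stronger CH4 absorption) against band 11 within a single pass and then differences that signal against a plume-free reference pass; the sketch below is a simplified reading of the Varon et al. (2021) scheme, not the study's exact implementation:

```python
import numpy as np

def mbsp_fractional_signal(r11, r12):
    """Multi-Band Single-Pass (MBSP) fractional signal: band 12 is scaled
    to band 11 by least squares over the scene, and the residual fractional
    difference highlights methane absorption (negative values)."""
    c = (r11 * r12).sum() / (r12 * r12).sum()   # scene-wide scaling factor
    return (c * r12 - r11) / r11

def mbmp_fractional_signal(r11_t, r12_t, r11_ref, r12_ref):
    """Multi-Band Multi-Pass (MBMP): differencing the MBSP signal of the
    target pass against a plume-free reference pass suppresses static
    surface structure that MBSP alone cannot remove."""
    return mbsp_fractional_signal(r11_t, r12_t) - mbsp_fractional_signal(r11_ref, r12_ref)
```

In the hierarchical scheme above, pixels with persistently negative MBMP signal over the candidate bare-soil sites would then be thresholded as probable dumping activity.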
The present study proposes an Earth observation-based approach to investigate the development of open dumping activities in Vietnam, which can potentially bridge the gap between local activities and regulatory efforts, can be expanded to a larger scale, and could contribute to risk analysis, urban planning, and marine litter tracing with further interdisciplinary efforts.
Despite its critical importance for sustainable solid waste management and policy formulation, systematic spatial data on open dumping activities in Vietnam is lacking, and there is a significant knowledge gap on scalable methods to detect anomalous and heterogeneous open waste dumps using remote sensing data. The application of a thermal anomaly and a methane indicator in a hierarchical classification outperforms the conventional machine learning approach for detecting open dumping activities and can potentially be applied at a national scale.
References
Alam, A., Tabinda, A. B., Qadir, A., Butt, T. E., Siddique, S., & Mahmood, A. (2017). Ecological risk assessment of an open dumping site at Mehmood Booti Lahore, Pakistan. Environmental Science and Pollution Research, 24(21), 17889-17899.
Eguchi, A., Isobe, T., Ramu, K., Tue, N. M., Sudaryanto, A., Devanathan, G., ... & Tanabe, S. (2013). Soil contamination by brominated flame retardants in open waste dumping sites in Asian developing countries. Chemosphere, 90(9), 2365-2371.
Ferronato, N., & Torretta, V. (2019). Waste mismanagement in developing countries: A review of global issues. International journal of environmental research and public health, 16(6), 1060.
Kapinga, C. P., & Chung, S. H. (2020). Marine plastic pollution in South Asia. Development Papers, 20-02.
Meijer, L. J. J., et al. (2021). More than 1000 rivers account for 80% of global riverine plastic emissions into the ocean. Science Advances, 7(18). DOI: 10.1126/sciadv.aaz5803.
Pfaff-Simoneit, W., Ziegler, S., Long, T.T. 2021: Separate collection and recycling of waste as an approach to combat marine litter - WWF pilot project in the Mekong Delta, Vietnam, in: Kuehle-Weidemeier, Matthias (2021): Waste-to-Resources 2021, 9th International Symposium Circular Economy, MBT, MRF and Recycling, online conference, ICP Ingenieurgesellschaft mbH, Karlsruhe 2021.
Sharma, A., Gupta, A. K., & Ganguly, R. (2018). Impact of open dumping of municipal solid waste on soil properties in mountainous region. Journal of Rock Mechanics and Geotechnical Engineering, 10(4), 725-739.
Tun, T. Z., Kunisue, T., Tanabe, S., Prudente, M., Subramanian, A., Sudaryanto, A., ... & Nakata, H. (2021). Microplastics in dumping site soils from six Asian countries as a source of plastic additives. Science of The Total Environment, 150912.
Varon, D. J., Jervis, D., McKeever, J., Spence, I., Gains, D., & Jacob, D. J. (2021). High-frequency monitoring of anomalous methane point sources with multispectral Sentinel-2 satellite observations. Atmospheric Measurement Techniques, 14(4), 2771-2785.
Land degradation neutrality (LDN) under Agenda 2030 is the UNCCD's scientific, political, economic, and social conceptual framework for sustainable development in the epoch of world economy decarbonization towards net-zero carbon by 2050. To monitor the LDN process for decision making, the SDG 2.4.1 and 15.3.1 indicators were proposed at the international and national levels.
To calculate the NDVI index for eight Ukrainian regions, Sentinel-1, Sentinel-2 and Landsat-8 images accessed through the CREODIAS platform were analyzed together with Ukrainian in-situ data.
Calculation of the NDVI index, available from Landsat 8 and Sentinel-1/2 EO data, comes first for the different regions of Ukraine: Chernihiv, Mykolaiv, Dnipropetrovsk, Kherson, Vinnytsia, Zhytomyr, Cherkasy and Sumy. In Ukraine, as around the world, NDVI is often used to monitor drought, forecast agricultural production, assist in forecasting fire zones, and map desertification. Farming apps such as Crop Monitoring integrate NDVI to facilitate crop scouting and to bring precision to fertilizer application and irrigation, among other field treatment activities, at specific growth stages. NDVI is preferred for global vegetation monitoring since it helps to compensate for changes in lighting conditions, surface slope, exposure, and other external factors.
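NDVI itself is a simple band combination of near-infrared and red reflectance (commonly Sentinel-2 bands B8/B4 and Landsat 8 bands B5/B4); for reference:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red reflectance.

    Values near +1 indicate dense green vegetation, bare soil falls near 0,
    and water is typically negative. `eps` guards against division by zero
    over no-data pixels.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)
```

The normalization by the band sum is what provides the robustness to illumination and slope effects noted above.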
The interpretation of the NDVI indexes, illustrated by the October 2021 normalized difference vegetation indexes for the Chernihiv, Mykolaiv, Dnipropetrovsk, Kherson, Vinnytsia, Zhytomyr, Cherkasy and Sumy regions of Ukraine, is underpinned by a conceptual model that perceives land as a socio-ecological system (a coupled human-natural system); hence, labelling a land unit in Ukraine as degraded requires a synergy of utilitarian (human-driven) and ecological (ecosystem function and structure) perspectives in the context of calculating the SDG 2.4.1 and 15.3.1 indicators. Land cover classification systems derived from EO data on the CREODIAS platform and from in-situ data are important tools for describing the natural and urban environment of Ukraine for different research needs and for the effective organization of agricultural workflows [1, 2].
The authors acknowledge the funding received by Horizon 2020 e-shape project (Grant Agreement No 820852).
REFERENCES
1. Nataliia Kussul, Mykola Lavreniuk, Andrii Kolotii, Sergii Skakun, Olena Rakoid & Leonid Shumilo (2020) A workflow for Sustainable Development Goals indicators assessment based on high-resolution satellite data, International Journal of Digital Earth, 13:2, 309-321, DOI: 10.1080/17538947.2019.1610807.
2. N. Kussul, A. Shelestov, M. Lavreniuk, I. Butko and S. Skakun, "Deep learning approach for large scale land cover mapping based on remote sensing data fusion," 2016 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2016, pp. 198-201, doi: 10.1109/IGARSS.2016.7729043.
Creating sustainable development index combining satellite data with other data sources: application on the assessment of the sustainability of the global tourism industry
Abstract
Since its creation, the aim of the Human Development Index (HDI) has been to rank countries based on human, economic, health and education data.
In 2010, the Human Development Report introduced an Inequality-adjusted Human Development Index (IHDI). While the simple HDI remains useful, it stated that "the IHDI is the actual level of human development (accounting for inequality)", and "the HDI can be viewed as an index of 'potential' human development (or the maximum IHDI that could be achieved if there were no inequality)".
These kinds of improvements are crucial, but before 2020 there was no place for the environment in the calculation of these indicators. Yet we can observe the following correlation: generally, the higher the HDI, the stronger the pressure on the environment.
Tourism is one of the pillars of the modern economy and a significant vector of human development, constituting more than 10% of global GDP. The number of international tourists was expected to hit the 1 billion mark in 2020 and is forecast to rise to 1.8 billion by 2030, making it crucial to find efficient ways to handle this growth and preserve fragile destinations. Additionally, more than 65% of European travellers have declared that they strive to make their travels more sustainable but cannot find the right information or a way to assess their environmental footprint.
The present COVID crisis underlines the importance of steering tourism activities toward sustainable development on a global scale. Whether for environmental reasons or socio-economic motivations, sustainability is now at the core of all tourism organizations and roadmaps.
We present here the implementation of a unique sustainable development indicator for the tourism industry, the Tourism Sustainable Development Index (TSDI). It combines satellite Earth observation data, in-situ measurements and statistical data. These socio-economic datasets, combined with environmental data, enable the computation of a single index assessing the sustainability of tourism in a given area.
This indicator is resilient regarding data gaps and at the same time flexible to accommodate heterogeneous and new data sources over time. Moreover, the TSDI is meaningful from a scientific perspective and reflects the correlation between the economic development of the tourism activity and its environmental impact.
The TSDI mathematical formulation combines human development and environmental impact factors. The environmental impact factor calculation includes Earth observation satellite data, especially air quality from Sentinel-5P atmospheric measurements, water quality from Sentinel-3 oceanography sensors and vegetation cover from Sentinel-2 optical images. The use of satellite remote sensing data is key and presents many benefits: space data is a reliable and objective measurement of the state of our planet, and it can be systematically applied anywhere without depending on the ground situation and at no additional cost thanks to the free and open data policy of the Copernicus programme. Space data is combined with other sources evaluating important environmental factors such as CO2 emissions and biodiversity to provide a complete picture of the environmental state of an area.
The human development factor includes indicators on urbanization and tourism activity as well as classical human development indicators already included in the HDI such as education index and life expectancy.
The formulation of the TSDI includes a notion of "boundary" that enforces the idea that if a region, location or country is doing well along one dimension, this cannot compensate for doing worse on other parameters. The calculation method "deactivates" a parameter beyond its boundary, limiting the impact of individual factors on the final index and favoring areas where all sustainable development factors are under control. There is no free lunch with the TSDI!
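Since the exact TSDI formulation is not given here, the boundary mechanism can only be sketched under assumptions: below, each normalized factor is capped at its boundary so that over-performing on one dimension cannot offset another, and the gated factors are aggregated with a geometric mean. Both the capping direction and the aggregation rule are assumptions for illustration, not the published TSDI formula:

```python
import numpy as np

def tsdi_sketch(factors, boundaries):
    """Hypothetical boundary-gated index.

    factors: normalized sustainability factors in [0, 1] (higher is better).
    boundaries: per-factor caps; any score above its boundary is
    "deactivated" down to the boundary, so excess performance on one
    dimension cannot compensate for weakness elsewhere. The gated factors
    are combined with a geometric mean, which further penalizes any
    single very low factor.
    """
    f = np.minimum(np.asarray(factors, float), np.asarray(boundaries, float))
    return float(np.exp(np.log(np.clip(f, 1e-12, None)).mean()))
```

With this gating, an area scoring (0.9, 0.5) against boundaries of 0.8 receives the same index as one scoring (0.8, 0.5): the surplus on the first dimension buys nothing.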
A large part of the urban population in Low- and Middle-Income Country (LMIC) cities lives in deprived urban areas (DUAs), i.e., areas that are deprived in terms of housing and environmental conditions, services, infrastructure etc. (UN-Habitat, 2016). For example, in African cities, the urban poor form the majority of the urban population. However, data on the location and characterisation of DUAs are commonly not available. The absolute and relative share of population living in DUAs calls for acknowledging their existence and understanding the local conditions to develop tailored improvements. Their monitoring is also a global challenge linked to the Sustainable Development Goals (SDGs).
We present the joint efforts of two initiatives: the Integrated Deprived Area Mapping System (IDEAMAPS) network (https://ideamapsnetwork.org/), which leverages the strengths of the four current approaches to DUA mapping, and the SLUMAP Earth Observation (EO) project (https://slumap.ulb.be/), which aims at overcoming the limitations to DUA mapping posed by the high cost of imagery acquisition and processing.
To support routine and accurate mapping and characterisation of DUAs, the IDEAMAPS network developed the Domains of Deprivation Framework to identify relevant geospatial and EO data for urban deprivation mapping and analysis (Abascal et al., 2021). This framework builds on existing deprivation frameworks (e.g., the English Deprivation Index). The main rationale for modelling deprivation not as a binary phenomenon but as a continuous layer is the high level of uncertainty in slum versus non-slum maps, as even local experts have difficulty agreeing on boundaries. Existing deprivation mapping frameworks typically use census data, which suffer from availability issues and low temporal granularity and quickly go out of date in fast-growing and transforming LMIC cities. The IDEAMAPS Domains of Deprivation Framework groups locally meaningful DUA indicators into 9 domains at 3 scales. Two domains reflect deprivation measured within households. Four domains reflect area-level deprivations (social hazards & assets, physical hazards & assets, unplanned urbanisation, and contamination). Three domains reflect aspects of deprivation that relate to connectivity to the city (i.e., infrastructure, facilities & services, and governance). A guide for authorities and users (https://ideamapsnetwork.org/toolkit-goverment) provides guidance for the operationalisation of all domains, building on openly available geospatial data (e.g., night-time lights, air pollution) and contextual image features (e.g., using Sentinel-2 imagery).
Therefore, IDEAMAPS and SLUMAP work on DUA models that utilise open geospatial and EO data. In particular, EO data allow for routine mapping of DUAs and for characterising aspects related to the urban environment (e.g., waste accumulations, hazards), urban morphology (e.g., built-up densities, availability of open/green spaces) and infrastructure (e.g., availability of street lights, road access). EO approaches are commonly top-down, with no or limited user interaction, whereas our framework combines EO data with user engagement and the inclusion of data from local communities, acknowledging the importance of citizen science. Thus, the information needs and requirements of different user groups are the guiding principles for the development of a flexible DUA mapping system.
Results of machine learning models, using classical algorithms such as Random Forest as well as popular deep learning models, show that with open and freely available EO data, DUAs can be mapped and characterised at city scale. We showcase results for several African cities (e.g., Nairobi, Kisumu, Lagos). The degree-of-deprivation mapping approach uses a gridded system that labels each grid cell with a continuous deprivation index value (between 0 and 1), from the least to the most deprived grid cells. Local data collected together with community groups in the respective cities are used to train and validate the models. Outputs show that patterns of deprivation match well with the locations of locally known “slum” areas and also highlight other DUAs (e.g., atypical slums, low-income housing areas). The continuous least-to-most-deprived scale blurs the boundaries of slums or informal settlements (reducing the likelihood of contributing to stigmatisation), while supporting multiple use cases for these maps. This flexible mapping system enables local users, e.g., for local SDG 11.1.1 monitoring, to apply locally meaningful thresholds to classify results into binary maps of slums versus non-slums. This can be done within a local engagement process, rather than being based on the assumptions of EO experts with limited to no local contextual knowledge. Such a locally acceptable binary classification could be used for regular local SDG reporting.
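The locally chosen thresholding step can be sketched in a few lines. The threshold value and array layout below are illustrative assumptions, not part of the IDEAMAPS system itself:

```python
import numpy as np

def binarise(deprivation_grid, local_threshold=0.6):
    """Turn a continuous 0-1 deprivation grid into a binary DUA map.

    The threshold is meant to be a locally negotiated value;
    0.6 here is purely illustrative.
    """
    grid = np.asarray(deprivation_grid, dtype=float)
    return (grid >= local_threshold).astype(np.uint8)

grid = np.array([[0.15, 0.72],
                 [0.91, 0.40]])
binary_map = binarise(grid)  # cells at/above the threshold flagged as deprived
```

Because the continuous layer is preserved, different user groups can derive different binary maps from the same underlying product simply by changing the threshold.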
The proposed Integrated Deprived Area Mapping System (IDEAMAPS) framework (https://ideamapsnetwork.org/) provides a flexible gridded mapping system based on this concept. SLUMAP showcased the potential of free EO data for, on the one hand, producing city-scale maps that localise the diversity of deprivation and, on the other hand, mapping their characteristics with a high level of detail. The proposed approach has the advantage of being scalable and transferable, and it allows for local adaptations in the form of a user-centred mapping approach. Results support cross-disciplinary information needs on DUAs and show the potential of EO data to be combined with geospatial data for local SDG monitoring.
References:
Abascal, Á., Rothwell, N., Shonowo, A., Thomson, D. R., Elias, P., Elsey, H., . . . Kuffer, M. (2021). “Domains of Deprivation Framework” for Mapping Slums, Informal Settlements, and Other Deprived Areas in LMICs to Improve Urban Planning and Policy: A Scoping Review. Preprints 2021, 2021020242. doi:10.20944/preprints202102.0242.v1
UN-Habitat. (2016). Slums Almanac 2015-16. Tracking Improvement in the Lives of Slum Dwellers. Nairobi, Kenya.
In recent years there has been increased interest in the concept of “Smart Statistics”, which can be viewed as the future extended role of official statistics, whereby traditional data sources (survey and administrative data) are complemented by information from sensors (such as satellite imaging and a host of environmental sensors), smartphones (including GPS), behavioral data (e.g. data from online searches, website visits and activity such as travel or accommodation payments) or even social application data (comments on social media, etc.). These data sources can provide entirely new insights into social and economic trends to drive public policy making.
Earth Observation (EO) can constitute an important component of smart statistics, being a subset of the aforementioned Big Data. Organisations such as the United Nations Statistical Office (UNSTAT), the European Statistical Office (Eurostat, e.g. through the ESSNet Big Data II project with explicit EO activities) as well as many national statistical institutes/offices (NSIs/NSOs) and supporting organisations are currently seeking to incorporate satellite imagery and other EO data sources (such as models and in-situ platforms) into their operational workflows. The SDG framework is of particular interest, providing a common ground for exemplifying such interactions, as the SDG Goals and indicators are pursued by both the statistics and EO communities. ESA has supported several pilot studies in this intersectional area, such as EOStat-Poland, Sen4Stat, EO4Poverty and EcoServe, primarily focused on national agricultural statistics and the assessment of environmental services. Recognising the need to go beyond such pilots, in 2021 ESA released an ITT on “EO for Smart Statistics”, which will be implemented through the GAUSS project.
GAUSS (Generating Advanced Usage of Earth Observation for Smart Statistics) is an 18-month project led by the National Observatory of Athens (NOA), working together with FMI (Finnish Meteorological Institute), IGIK (Institute of Geodesy and Cartography of Poland) and Evenflow (a Brussels SME). It aims to provide specific demonstrations of the use of EO to meet key reporting needs of the corresponding national NSOs in the areas of air quality (AQ) statistics, water statistics and green indicators for natural capital. It will also develop best practices for an interested user community to support the further development of such workflows and solutions beyond the scope of the project.
At the core of the GAUSS project is a set of case studies that meet real, identified needs of the national NSOs, with the underlying aim of showcasing the added value EO brings to current workflows in these fields as well as ensuring the robustness of the results, taking into account the requirements of official statistics. In Greece, the project will develop high-resolution AQ statistics for key atmospheric pollutants (relating to AQ Directive reporting and SDG 11.6.2), working at the Local Administrative Unit level rather than the currently available coarse regional statistics. To do this it will fuse EO data from Sentinel-5P, regional models of the Copernicus Atmosphere Monitoring Service (CAMS) and data from a national network of low-cost AQ sensors. In Finland (relating to SDG 6.3.1), the project will fuse Copernicus data with in-situ data from webcams and other sensors to create an improved set of products on snow cover throughout the year. It will also create a novel product for assessing hydrological drought, based on the fusion of satellite altimeter data (Jason and Sentinel-3) with in-situ measurements. In Poland, a set of statistics on the availability and quality of green areas at commune level, a key parameter for assessing regional wellbeing (relating to Goal 3), will be created using a range of satellite sensors. In addition, the project will replicate the AQ and hydrological drought indicator workflows for Poland, confirming the transferability of the methods.
Based on these case studies, the project will elaborate a future roadmap with recommendations for further integration of EO into Smart Statistics. This will take into consideration not just the technical issues remaining to be addressed, but also the operational and regulatory barriers to increased adoption of such products in official statistics. To help identify these barriers, the project will be supported by a steering group on which key statistical agencies will be represented. This will also allow the project to exploit synergies with other initiatives in this area.
In conclusion, EO data has the potential to meet many key needs of the European NSOs. By using key case studies to explore the practical barriers to increased adoption, the GAUSS project aims to define a pathway towards real operational use of such data in official statistics.
Accurate urban population distribution maps are necessary prerequisites for a wide range of applications related to urban sustainability and planning, epidemiology and natural hazards (population at risk), and they are crucial elements for the monitoring of the Sustainable Development Goals (SDGs). However, the quality of population data in data-scarce environments such as the Global South (GS) is unreliable in terms of both temporal and spatial consistency. The effects of these data gaps are most evident in Sub-Saharan Africa (SSA), where census data are not easily accessible, often outdated or not available at spatial levels that allow for sophisticated analyses. International efforts such as WorldPop (Tatem, 2017), GHS-POP (Freire and Halkia, 2014) and LandScan (Dobson et al., 2000) have helped mitigate this gap by providing openly accessible, global population distribution products at relatively high spatial resolutions (100 m to 1 km). Nonetheless, their quality at the intra-urban level is limited, as they were mostly designed for large-scale analysis (i.e., global or national level). At the same time, SSA is facing a rapid urbanization shift, with current estimates placing more than 60 % of the African population in cities by 2050. This has led to the proliferation of deprived neighbourhoods that often lack basic services such as adequate open space and access to clean water. As recent research has shown, deprived urban communities are vastly underestimated in current global population products (Thomson et al., 2021), which severely hinders efforts to address the needs of urban residents and enhance evidence-based policy making. Thus, the need to better represent the urban population, both in terms of accuracy and spatial detail, is imperative.
In this research, we harness the power of Deep Learning (DL) methods and openly accessible EO data, such as Sentinel-2 MSI imagery, to model and map urban population patterns in a selection of SSA cities at a fine spatial scale. Building on disciplinary knowledge, particularly regarding the required EO data combinations, predictive performance, transferability and parsimony, our goal is to create the building blocks for reliable urban population products, tailored to meet the needs of SDG indicators such as accurately measuring the population living in informal settlements.
As a proof of concept, we apply our framework in Dakar (Senegal) and Ouagadougou (Burkina Faso), located in the Sahelian zone of Africa. Both cities have exhibited strong urban and population growth trends in recent decades and provide diversity with respect to building patterns and urban morphology.
To provide training and validation data for our DL models, we make use of the 2013 census in Dakar, which was available at a neighbourhood level (1250 administrative units), and a detailed population survey in Ouagadougou, available at a coarser scale (55 administrative units). Based on these sources, existing high-quality gridded population datasets are available at a 100-metre resolution; these were derived from very-high-resolution satellite data and served as the building blocks to feed our DL models (Grippa, 2018).
We propose a DL approach that uses Sentinel-2 MultiSpectral Instrument (MSI) patches of 100 x 100 pixels (i.e., 1 km²) as inputs to a residual neural network, commonly known as ResNet (He et al., 2016). Specifically, the first layer of the ResNet-18 architecture was modified to accommodate the 10 m spectral bands of Sentinel-2 (blue, green, red and near infrared). Furthermore, ReLU is used as the activation function for the output layer in order to prevent negative population predictions. The network was trained for 20 epochs with a batch size of 8 and a learning rate of 10⁻⁴, using AdamW as the optimizer. Image augmentations in the form of flips and rotations were incorporated into the training.
The preliminary results are promising (Figure 1). The DL models are able to predict population counts with high accuracy both at the grid and the administrative census level (coefficients of determination of 0.84 and 0.80, respectively). In the final version we will present a thorough error analysis and maps demonstrating the potential of this product for near-real-time population mapping.
References
Dobson, J. E., Bright, E. A., Coleman, P. R., Durfee, R. C., Worley, B. A., 2000. LandScan: a global population database for estimating populations at risk. Photogrammetric Engineering and Remote Sensing, 66(7), 849–857.
Freire, S., Halkia, M., 2014. GHSL application in Europe: Towards new population grids. European Forum for Geography and Statistics, Krakow, Poland.
Grippa, T., 2018. Dakar population estimates at 100x100m spatial resolution - grid layer - Dasymetric mapping.
He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770–778.
Tatem, A. J., 2017. WorldPop, open data for spatial demography. Scientific Data, 4(1), 1–4.
Thomson, D. R., Gaughan, A. E., Stevens, F. R., Yetman, G., Elias, P., Chen, R., 2021. Evaluating the Accuracy of Gridded Population Estimates in Slums: A Case Study in Nigeria and Kenya. Urban Science, 5(2), 48.
As the climate crisis becomes unignorable, it is imperative that new services are developed that not only address customer needs but also take into account their impact on the environment. Society is currently experiencing a green transition, which is revolutionizing business models, technology innovation and use, the consumption and offering of applications, and the sharing of knowledge involving both human and machine spheres.
The Telecommunication and Integrated Application (TIA) Directorate of ESA wishes to support the green transition through the Green Value and Sustainable Mobility (GVSM) Initiative.
Green Value is defined as a low-carbon, resource-efficient and socially inclusive economy, pursuing knowledge and practices that can lead to more environmentally friendly and ecologically responsible decisions and lifestyles. This will help protect the environment and sustain its natural resources for current and future generations. Topics covered under Green Value include, but are not limited to: food production, energy transition and urban sustainability.
Sustainable Mobility, an unavoidable element of the green transition, refers to the broad subject of mobility that is sustainable in the sense of social, environmental and climate impacts, as well as being economically viable. It includes the intelligent use of energy, digital technology and transport infrastructure for land, air, sea and rail transport. Sustainable mobility ensures that transport systems meet society’s economic and social needs whilst minimising their impact on the environment.
Each of the topics covered in the GVSM initiative, and their diverse markets, entail both commercial opportunities and technical challenges. The integration of innovative space and non-space digital technologies and infrastructures is required to optimally develop and deploy commercially sustainable solutions addressing the diversity of use cases. Satellite connectivity, including future 5G networks, and digital technologies such as digital twins, AI, machine learning and cloud-based applications are key enablers of the green transition and contribute to the SDGs.
GVSM main objectives are:
• Support the emergence of green services leading to the decarbonisation of the major greenhouse gas (GHG) generating sectors (e.g. transport, energy, industry), establishing Space as part of a green ecosystem of users.
• Coordinate new public-private partnership projects (PPPs), developments (technology, products, systems, and applications) and deployment of solutions addressing EU Green Deal areas.
• Demonstrate the benefits of connectivity infrastructure (small-sat constellations, IoT, optical communication) as enabler of sustainable green services.
• Through demonstration and validation opportunities prove that space-based solutions can deliver innovative space-powered business propositions addressing the climate and environmental challenges, thus paving the way towards the deployment of operational systems.
• Assess the environmental “green” impact of developed systems, technologies and applications according to different indicators, including CO2 reduction.
GVSM activities will contribute to the objective of the ESA Agenda 2025 “Make Space for Europe”, and especially to the “Space for Green Future” accelerator, bringing forward the contribution that connectivity and integrated applications can make to support all sectors of the Green economy, while also stimulating and accelerating the growth of a competitive European Downstream and Upstream Industry.
The paper will describe how services that leverage connectivity, space and digital technologies, covered in the GVSM framework, are pivotal for decarbonisation and for delivering SDGs such as clean water, affordable and clean energy, and sustainable cities and communities. The ESA Business Applications Space Solutions (BASS) programme has already supported sustainable development: 80 MEUR have been invested by Industry and National Delegations in green business applications. Additionally, other PPP initiatives have been pursued. The Iris Programme has taken the initial steps towards reducing the environmental footprint of commercial aviation: thanks to the implementation of 4D trajectory systems leveraging satellite connectivity, a reduction of CO2 emissions of 10 tons/year is achievable (SDG #11, sustainable transport).
Land use reflects the needs and haves of societies, and land-use change (LUC) is the main manifestation of human-environment interactions. LUC is thus at the heart of many sustainable-development challenges globally, either with direct (zero hunger, climate action and life on land) or indirect influence (no poverty, good health and well-being, clean water, economic growth, sustainable habitation and peace and justice). Detailed spatiotemporal information on different socioeconomic and environmental facets of land use is needed to support monitoring the trajectories of land systems, for scenario modelling, and for various other applications in science, policy, and management. Given the inherent complexity of land systems, these applications put extensive requirements on LUC data in terms of their consistency, spatiotemporal and thematic scope and detail, quality-assurance, and fitness-for-purpose. These requirements are not met by existing LUC data products.
While more extensive and higher-quality LUC data are generally needed, distinct user-groups have specific data needs. For example, climate modelers may only require LUC data at moderate spatial resolutions but need the gridded numbers to add up to national FAO accounts to ensure interoperability with other global models, while on-the-ground interventions or (sub-)national-scale decision making depends more critically on spatially accurate LUC information at the finest-possible resolutions. Moreover, agro-ecological models need to rely on accurate crop suitability, while theory-building in land-system science needs LUC data with minimal built-in assumptions.
We will present a global land-use timeseries based on a modelling pipeline that addresses consistency issues at its core, includes extensive quality documentation and is built in a modular fashion to tailor output to different downstream requirements. The dataset we will present is based on state-of-the-art remotely sensed information (integrating ESA CCI land cover with many other datasets), a vast database of harmonised national and sub-national agricultural census statistics, and millions of in-situ records of land-use observations collected from hundreds of individual sources, used both to determine the suitability of the Earth’s land surface for different land-use classes and commodities and to enable rigorous validation of the final spatial patterns. All this information is used optimally according to its individual strengths to ensure a high degree of spatiotemporal and thematic consistency, and the quality documentation allows downstream applications to make informed decisions about adequate use. The resulting global data products use a hierarchical classification scheme of land-use concepts that considers the complete terrestrial surface for the allocation of all land-use classes, enabling full thematic completeness in downstream applications. The modelling pipeline is, moreover, built in a modular fashion that allows the specification of model runs adapted to specific downstream needs by simply changing input data and parameters, as well as enabling continuous updating as different or improved versions of input data become available.
We have developed these data products within LUCKINet, an international collaborative network with a shared vision of integrating LUC knowledge and providing fit-for-purpose data products for multiple applications related to the SDGs. We envision a ‘socio-technological infrastructure’ of open-source tools and a growing number of contributors that build on our initial contribution to collectively further improve and apply this LUC information to help advance sustainable development.
One of the challenges in quantifying the pace of urban land use/land cover change in rapidly urbanizing cities of Sub-Saharan Africa is the demarcation of the real urban boundary. Furthermore, collected statistics are often outdated or aggregated to large, heterogeneous administrative entities, which are of little use for assessing the pace and trajectory of urban development. To assess Sustainable Development Goal 11, there is a need for timely and reliable data and tools to accurately monitor spatio-temporal patterns of urbanization and analyze land consumption. Satellite-based monitoring is a vital tool for regularly observing the changing urban environment, and advanced machine learning and Earth observation big data analytics have great potential for accurately detecting and extracting urban areas. In this study, we developed a method for delineating built-up areas in Kigali, Rwanda, using a U-Net-based impervious surface extracted from multi-temporal Sentinel-2 imagery. We further analyzed the spatio-temporal land consumption over the five years since 2016 using population statistics and the newly delineated urban areas. The proposed methodology enhanced the extraction of real urbanized areas, which were previously aggregated to the boundaries of large administrative entities. Since 2016, change in the landscape spatial pattern has been characterized by a high land consumption rate, mainly in the southern and eastern parts of Kigali. Our results illustrate that the urbanization pattern was characterized by infill, extension and leapfrogging. The framework proposed in the present study can be easily transferred to other Sub-Saharan African cities.
Key words: Sentinel-2 MSI, LULC classification, impervious surface, Land consumption, Kigali, Rwanda
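Analyses of land consumption against population growth, such as the one described above, typically report the SDG 11.3.1 ratio of land consumption rate to population growth rate. A minimal sketch using the standard logarithmic formulas, with placeholder figures (not results from this study):

```python
import math

def lcrpgr(urban_t1, urban_t2, pop_t1, pop_t2, years):
    """SDG 11.3.1: ratio of land consumption rate to population growth rate.

    urban_t1/urban_t2: built-up area (e.g. km²) at the start/end of the
    period; pop_t1/pop_t2: population at the same dates; years: period length.
    """
    lcr = math.log(urban_t2 / urban_t1) / years  # land consumption rate
    pgr = math.log(pop_t2 / pop_t1) / years      # population growth rate
    return lcr / pgr

# Placeholder values for illustration only
ratio = lcrpgr(urban_t1=100.0, urban_t2=150.0,
               pop_t1=1_000_000, pop_t2=1_200_000, years=5)
# ratio > 1 means land is being consumed faster than the population grows
```

A ratio well above 1, as in the placeholder case, is the signature of extension and leapfrogging patterns rather than compact infill.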
The exponential growth of Earth Observation (EO) data provides an increasing number of opportunities to monitor climate-driven natural hazards, which have a disproportionate impact in low-resource settings. Though the creation of analysis-ready data (ARD) has also proliferated, its use and adoption by governments in low-income countries and by humanitarian organizations remains low. A key driver of this is the lack of access to ARD within systems these users have direct influence over to meet their needs. The World Food Programme’s (WFP) Climate and Earth Observation unit has developed a suite of tools to address this gap, including a forthcoming Open Data Cube deployment and an open-source data visualization and analysis platform called PRISM.
PRISM is designed to improve utilization of the wealth of data available but not fully accessible to decision makers particularly in low-resource environments. This is especially true of Earth Observation data which typically requires specialized skills and technology infrastructure to make it useful for practitioners. PRISM is open-source software which has been developed by WFP since 2016 but with a major technology overhaul in 2020. Though the project is led by WFP, as open-source software it is open for collaboration and use by anyone.
The objectives of PRISM are to provide greater access to data on hazards, particularly those generated from Earth observation data; to bring together various components of risk and impact analysis in a single system; to complement data from remote sensing with field data; and to provide tools to governments and local partners that foster local ownership and utilization of data for decision-making particularly related to disaster risk reduction and climate-resilience. PRISM simplifies the integration of geospatial data on hazards such as droughts, floods, tropical storms, and earthquakes, along with information on socioeconomic vulnerability. It is provided to governments and humanitarian agencies as a free solution which can be easily adapted to local needs. PRISM combines data from these various sources to rapidly present decision makers with actionable information on vulnerable populations exposed to hazards, allowing them to prioritize assistance to those most in need.
With these objectives, and as a form of technical assistance to governments in low and middle income countries, PRISM contributes to SDGs 1 – No poverty, 2 – Zero hunger, 11 – Sustainable cities and communities, 13 – Climate action, and 17 – Partnerships for the goals. The platform facilitates climate risk monitoring and helps to focus attention on the most vulnerable populations. This geographic targeting is used by governments and humanitarian agencies to protect people living in poverty, and to prevent those living just above the poverty line from falling below poverty due to a climate-driven disaster, contributing to target 1.5 (build the resilience of the poor and those in vulnerable situations and reduce their exposure and vulnerability to climate-related extreme events and other economic, social and environmental shocks and disasters). PRISM has broad relevance for SDG 2. Extreme weather and climate change not only increase the risk of food insecurity among affected farmers, but also the broader food system. Droughts in particular can severely impact the production of key staple commodities, driving up food prices and contributing to food insecurity. As a monitoring system, PRISM provides insights into the extent and severity of these hazards as a tool for decision makers to reduce food insecurity.
Within SDG 11, target 11.5 aims to reduce economic losses from disasters, with a focus on protecting the poor in vulnerable situations. As a tool for geographic targeting based on vulnerability and hazard exposure, PRISM is used for disaster risk reduction by governments and partners to assist those most in need with adaptive social protection programs and early actions. Within SDG 13, target 13.1 seeks to strengthen resilience and adaptive capacity to climate-related hazards. PRISM is deployed as a form of technical assistance and capacity development in countries highly exposed to climate hazards, offering a platform to monitor extreme weather and implement well-targeted disaster risk reduction activities. This focus on capacity-building is a key strategic element of PRISM deployments where WFP’s country offices support national plans to achieve the SDGs, technical assistance and facilitation of South-South cooperation – contributing to SDG 17.
Configuration of the PRISM dashboard requires no coding experience, minimizing the need for niche software development and ICT infrastructure skills to support the application. The dashboard is built on common modern frameworks for web software development. It uses geospatial standards set by the Open Geospatial Consortium (OGC) to maximize interoperability with other systems and to ensure its longevity.
As PRISM requires external data as part of the deployment process, it is closely related to the forthcoming deployment of WFP’s global instance of the Open Data Cube platform. WFP’s Open Data Cube deployment provides climate monitoring data across more than 80 countries globally and is easily integrated into PRISM deployments. PRISM has also been configured to integrate data from other Open Data Cube deployments, providing a quick tool to display time-series raster data in an interactive dashboard. PRISM also integrates data from WFP’s related system, ADAM (Automatic Disaster Analysis & Mapping), which provides near real-time data on earthquakes, tropical storms and, soon, floods.
PRISM follows the Intergovernmental Panel on Climate Change (IPCC) disaster framework, in which risk and impact are the intersection of hazard, exposure and vulnerability. As such, PRISM can support decision-making at the national level at various stages of the disaster management cycle, notably preparedness and response at the sub-national level.
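As a minimal illustration of that intersection, risk can be computed cell by cell from normalised hazard, exposure and vulnerability layers. The multiplicative combination and the toy values below are assumptions for illustration, not PRISM's actual implementation:

```python
import numpy as np

def risk_index(hazard, exposure, vulnerability):
    """Cell-wise risk as the product of three normalised [0, 1] layers.

    A multiplicative combination is one common reading of the IPCC
    hazard x exposure x vulnerability intersection; PRISM's exact
    weighting may differ.
    """
    h, e, v = (np.asarray(a, dtype=float)
               for a in (hazard, exposure, vulnerability))
    return h * e * v

# Toy 2x2 grids: risk is high only where all three components are high
hazard        = np.array([[0.9, 0.2], [0.8, 0.1]])
exposure      = np.array([[0.8, 0.9], [0.1, 0.9]])
vulnerability = np.array([[0.7, 0.3], [0.9, 0.2]])
risk = risk_index(hazard, exposure, vulnerability)
```

The product form captures the framework's key property: a hazard over an unpopulated or resilient area, or a vulnerable population outside the hazard extent, both yield low risk.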
As PRISM facilitates the use of geospatial data over time, it can highlight areas and populations repeatedly exposed to hazards. In addition, hazard frequency products generated through various processes can also be easily integrated into PRISM for additional analysis. These are used by WFP and partners to highlight areas with repeated exposure to hazards to concentrate preparedness activities.
Recently, the project has completed integration of data collected on mobile devices using KoBo Toolbox – a free and open-source field data collection tool developed by the Harvard Humanitarian Initiative with wide adoption across the humanitarian and development sectors. This integration allows data collected in the field to be visualized alongside PRISM’s other data sources in real-time.
When a disaster is unfolding, PRISM provides information on the geographic extent and severity of a hazard from satellite products. By combining the bird's-eye view provided by satellite imagery with data collected from the field, PRISM provides real-time information that can rapidly inform response activities.
While PRISM has thus far focused on disaster risk reduction, the platform can be applied to multiple use cases where analysis-ready EO data is available but not yet in the hands of national institutions. PRISM fills an important gap in achieving the SDGs by enhancing national systems with EO data and providing a clear path to local ownership, so that countries can leverage this data for more informed decision-making contributing to multiple SDGs.
The 2030 Agenda for Sustainable Development, adopted by all United Nations Member States in 2015, provides a shared blueprint for peace and prosperity for people and the planet. The agenda lists 17 goals, the Sustainable Development Goals (SDGs), which set out a path to be followed by all countries towards global development by 2030. Earth orbiting satellites, and especially Low Earth Orbit (LEO) satellites, are in a privileged location to monitor our planet. This allows Earth Observation (EO) missions to contribute to the achievement of the SDGs, as extensively recognised by both space agencies and the UN.
In this paper a new methodology is presented to provide agencies, governments, and stakeholders a tool to assess the societal benefits of EO missions. The aim of the proposed approach is to quantify the social value rating of the missions through the achievement of the SDGs. For this purpose, nine Services provided to Earth by EO missions are identified: Built-up land (i.e. all kinds of man-made constructions), Agriculture, Wild nature, Geology, Limnology, Oceanography, Meteorology, Air Quality Monitoring and Hazards Monitoring. The evaluation of the social benefits is carried out by introducing four indices relating satellite payloads to these Services, which are linked to the SDGs. The four indices focus on the payload's temporal resolution, spatial resolution, spectral efficiency and Earth coverage.
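Since the abstract does not specify how the four indices are aggregated, the following is a purely illustrative sketch of such a rating scheme; the function name, the equal default weighting and the example scores are all assumptions, not the published methodology.

```python
# Illustrative sketch only: combine four normalized [0, 1] payload indices
# (temporal resolution, spatial resolution, spectral efficiency, Earth
# coverage) into a single social value rating. The aggregation rule and the
# weights are assumptions, not the paper's actual model.

def social_value_rating(indices, weights=None):
    """Weighted mean of the four payload indices."""
    keys = ("temporal", "spatial", "spectral", "coverage")
    if weights is None:
        weights = {k: 1.0 for k in keys}       # equal weighting by default
    total = sum(weights[k] for k in keys)
    return sum(weights[k] * indices[k] for k in keys) / total

# Example payload: strong revisit and coverage, weaker spectral efficiency.
rating = social_value_rating(
    {"temporal": 0.9, "spatial": 0.6, "spectral": 0.4, "coverage": 0.8}
)
```

A per-Service rating of this kind could then be mapped onto the SDGs that each Service supports; formalising that mapping is precisely the role of the four indices described above.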
The proposed model is applied to the Copernicus program in order to assess its contribution to the achievement of the SDGs of the 2030 Agenda.
The ARICA project is about “a multi-directional Analysis of Refugee/IDP (Internally Displaced Persons) CAmp areas based on HR/VHR satellite data” with the aim to better understand the mutual influence between the environment and refugee/IDP camp inhabitants. The overall goal is to investigate how satellite data could support the management of such camps during their whole life cycle to improve and secure living conditions, as well as reduce environmental impact. Four large camps in Africa, Asia and the Middle East have been observed with radar and/or optical satellite time series from Sentinel-1 (S1) and Sentinel-2 (S2): (1) the Mtendeli Refugee Camp in Tanzania (see Figure) that opened in 2016 and is planned to be closed in the near future, (2) the IFO-2 camp in Kenya that was closed in May 2018, (3) the Khanke IDP camp in Iraq that opened in 2014 and (4) the currently world's largest Kutupalong Refugee Camp in Bangladesh hosting more than 600,000 Rohingyas that have fled Myanmar, especially since 2017. S1 and S2 time series have been used to map land cover and land cover change in the surroundings of the camps, indicating forest loss during and since the installation of the camps, changes in agricultural areas, as well as revegetation after closure. Such forest observations can be compared with available products from the Global Forest Change program and put into their historic context. The area and evolution in size of the camps can be estimated and combined with single dwelling observations from very-high-resolution satellite data. Natural hazards like floods, landslides and drought can potentially be observed and mapped for coordinating emergency measures. The satellite observations are combined and associated with information collected through interviews with camp residents and stakeholders like NGOs, UNHCR, etc.
The mutual relation of refugee/IDP settlements and the natural environment will be highlighted in a socio-geographical analysis of the project, resulting in the determination of the most important factors of the camp inhabitants' activity which are the drivers behind the environmental changes observed by satellite. Results of the ARICA project will be made available through a dedicated open geo-platform. The presentation will give an overview of the current state of the ARICA project and present preliminary results.
The United Nations proclaimed 2015 as the International Year of Soils, thus emphatically underlining the importance of soil protection and its sustainable management as the basis for food security, the safeguarding of ecosystem functions and sustainable climate protection worldwide. The Agenda 2030 Sustainable Development Goal (SDG) 15 underlines the urgent need to “protect, restore and promote sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss” (United Nations General Assembly, 2015). In the National Sustainability Development Strategy (Bundesregierung, 2016), the German government has explicitly listed the goal of “halting soil degradation" and has included the "protection and sustainable use of soil as a resource" in its catalogue of measures (SDG 15/I.3). However, despite the vital functions and central importance of soil, it is estimated that around 24 billion tons of fertile soil are lost every year due to improper use (Soil Atlas, 2015). In Germany, the soil functions of about 56 hectares of soil are completely or partially damaged every day (Statistisches Bundesamt (Destatis), 2019). According to the European Environment Agency (EEA) and the European Commission, soil erosion, land take by settlement and transport, and material pollution are the main causes of the loss of soil and the impairment of soil functions (e.g., Panagos et al., 2015).
At present, data on land use and development is provided on the basis of the Agricultural Statistics Act (AgrStatG). The smallest survey unit is the municipality with a regular survey frequency of four years (excluding settlement and transport areas). A precise spatial location and assessment of soil or soil function losses with regard to its most important function today, food production, is currently not available; the loss of valuable soil in terms of fertility and yield capacity cannot yet be quantified and thus cannot be controlled.
Earth observation missions such as Landsat or the Copernicus Sentinels can provide information on the condition and properties of soils, and on the type, the intensity and the development of land use, nationwide and with high spatial resolution (Rogge, et al., 2018; Preidl, et al., 2020). Together with existing geodata (e.g., terrain models, soil maps, climate and weather data) this opens up new opportunities for a spatially explicit recording and evaluation of soil loss in support of a sustainable development.
Against this background, the SOIL-DE project aims to provide improved nation-wide indicators on the functionality, the yield capacity, the land use intensity and the vulnerability of agricultural soils. To accomplish that challenging task, historical satellite data from the LANDSAT archive (1984-2014), Sentinel-2 satellite data from the European Copernicus Program, and the European LUCAS soil database were explored in order to derive information on soil parameters such as soil organic carbon (Zepp et al., 2021). In addition, a set of six functions and potentials of the landscape's ecosystem capacity were selected and derived according to Marks et al. (1992). These include the biotic yield potential, the erosion resistance function to water and wind, the flow regulation function, and the physical-chemical and mechanical filter functions. Further, the Muencheberg Soil Quality Rating (Mueller et al., 2007) was applied. Functions and potentials were parameterized using official soil data from the German Soil Survey 1:200,000 (BÜK200), remote sensing data products on land use and land use intensity, a digital elevation model, and climatic data.
Currently, a framework is being set up to combine these indicators into a comprehensive high-resolution soil quality index of German soils under agriculture. On that basis, and for the first time, soil loss may be evaluated quantitatively and qualitatively. This will be achieved by using remote sensing-based information on land cover change (e.g., Corine Land Cover (CLC) change).
All data layers and products are made freely available to authorities, planners, and the public via web services in the SOIL-DE Viewer. Its flexible layout automatically adapts to different devices, including personal computers, tablets and smartphones.
References:
Bundesregierung (2016) Deutsche Nachhaltigkeitsstrategie, 256 pp. [online] https://www.bundesregierung.de/Webs/Breg/DE/Themen/Nachhaltigkeitsstrategie/1-die-deutsche-nachhaltigkeitsstrategie/nachhaltigkeitsstrategie/node.html, cited 21.03.2017.
Marks, R., Müller, M., Leser, H., Klink H.-J., 1992. Anleitung zur Bewertung des Leistungsvermögens des Landschaftshaushaltes (BA LVL). Zentralauschuss für deutsche Landeskunde, Selbstverlag, Trier.
Panagos, P., Borrelli, P., Poesen, J., Ballabio, C., Lugato, E., Meusburger, K., Montanarella, L., Alewell, C. (2015) The new assessment of soil loss by water erosion in Europe, Environmental Science & Policy, 54, 438-447, ISSN 1462-9011, https://doi.org/10.1016/j.envsci.2015.08.012.
Preidl, S., Lange, M., Doktor, D. (2020) Introducing APiC for regionalised land cover mapping on the national scale using Sentinel-2A imagery, Remote Sensing of Environment, Volume 240, Article 111673, DOI: 10.1016/j.rse.2020.111673
Rogge, D. Bauer, A., Zeidler, J., Mueller, A., Esch, T., Heiden, U. (2018) Building an exposed soil composite processor (SCMaP) for mapping spatial and temporal characteristics of soils with Landsat imagery (1984–2014), Remote Sensing of Environment, 205, 1-17, ISSN 0034-4257, https://doi.org/10.1016/j.rse.2017.11.004.
Statistisches Bundesamt (Destatis), 2019. Bodenfläche nach Art der tatsächlichen Nutzung. Fachserie 3 Reihe 5.1
United Nations General Assembly, 2015. Resolution adopted by the General Assembly on 25 September 2015: 70/1. Transforming our world: the 2030 Agenda for Sustainable Development. Seventieth session, Agenda items 15 and 116, A/RES/70/1.
Zepp, S., Heiden, U., Bachmann, M., Wiesmeier, M., Steininger, M., van Wesemael, B. (2021) Estimation of soil organic carbon contents in croplands of Bavaria from SCMaP soil reflectance composites. Remote Sensing 13 (16), 3141, 1–25 (ISSN 2072-4292)
COVID-19 disrupted supply chains throughout the agricultural industry, impacting production and harvest, policy, markets and trade, shipping infrastructure, researchers’ ability to conduct fieldwork and meet with farmers, and more. These supply chain disruptions directly affect the world’s ability to address the Sustainable Development Goal 2: End Hunger, as food availability can be viewed in part as a global logistics and policy issue rather than purely a food production issue. Recognizing the unique challenges brought on by the pandemic and the capacity of Earth observations data to fill unanticipated knowledge gaps – particularly those related to food supply and field-specific observations – NASA identified several projects to leverage existing resources to improve the availability of relevant data.
This presentation introduces two such initiatives taken under the NASA Harvest Program for food security and agriculture. First, the NASA Harvest COVID-19 Dashboard for Agriculture was developed under NASA’s Rapid Response element to provide access to and visualization capabilities for data related to the spread of COVID-19, crop conditions and production, food security, macroeconomic variables, and markets and trade. The novel collection of these datasets includes COVID-19 case counts and vaccination rates, inter-country travel restrictions, GEOGLAM Crop Monitor crop condition reports, vegetation indices, food security indices, historical trade indices, and remotely sensed weather variables.
Efforts taken under the NASA Harvest COVID-19 Dashboard contributed in part to a new interdisciplinary research initiative, “Agricultural Supply Chains and Food Security in the COVID-19 World,” focused on providing timely, operational insight into food supply chains and the food system as a whole. Especially relevant is the integration of multiple sources of agricultural supply chains data from a geospatial perspective that includes geographic information systems (GIS), remote sensing, and economic modeling. Under this program, novel economic indicators based on new commercial datasets have been designed and combined with satellite data to inform global economic and food crisis models that highlight possible agricultural production and consumption trajectories. The context for these datasets is being defined by developing tools that enable the visualization of semantic webs and knowledge graphs, which are augmented via natural language processing workflows.
These programs aim to increase the operational preparedness of the agricultural monitoring community by aggregating all of the available relevant data in one location. Data for both systems are made available through the NASA Harvest Portal for data discovery and download, as well as programmatically via RESTful APIs that provide access to the data as geospatial image services and in other common data formats.
Inland excess water (IEW) is a type of flood where large flat areas are covered with water during a period of several weeks to months. In areas with limited runoff, infiltration and evaporation, the superfluous water remains on the surface. Local environmental factors, like agricultural practices, relative relief differences and soil characteristics, play an important role in the development of IEW, which can cause severe water management problems but also provide opportunities to reduce water scarcity. In Hungary, on average, every year 110 000 ha of land is covered with IEW, but much larger inundations have also been reported (e.g., 445 000 ha in 1999 and 355 000 ha in 2011), resulting in serious financial, environmental and social problems and costs. One of the potential integrated and sustainable solutions to the inland excess water problem is to store the surplus water in agricultural areas for later periods of drought or to allow the water to remain on areas designated as (temporary) wetlands, supporting ecosystem restoration. For such complex water management, it is important to understand where IEW develops. Before it is possible to take action, it is necessary to understand the phenomenon and identify the factors and processes that cause the formation of inland excess water. Also, it is necessary to determine the location and size of the inundations to be able to plan storage possibilities or take operative measures to mitigate and prevent damage. When the locations and duration are monitored continuously, it may be possible to forecast the locations, size and duration of IEW in the future and to develop preventive policies or determine how the surplus water can be used sustainably.
Four major approaches to map and monitor IEW can be identified. (1) The oldest approach is visual observation of inland excess water patches. This is labour intensive and can easily lead to errors due to misinterpretation and differences in observation methodology. (2) Aggregating the field observation maps over time can be used to create maps showing the vulnerability to inland excess water floods. This approach is useful to identify hazardous areas but cannot be used for operational intervention. (3) Modelling of inland excess water has been performed using hydrological modelling packages as well, but it requires large amounts of accurate input data and extensive computational power. It usually cannot be performed over large areas. (4) Mapping and monitoring IEW based on remote sensing data and algorithms. Satellite based monitoring provides the opportunity to detect IEW over large areas with high temporal and spatial resolution, and to standardize and automate the analysis.
This research presents and validates a new methodology to determine the extent of the floods using a combination of passive and active remote sensing data. The method can be used to monitor IEW over large areas in a fully automated way based on freely available Sentinel-1 and Sentinel-2 remote sensing imagery. Currently, we determine the extent of IEW for 12 adjacent Sentinel-2 tiles in Hungary on a weekly basis. The large number of Sentinel-1 and Sentinel-2 satellite images and their derived IEW maps require a large amount of disk space. The processing of the images to IEW maps has been automated and is performed on a daily basis to reduce the demand for resources. To further save storage and computation resources, we only calculate the maps during the IEW period, which usually lasts from February to April, although due to increased rainfall in 2021 IEW maps were also calculated for May and June.
Our method was validated during different IEW periods using very high-resolution optical satellite data and aerial photographs. Compared to earlier remote sensing data-based methods, our method can be applied under unfavourable weather conditions, does not need human interaction and gives accurate results for inundations larger than 1000 m². The overall accuracy of the classification exceeds 90%; however, smaller IEW patches are underestimated due to the spatial resolution of the input data.
The continuous monitoring of the inundations results in a large number of maps showing where and how IEW develops. The individual maps can be combined to frequency maps to support sustainable water management by revealing drier and wetter regions in the area. This can help to plan storage areas for surplus water and reduce water scarcity.
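The combination of individual inundation maps into a frequency map can be sketched as follows; the toy binary masks stand in for the real classified Sentinel-1/2 products, and the equal treatment of all weeks is an illustrative assumption:

```python
# Combine weekly binary IEW masks (1 = inundated, 0 = dry) into a per-pixel
# inundation frequency map in [0, 1]. Toy 2x2 masks stand in for real rasters.

def iew_frequency(weekly_masks):
    """Per-pixel fraction of weeks in which the pixel was inundated."""
    n = len(weekly_masks)
    rows, cols = len(weekly_masks[0]), len(weekly_masks[0][0])
    return [[sum(mask[r][c] for mask in weekly_masks) / n
             for c in range(cols)] for r in range(rows)]

weeks = [
    [[0, 1], [1, 1]],
    [[0, 1], [0, 1]],
    [[0, 0], [0, 1]],
]
freq = iew_frequency(weeks)  # e.g. the lower-right pixel was wet every week
```

Pixels with persistently high frequency values are natural candidates for designated storage areas or temporary wetlands, while persistently dry pixels are not.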
It is estimated that over 445 thermal power stations worldwide use sea water for cooling, and over 1447 use freshwater from rivers or lakes (van Vliet et al., 2016). Water is abstracted from one location and, after being heated by the condensers, is returned to a different location to avoid recirculation. For a large nuclear or fossil fuel power station this can correspond to over 100 m³s⁻¹ of water at a temperature 11°C above the intake temperature. The cooling water plume impacts the aquatic environment through its heat content, raising metabolism and biological stress, and through the by-products of chlorination and other chemicals used to control biofouling. As sea temperature warms due to climate change, the output temperature increases as well, which raises the environmental impact and can breach national regulations. Monitoring of the environmental impact is, however, costly due to the temporal and spatial variation of the plume, in particular in tidal estuaries or on tidal coasts.
In this study we demonstrate and validate the use of LANDSAT 8 TIRS to observe and monitor power station discharges around the UK for two nuclear power stations with Gigawatt capacity. By building a time series of plume location and intensity since 2013 it is possible to characterise and monitor changes to the impacted areas, in particular for intertidal mudflats, which can have high environmental value. Two atmospheric correction methods were tested and validated against in situ observations from the UK WaveNet network of surface buoys: split-window and a single-band radiative transfer correction. The split-window method (Du et al., 2015) does not require knowledge of the water vapour content of the atmosphere from a nearby weather station or from an atmospheric model (e.g. Rozenstein et al., 2014) and relies solely on the image data. The single-band method uses Band 10 to avoid the partially corrected stray light problems that affect Band 11 more acutely (Gerace and Montanaro, 2017). It relies on an optical description of the atmosphere obtained by combining the NCEP atmospheric model and the MODTRAN radiative transfer model (Barsi et al., 2014). Validation for 5 locations along the Bristol Channel yielded RMSE values of 0.55°C for the split-window method and 0.61°C for the radiative transfer method, making both suitable for environmental monitoring as well as for providing information to power stations on the occurrence of recirculation, which has important economic and operational impacts on energy producers. The major limitation of this method is the 16-day revisit time of the platform, but in light of plans by both commercial operators and national agencies to launch high-resolution thermal sensors, this is presented as a cost-effective solution for environmental monitoring at a time when marine discharges will have increased impacts.
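The split-window correction is a linear combination of the two brightness temperatures modulated by emissivity terms, in the generic form used by Du et al. (2015). The sketch below shows that functional form only; the coefficients b0..b7 are placeholders, not the published values, which depend on the atmospheric water-vapour range.

```python
# Generic split-window combination of Landsat 8 TIRS Band 10 and Band 11
# brightness temperatures (in kelvin), following the functional form of
# Du et al. (2015). The coefficients passed in here are placeholders.

def split_window_lst(t10, t11, emis_mean, emis_diff, b):
    """Surface temperature (K) from the two brightness temperatures.

    emis_mean: mean band emissivity; emis_diff: band emissivity difference;
    b: sequence of eight coefficients b0..b7 (placeholder values here).
    """
    e = emis_mean
    avg = (t10 + t11) / 2.0
    half_diff = (t10 - t11) / 2.0
    return (b[0]
            + (b[1] + b[2] * (1 - e) / e + b[3] * emis_diff / e ** 2) * avg
            + (b[4] + b[5] * (1 - e) / e + b[6] * emis_diff / e ** 2) * half_diff
            + b[7] * (t10 - t11) ** 2)

# Dummy coefficients chosen so the result reduces to the mean brightness
# temperature, purely to illustrate the call signature.
lst = split_window_lst(290.0, 289.0, 0.99, 0.002, [0, 1, 0, 0, 0, 0, 0, 0])
```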
References:
Barsi, J., Schott, J., Hook, S., Raqueno, N., Markham, B., Radocinski, R. (2014) Landsat-8 Thermal Infrared Sensor (TIRS) Vicarious Radiometric Calibration. Remote Sensing, 6, 11607–11626. https://doi.org/10.3390/rs61111607
Du, C., Ren, H., Qin, Q., Meng, J., Zhao, S. (2015) A practical split-window algorithm for estimating land surface temperature from Landsat 8 data. Remote Sensing, 7(1), 647–665. https://doi.org/10.3390/rs70100647
Gerace, A., Montanaro, M. (2017) Derivation and validation of the stray light correction algorithm for the thermal infrared sensor onboard Landsat 8. Remote Sensing of Environment, 191, 246–257. https://doi.org/10.1016/j.rse.2017.01.029
Ren, H., Du, C., Liu, R., Qin, Q., Yan, G., Li, Z., Meng, J. (2015) Atmospheric water vapor retrieval from Landsat 8 thermal infrared images. Journal of Geophysical Research: Atmospheres, 120, 1723–1738. https://doi.org/10.1002/2014JD022619
Rozenstein, O., Qin, Z., Derimian, Y., Karnieli, A. (2014) Derivation of land surface temperature for Landsat-8 TIRS using a split window algorithm. Sensors, 14(4), 5768–5780. https://doi.org/10.3390/s140405768
van Vliet, M., Wiberg, D., Leduc, S. et al. (2016) Power-generation system vulnerability and adaptation to changes in climate and water resources. Nature Climate Change, 6, 375–380. https://doi.org/10.1038/nclimate2903
Water management associations, suppliers and municipalities face new challenges because the impacts of climate change and the ongoing intensification of agriculture result in increased material inputs into watercourses and dams. Another important task is the prediction of changes in water quality and other hydrological aspects. At the same time, thanks to the evolution of the Copernicus satellite platforms, the broader availability of satellite data provides great potential for deriving valuable, complementary information from Earth Observation data that contributes to a detailed understanding of hydrological processes. Although the number of satellite data platforms that provide online processing environments is growing, it is still a big challenge to integrate those platforms into the traditional workflows of users from environmental domains such as hydrology. EFTAS had the opportunity to participate in two R&D projects in this field, both finished within the past 12 months: WaCoDiS and MuDak-WRM. Although both projects worked in the field of water management and on the question of how remote sensing can facilitate its different tasks, they focused on different aspects. WaCoDiS focused on the continuous tasks of a particular water management association (Wupperverband) and on how remote sensing methods can facilitate these tasks, with the aim to reduce costs or improve the quality of the results. MuDak-WRM, on the other hand, focused on developing a model as simple as possible for predicting mid- to long-term changes in the water quality of reservoirs and connecting it with remote sensing data. Whereas WaCoDiS concentrated on the particular region of the water management association, MuDak-WRM aimed to develop methods and models that are transferable to all regions worldwide.
During the projects, different remote sensing datasets and a processing platform were created, mostly based on Sentinel-1 and Sentinel-2 images, tailored to the hydrological needs. We found that it was possible to facilitate and improve the different water management tasks. In the presentation we will show the particular tasks and our solutions to them, together with an overview of their advantages and disadvantages.
Across the globe, as one of the repercussions of climate change and global warming, several new glacial lakes have formed in the previously glaciated areas. In addition, the area of many existing glacial lakes is on the rise. Prior research showed that rapid deglaciation and lake formation have dramatic effects on downstream ecosystem services, hydropower production and high-alpine hazard assessments. However, this extraordinary environmental change is currently only a side note in the perception of climate change impacts, second, for example, to the widely discussed loss of glaciers and permafrost. Glacier lake inventories are increasingly becoming available for high-alpine areas and Greenland, but it is essential to map and monitor the changes in water extent in these lakes at a higher frequency for hazard assessment and Glacial Lake Outburst Flood (GLOF) risk estimation.
There are several underlying challenges in mapping and monitoring glacial lakes from space using optical and Synthetic Aperture Radar (SAR) satellite sensors. Most of these lakes are very small in area and frozen for a large part of the year, making mapping with satellite sensors challenging. Additionally, observing such lakes using optical satellite imagery such as Sentinel-2 is hampered by the inability of the sensor to penetrate clouds. Moreover, cast and cloud shadows and increasing lake and atmospheric turbidity pose further hurdles that need to be tackled. For monitoring with SAR satellite sensors (e.g. Sentinel-1 SAR), on the other hand, handling natural variations in backscattering from water surfaces and cast shadows are the main difficulties. To overcome the above-mentioned challenges, we propose to fuse the complementary information from optical and SAR imagery using a deep learning approach (with a Convolutional Neural Network backbone). The input data include Sentinel-2 L2A and Sentinel-1 SAR satellite imagery. The aim is to perform a decision-level fusion of information from the two heterogeneous satellite inputs, leveraging the advantages of both sensors by relying on a data-driven, bottom-up methodology. The output is geolocated maps of the study regions in which the proposed deep learning methodology classifies each pixel as either lake or background.
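As a toy illustration of the decision-level fusion step: the probability maps, the equal sensor weighting and the 0.5 threshold below are stand-ins, whereas the actual system derives per-pixel probabilities from CNN branches applied to Sentinel-2 L2A and Sentinel-1 imagery.

```python
# Decision-level fusion sketch: average per-pixel lake probabilities from a
# hypothetical optical branch and a hypothetical SAR branch, then threshold.

def fuse_decisions(p_optical, p_sar, w_optical=0.5, threshold=0.5):
    """Fuse two per-pixel probability maps into a binary lake mask."""
    mask = []
    for row_o, row_s in zip(p_optical, p_sar):
        fused = [w_optical * po + (1.0 - w_optical) * ps
                 for po, ps in zip(row_o, row_s)]
        mask.append([1 if p > threshold else 0 for p in fused])
    return mask

# The optical branch is unsure at a cloudy pixel (0.5); SAR pushes it to lake.
p_opt = [[0.9, 0.5], [0.1, 0.2]]
p_sar = [[0.8, 0.7], [0.2, 0.1]]
lake_mask = fuse_decisions(p_opt, p_sar)
```

Weighting the two branches differently (e.g. down-weighting optical under cloud cover) is one way such a fusion can exploit the complementary strengths of the sensors.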
This work is part of two major projects: the ESA AlpGlacier project, which targets mapping and monitoring of the glacial lakes in the Swiss (and European) Alps, and the UNESCO (Adaptation Fund) GLOFCA project, which aims to reduce the vulnerabilities of populations in the Central Asia region (Kazakhstan, Tajikistan, Uzbekistan, Kyrgyzstan) to GLOFs in a changing climate. Various regions in Central Asia find it challenging to cope with the drastic effects of climate change, especially the impacts of water-related disasters. Prior research (2009) by the World Bank concluded that Tajikistan and Uzbekistan are highly sensitive to climate change in the entire Central Asian region. Socially and economically underprivileged, indigenous populations, ethnic groups, women, children and the elderly are especially vulnerable to the impacts of global warming, as adaptation and disaster risk management capacities are typically low in these regions. One of the major outcomes of climate change in Central Asia is the melting of glaciers, which triggers the formation of new glacial lakes. As part of the GLOFCA project, we aim to develop a toolbox for mapping and monitoring the glacial lakes in the target regions in Central Asia.
The present study focuses on the improvement of crisis preparedness for fragile states' community resilience through the provision of an early warning decision support system (DSS), at pre-operational level, able to provide, in a timely manner, critical geointelligence information enhanced through the combination of complementary sources of information.
The new capabilities recently introduced in Earth Observation concerning spatial and, above all, temporal resolution have dramatically enhanced the range of useful applications addressable with spaceborne sensors, opening the stage to new solutions and opportunities not even thinkable just a few years ago. In addition, the availability of new technologies improving the management and exploitation of large volumes of data has allowed the design and development of automatic information extraction pipelines, enabling the definition of new geospatial indicators and “signals” that improve situational awareness, the monitoring of area evolution and the monitoring of human ground activities.
Finally, the combination with non-EO data (social networks, news and media feeds, and Country Profiles providing political, social and economic context) plays an essential role in enriching the final products, e.g. by integrating context-related details, and above all in their advanced exploitation, e.g. by triggering satellite data collection or properly focusing the analysis of remotely sensed data.
The proposed study refers to the monitoring of the filling of the Grand Ethiopian Renaissance Dam (GERD) basin and aims at showing how EO-based products combined with non-EO data can provide key indicators and early warning concerning the monitoring of on-going activities, the forecasting of future evolution and the potential impact on the socio-economic stability of the involved countries.
The dam is located in Ethiopia’s Benishangul-Gumuz region, 45 km east of the border with Sudan, and sits on the Blue Nile, the main tributary of the Nile River (supplying up to 86% of its water). The potential consequences for regional instability of such a controversial construction project, started in 2011, are quite straightforward:
- Sudan and Egypt fear that the GERD will reduce the amount of water available to them from the Nile and therefore affect their agricultural production (both due to the reduction in the water amount and, in the Egyptian delta, the consequent rise in sea level, which will increase salinity; preliminary studies estimated a loss of up to 15% of arable land in the delta area).
- The dam is designed to generate about 6,000 MW, of which up to 5,000 MW are planned to be exported to other African states within 10 years, thus introducing a noticeable change in the energy export context, and therefore in the economy, of Africa.
- The area is already in critical condition: several conflicts have been reported in the border area between Ethiopia and Sudan, as well as the presence of insurgent groups in Ethiopia suspected of receiving covert support from neighbouring countries.
The GERD construction started in 2011 and the operational plan is aggressive: fill the basin in 5-6 years (while Egypt was demanding that this take no less than 12 years), begin power production soon (initial reports indicated around August 2021) and provide, through challenging connectivity targets, energy to the vast number of rural homes that are currently without supply.
The main concern is therefore the dam status and evolution (e.g. filling rate and the associated operational start of power production) as well as the associated estimation of potential future impacts on agricultural activities and, above all, on the stability of the involved countries.
In the present study the EO products focus on the quantitative assessment of the GERD basin evolution. As emerged from the analysis of open-source information, the key information is represented by the GERD filling rate and current level. Aiming at the estimation of these fundamental parameters, a multi-temporal assessment of the GERD filling activities was performed leveraging Copernicus Sentinel-1 constellation and USGS SRTM 30 m digital elevation model data.
As a result, since the 30 m posting SRTM DEM was obtained from data collected before impounding, the refined water extent extracted from the Copernicus Sentinel-1 amplitude allowed the basin water height (both absolute and relative to terrain level) and volume to be estimated automatically. Applying this methodology to an extended subset of the full Copernicus Sentinel-1 time series, it was then possible to estimate the dam basin filling trend as well.
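The height/volume estimation from a pre-impoundment DEM and a Sentinel-1 water mask can be sketched as follows. The function name, the flat-water-surface assumption, and the shoreline-median heuristic are our illustrative choices, not the operational implementation:

```python
import numpy as np

def basin_height_and_volume(dem, water_mask, cell_area_m2):
    """Estimate water surface elevation and impounded volume.

    dem        : 2-D array of pre-impoundment terrain elevations (m)
    water_mask : 2-D boolean array, True where the SAR amplitude
                 indicates open water
    Assumes the water surface is flat: its elevation is read from the
    terrain elevation along the shoreline (edge of the water mask).
    """
    # shoreline = water pixels touching at least one dry pixel
    padded = np.pad(water_mask, 1, constant_values=False)
    dry_neighbour = (
        ~padded[:-2, 1:-1] | ~padded[2:, 1:-1] |
        ~padded[1:-1, :-2] | ~padded[1:-1, 2:]
    )
    shoreline = water_mask & dry_neighbour
    surface_elev = np.median(dem[shoreline])   # robust flat-surface estimate
    # water depth per pixel: surface minus terrain, clipped to >= 0
    depth = np.clip(surface_elev - dem, 0.0, None) * water_mask
    volume_m3 = depth.sum() * cell_area_m2
    return surface_elev, volume_m3
```

Repeating this over the time series of water masks yields the filling trend directly as a sequence of (date, level, volume) triplets.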
Taking into consideration the accuracy of the source data (Copernicus Sentinel-1 10 m spatial resolution and SRTM 30 m accuracy), and coupling such measurements with the above dam operational parameters, it is possible to provide relevant insights to increase situational awareness.
Soil moisture is one of the most important parameters in research on the condition of agricultural land, including meadows. Soil moisture affects almost all of the soil's physical and biochemical properties, as well as its microbiological activity.
The aim of this study was to provide information in the form of maps, charts, and a description of changes in soil moisture of agricultural areas, with particular emphasis on meadow areas, for the Masovian Voivodeship (NUTS2). For this purpose, data from the Copernicus programme were used. Data were collected and compiled for the entire voivodeship and its districts; built-up, forest and water areas were excluded from the analysis. Surface Soil Moisture (SSM), provided by the LAND service of the Copernicus programme, is modelled from Sentinel-1 VV backscattering data at a spatial resolution of 1 km x 1 km. Moreover, precipitation data derived by the European Centre for Medium-Range Weather Forecasts (ECMWF) from the ERA5 reanalysis were used.
The analysis of the spatial distribution of moisture content showed that the Masovian Voivodeship is diverse in terms of soil moisture (differences of c.a. 20% between counties). The counties most exposed to low soil moisture are in the western and eastern parts of the voivodeship. The highest soil moisture was observed over urban areas; it is supposed that agricultural areas within urban areas are often meadows, which, as a rule, are distinguished by high soil moisture. The analyses for each month and week showed that 2021 was the year of the highest soil moisture, both in total agricultural areas and in meadow areas. In each district, the average soil moisture was much higher than in the previous years. The year 2019, in turn, was characterized by extremely low values of soil moisture. On the other hand, in meadow areas, soil moisture in some counties was lower in 2020 than in 2019. This was not the case for the total agricultural area, which suggests that water in meadow areas behaves, and is stored, differently.
Rainfall had a significant impact on soil moisture in specific years. In 2021, precipitation was the highest, and in 2019 the lowest, which corresponds to the soil moisture. This impact was particularly evident in April, when the accumulated precipitation in 2019 and 2020 was very low (lower than 5 mm, while in 2021 it was high, i.e. 50 mm). Therefore, April was the month of the greatest disproportions in average soil moisture over the Masovian Voivodeship between 2019-2020 and 2021 (c.a. 25% in 2019, c.a. 20% in 2020 and almost 60% in 2021). In May and June, when accumulated precipitation was similar, soil moisture was similar in each year (May: c.a. 40% in 2019, c.a. 35% in 2020 and c.a. 50% in 2021; June: c.a. 50% in each year). In August, when the disproportions in precipitation were very high again (c.a. 60 mm in 2019 and 2020, c.a. 160 mm in 2021), soil moisture in 2021 (70%) was much higher than in 2019 (50%) and 2020 (50%). Furthermore, it was noticed that during rainy days and the day after, the soil moisture was unnaturally high (close to 90% over arable lands); it is supposed that the model could overestimate soil moisture in these conditions. However, the relative differences between observations seem consistent with the actual soil moisture. The model therefore responds well to changes in soil moisture and detects well both the increase and the decrease of this parameter.
The results of the research on soil moisture show that the model could be applied to restore the natural features of the environment of meadow areas. This is important because grassland ecosystems play a very important role in the natural environment: they influence the formation of the micro- and macroclimate, regulate the water balance in catchments, and protect the soil against water and wind erosion.
To sum up, significant differences in the spatio-temporal distribution of soil moisture over the Masovian Voivodeship have been observed. Soil moisture was highest in 2021 and lowest in 2019, which strictly corresponds with accumulated precipitation. Moreover, the research revealed that average weekly data per county, obtained from Copernicus services, seem the most useful from a monitoring perspective. With this method, it is possible to detect breakthroughs in the seasonal variability of soil moisture as well as spatial differences.
The research work was conducted within the project financed by the European Space Agency, titled “Development of Standardized Practices for Monitoring Landscape Values and Natural Capital Assessment (MONiCA)”. The end user of the project is the Mazovian Voivodship Office.
Climate change has greatly altered the occurrence of extreme events such as droughts, floods and wildfires in recent years, and intense droughts have had dire consequences for dryland crop yields. Some areas (like the Mediterranean and the Sahel) have been shown to be more prone to climate change, and droughts and their consequences are thus expected to worsen there in the future. Soil moisture (SM) data have been shown to be key in the detection of early drought onset. Current drought observation and warning systems, such as the European Drought Observatory, the Global Drought Observatory, or the U.S. Drought Monitor, offer maps of a combined drought index derived from different data sources (meteorological and satellite measurements and models). SM anomaly is acknowledged to be a good metric for drought, and consequently all the global drought observatories include remote sensing (RS) SM, but at a low spatial resolution. Consequently, regional drought events are frequently not captured, or their intensity is not fully pictured.
In order to detect the onset of crop water stress and to trigger irrigation to mitigate the effects of potential droughts, modern irrigators use in situ SM measurements. Unfortunately, these are costly, available only over small areas, and might not be representative at the field scale; remote sensing is therefore a cost-effective approach for mapping and monitoring extended areas.
This study focuses on a new pilot project implemented over two areas located in the Tarragona province of Catalonia, Spain, whose main aim is to support resilient irrigation practices by offering advice based on drought indices. For this purpose, spatialized drought indices at high (1 km) resolution are derived from remotely sensed SM on a weekly basis. These indices are then used to provide irrigation recommendations to farmers who have recently switched from dryland crops to vineyards.
Different indices, such as the Palmer Drought Severity Index, the Crop Moisture Index, the Standardized Precipitation Index or the Soil Moisture Deficit Index (SMDI), have been developed in the literature to provide insight on agricultural drought monitoring and forecasting. Most of the existing well-known drought indices have been developed in conjunction with hydrological and meteorological models, i.e., they use parameters such as rainfall, evapotranspiration, run-off and other model-derived indicators to give a comprehensive picture for decision-making. When used in conjunction with remote sensing-derived parameters, certain artefacts can appear in the drought indices, brought about by the high variability of remotely sensed data in comparison with model data. More specifically, the presence of outliers can have a high impact on remote sensing-derived drought indices. This study has focused on analysing the presence and the impact of such outliers in the computation of the SMDI. High-resolution (1 km) SMOS (Soil Moisture and Ocean Salinity) and SMAP (Soil Moisture Active Passive) SM were first derived using the DISPATCH (DISaggregation based on a Physical and Theoretical scale CHange) methodology. High-resolution root zone soil moisture (RZSM) products were then derived from the 1 km surface SM (SSM) by applying a recursive formulation of an exponential filter. Both SSM and RZSM were subsequently used to derive SMDI representative of both the surface and root zone layers, on a weekly basis, for the period 2010-2021, for the two areas of the above-mentioned pilot project. In the computation of the SMDI for a certain week of a certain month, the historical maximum, minimum and median of the month in question are used. The presence of outliers in the historical maximum and minimum was identified after a close inspection of the SMDI estimated using the original definition.
The outliers are in line with the nature of the sensors used to measure SM remotely, which are naturally noisier than in situ sensors. Therefore, a new strategy has been developed, which uses percentiles to compute values corresponding to a “maximum” and a “minimum” that are not affected by the outliers. Results have shown that by using percentiles instead of the raw maximum and minimum values, the artefacts present in the SMDI are mitigated. Moreover, when comparing the “corrected” SMDI derived from SSM with the “corrected” SMDI derived from RZSM, the results show that the SMDI based on RZSM is more representative of the hydric stress level of the plants, given that the RZSM is better suited than the SSM to describe the moisture conditions in the deeper layers, which are the ones used by plants during growth and development.
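A minimal sketch of the percentile-based SMDI computation is given below. It follows the standard recursive SMDI formulation, with illustrative 5th/95th percentile levels replacing the raw historical minimum and maximum; the operational per-month climatology is simplified here to a single weekly series:

```python
import numpy as np

def smdi_series(sm, low_pct=5, high_pct=95):
    """Weekly Soil Moisture Deficit Index with percentile-based bounds
    instead of raw historical min/max, so that single noisy retrievals
    do not distort the index.

    sm : 1-D array of weekly soil moisture values (a simplification of
         the per-month climatology described in the text).
    The percentile levels are illustrative choices, not the project's.
    """
    med = np.median(sm)
    lo = np.percentile(sm, low_pct)    # robust "minimum"
    hi = np.percentile(sm, high_pct)   # robust "maximum"
    # weekly soil water deficit (percent), negative below the median
    sd = np.where(
        sm <= med,
        (sm - med) / (med - lo) * 100.0,
        (sm - med) / (hi - med) * 100.0,
    )
    # recursive accumulation of the deficit into the index
    smdi = np.zeros_like(sd)
    smdi[0] = sd[0] / 50.0
    for j in range(1, len(sd)):
        smdi[j] = 0.5 * smdi[j - 1] + sd[j] / 50.0
    return smdi
```

Swapping `np.percentile` back to `sm.min()` / `sm.max()` recovers the original definition and makes the sensitivity to a single outlier directly visible.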
The study provides an insight into obtaining robust, high-resolution derived drought indices based on remote-sensing derived SSM and RZSM estimates, for the improvement of resilient irrigation techniques. With the SSM-derived SMDI being currently used operationally and the RZSM-derived SMDI planned to be available soon, any improvement in the SMDI estimates will further improve irrigation advice.
Spaceborne radars for oceanography and hydrology use near-nadir ranges because the backscattered signal is higher at these incidence angles. The SWALIS (Still Water Low Incidence Scattering) and KaRADOC (Ka RADar for Ocean measurements) sensors were developed for airborne radar measurements in Ka band [1]. These sensors are dedicated to oceanography and hydrology applications for climate purposes. Nevertheless, there are some slight differences between SWALIS and KaRADOC.
On the one hand, the objective of the SWALIS airborne radar system is to perform backscatter measurements (the amplitude of the reflection coefficient of a rough surface, σ0) at low incidences to characterize hydrological areas of interest in low wind conditions [2]. These measurements are used to study the following points:
• inhomogeneities of roughness of hydrological surfaces and edge effect,
• conditions for obtaining cases of “dark water”,
• contrast water / banks,
• partially covered areas (crops, flooded areas).
Thus, the SWALIS sensor is intended to support calibration and validation operations for the future SWOT mission.
On the other hand, the KaRADOC radar system is based on the SWALIS architecture and is dedicated rather to measuring ocean surface current velocity. We operated the KaRADOC sensor at a 12° incidence angle for the DRIFT4SKIM campaign organized in November 2018 [3]. In addition, as part of the recent SUMOS campaign (Surface Measurements for Oceanographic Satellites) in February and March, we used KaRADOC to measure radar echoes at multiple incidence angles and to validate the SKIM concept.
To make the measurements physically interpretable (and more specifically for the SWALIS sensor), it is necessary to perform sensor calibration operations. This communication describes the procedures we developed to obtain calibration coefficients applicable to either the SWALIS or KaRADOC sensor. The calibration campaign is performed at the MERISE (MontErfil station for RadIo and remote SEnsing) station located near Monterfil (Ille-et-Vilaine, France). The calibration bench consists of a 15-meter-high tower on which we install the radar system (see Fig. 1) and a mast supporting the calibration targets, i.e. trihedral corner reflectors (see Fig. 2). In order to comply with free-space conditions, we place the calibration targets 351 meters from the radar system (see Fig. 3).
The first step is to define the radar parameters applied during airborne measurements. Recall that we are using a leaky-wave antenna: the tilt angles therefore depend on the frequency. Thus, for this calibration campaign, we define 7 frequencies. Next, we define a set of 13 trihedral corner reflectors, which are measured at the 7 chosen frequencies. For each frequency, the antenna is pointed precisely towards the trihedral corners in order to obtain the maximum response. As the maximum RCS is well known, we can relate the power reflected by the trihedral to its RCS. We finally apply a linear regression to the measurements performed, obtaining a linear relationship between the received power and the RCS of the trihedral, as described by the radar equation (see Fig. 4). The calibration coefficients obtained are valid for a given distance; it is therefore necessary to rescale these coefficients to the measurement distance used during the airborne campaigns.
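The regression and distance-rescaling steps can be sketched as below. The numbers are synthetic; the actual processing chain, noise handling and coefficient definitions are described in the full paper:

```python
import numpy as np

def fit_calibration(rcs_m2, p_rx_w):
    """Least-squares fit of received power against known trihedral RCS.

    Per the radar equation, at fixed range R0 the received power is
    P = K * sigma, so a linear fit over the trihedral measurements
    yields the calibration constant K (the offset absorbs the noise
    floor). Inputs here are illustrative, not campaign data.
    """
    k, offset = np.polyfit(rcs_m2, p_rx_w, 1)
    return k, offset

def rescale_k(k, r_cal_m, r_meas_m):
    """Transfer K from the calibration range (351 m in the text) to the
    airborne measurement range: received power scales as 1/R**4."""
    return k * (r_cal_m / r_meas_m) ** 4
```

For example, a fit over synthetic measurements `p = K * sigma + noise_floor` recovers K, which can then be rescaled to the flight altitude range with `rescale_k`.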
In the full version of our communication, we will describe the calibration procedure in more detail: processing of the recorded radar data, modeling of the relationship between the reflected power and the calibration target RCS, and a description of the calibration coefficients obtained.
REFERENCES
[1] MÉRIC S., LALAURIE J.-C., LO M.-D., GRUNFELDER G., LECONTE C., LEROY P., POTTIER É., SWALIS/KaRADOC: an airplane experiment platform developed for physics measurement in Ka band. Application to SWOT and SKIM mission preparations, In proceedings of 6th Workshop on Advanced RF Sensors and Remote Sensing Instruments & 4th Ka-band Earth Observation Radar Missions (ARSI’19 & KEO’19), ESA/ESTEC, 11-13 November 2019, Noordwijk, The Netherlands.
[2] KOUMI, J.-C., MÉRIC S., POTTIER É., GRUNFELDER G., The SWALIS project: First results for airborne radar measurements in Ka band, In proceedings of European Radar Conference (EuRAD 2020), Jan 2021, Utrecht, Netherlands.
[3] MARIÉ L., F. COLLARD, F. NOUGUIER, L. PINEAU-GUILLOU, D. HAUSER, F. BOY, S. MÉRIC, C. PEUREUX, G. MONNIER, B. CHAPRON, A. MARTIN, P. DUBOIS, C. DONLON, T. CASAL, AND F. ARDHUIN; Measuring ocean surface velocities with the KuROS and KaRADOC airborne near-nadir Doppler radars: a multi-scale analysis in preparation of the SKIM mission, Ocean Sci., 16, 1399–1429, 2020, https://doi.org/10.5194/os-16-1399-2020
In this paper, we focus on understanding the changes in the river environment of two physically and geomorphologically comparable rivers: the river Mura and its course through north-eastern Slovenia, and the river Vjosa in southern Albania. Both rivers share a common historical, geomorphological, and economic background. The difference between the two rivers is that the Mura is heavily dammed in its upper part in Austria and regulated in some sections throughout its course, while the Vjosa has remained almost natural. Based on our interdisciplinary approach combining remote sensing, anthropological, and geographical research within the EOcontext project*, we try to understand how human interactions have modified the river environment and how the river environment has affected people’s lives.
In order to answer this, we used multilevel change detection and time-series approaches to see and predict how human-induced influences (especially hydropower plants) can affect river environments. Heterogeneous river patterns in different geographic and topographic contexts were automatically mapped. At the same time, we conducted fieldwork and collected a wealth of in situ data as we are also interested in whether and how people living in these two riverine landscapes perceive, experience, live, and make sense of these changes.
Our workflow consists of three stages. First, we performed a land use/land cover time-series analysis to detect intra-annual changes in the surface water extent of the two rivers. To do this, we used Landsat data (up to 2015) and Sentinel-2 imagery (from 2015 until present) of the rivers and wider riparian areas. In this way, we gathered a comprehensive overview of changes over the last four decades. We used relatively simple change detection algorithms (based on classifications using SVM and RF approaches) to identify the areas of the most extensive change along both rivers. For the Mura we considered four land cover classes (river, agriculture, mixed forest, and urban) and for the Vjosa we used five (the same as for the Mura, with the addition of gravel). The gravel bars on the Mura are not visible at the 30 m Landsat resolution and were therefore not included in the classification. Even in the case of the Vjosa, gravel bar classification is problematic, as gravel represented a very small part of the training data.
Second, we applied the spectral signal mixture analysis to achieve more precise, subpixel mapping, considering only three main land cover classes (gravel, vegetation, and surface water). Each land cover class of interest was represented with an endmember or spectral signature of a pure pixel containing only the selected land cover class. To increase the separability of the land cover classes, we calculated several spectral indices and used them along with the reflectance of the spectral bands for the SSMA (MSAVI2, NDVI, NDWI, and MNDWI). The subpixel approach enabled more accurate mapping of riverine landscapes and was especially key in gravel bar mapping. The extent of gravel bars was monitored as a sign of the natural dynamics of river processes.
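The subpixel step can be illustrated with a minimal linear unmixing sketch, the simplest sum-to-one-constrained least-squares variant. The band values are illustrative, and the actual SSMA also stacks the spectral indices listed above (MSAVI2, NDVI, NDWI, MNDWI) as extra "bands":

```python
import numpy as np

def unmix(pixel, endmembers):
    """Linear spectral unmixing: solve pixel ≈ E @ f for fractions f,
    with a sum-to-one constraint appended as an extra equation.

    endmembers : (n_bands, n_classes) matrix of pure spectra, e.g. for
                 gravel, vegetation and surface water.
    """
    n_bands, n_classes = endmembers.shape
    E = np.vstack([endmembers, np.ones(n_classes)])   # sum-to-one row
    y = np.append(pixel, 1.0)
    f, *_ = np.linalg.lstsq(E, y, rcond=None)
    return f
```

The per-pixel gravel fraction summed over the river corridor then gives the gravel bar extent tracked through time.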
Third, the results of the remote sensing analysis were correlated and compared with the results of the field research. Based on several field visits in the two areas under study, we questioned whether and how the inhabitants of these two areas perceive changes in their geophysical environment. Drawing on many years of research experience and on the basis of semi-structured interviews with 40 interviewees, we identified changes in the riverine landscape in which they live. The data from the field research were analysed in their specific social, cultural, historical and political context.
The results of the remote sensing analyses are presented in the form of land use change maps, showing the extent of land use change and the extent of gravel bars. On the Mura, the presence of the different land cover classes is very uniform and stable, as expected since the Mura is regulated in this area. However, hydrological data show that the Mura has lost the stability of its water flow on its way through Slovenia, that the water level and the groundwater level have decreased due to the heavy damming of the upper and middle course, and, what is particularly problematic, that a deepening of the river bottom can be observed. These present changes cannot be adequately detected with remote sensing analyses. In the case of the Vjosa, there is much greater variability in the presence of the different land cover classes over the years, as the river is much more dynamic and almost intact. The results also show a high correlation between the water surface area identified in the EO data and the water level measured in situ at the gauging station. Results from the field show that people observe most of the changes in their environment that we detected using the EO data, though they understand and explain them in the language of their respective socio-cultural environments. The proposed methodology can be used to increase quantitative knowledge of river forms and processes over time. We also believe that combining different social, historical, geographical, hydrological, and ecological aspects adds value to the understanding of the remote sensing results. We therefore stress the importance of contextualising the obtained spatial results.
*EOcontext (Contextualization of EO data for a deeper understanding of river environment changes in Southeast Europe) project is funded by the Government of Slovenia through an ESA contract under the EO science for society permanently open call.
The Aculeo lagoon, located in central-southern Chile, once represented, together with the Maipo River, one of the two main water sources of the commune of Paine, Chile. Nevertheless, over the last decade the Aculeo lagoon showed a severe decrease in its water level, reaching a total dry-up in May 2018. This happened from 2009 onwards: first, it lost 50% of its water surface in 4 years (2010-2014); during the following 2 years no further decrease was observed; but in 2017 its water surface was reduced by another 50%, and finally in 2018 it completely disappeared. In order to explain this phenomenon, the aim of the present study was to investigate parameters which might have forced its disappearance and which can be observed from space and analyzed by remote sensing.
Therefore, in a first step, we calculated, visualized and analyzed the surface variations of the Aculeo lagoon by applying automated pixel differentiation (the Normalized Difference Water Index, NDWI) to satellite images acquired by Landsat 7 and Landsat 8 between 2006 and 2019. Additionally, in order to analyze the impact of rainfall and temperature variations on water level changes, the Pearson correlation coefficient was calculated. In a second step, we added agriculture-related parameters, such as evapotranspiration and irrigation, because the Aculeo sector is mainly characterized by agricultural activities. For this we applied the SEBAL (Surface Energy Balance Algorithm for Land) algorithm, which allows modelling of evapotranspiration, biomass growth, and water deficit considering soil moisture. In our case, the model was driven by Landsat 7 satellite images, a digital elevation model and climatic data collected from meteorological stations near the study area, as already required during the first step.
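The NDWI-based surface extraction and the rainfall correlation can be sketched as follows. The band arrays, the 0.0 threshold and the rainfall/area numbers are illustrative placeholders; Landsat surface reflectance and the per-year values from the actual analysis would replace them:

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI from green and NIR reflectance bands;
    values above ~0 are typically classified as open water."""
    g = green.astype(float)
    n = nir.astype(float)
    return (g - n) / (g + n + 1e-12)   # small epsilon avoids 0/0

def water_area_km2(green, nir, pixel_m=30.0, threshold=0.0):
    """Water surface area from a thresholded NDWI mask
    (threshold is a common default, tuned per scene in practice)."""
    mask = ndwi(green, nir) > threshold
    return mask.sum() * pixel_m**2 / 1e6

# Pearson correlation between annual rainfall and lagoon surface
# (np.corrcoef; the sample numbers below are illustrative only)
rain = np.array([350.0, 310.0, 240.0, 200.0, 150.0, 90.0])   # mm/yr
area = np.array([12.0, 11.1, 8.3, 6.0, 2.9, 0.0])            # km^2
r = np.corrcoef(rain, area)[0, 1]
```

A strongly positive r between rainfall and water surface, together with a negative r for temperature, is the pattern reported in the results below.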
As a first result, a direct correlation between the water surface variations of the Aculeo lagoon and rainfall was detected. In general, precipitation has shown a continuous deficit since 2009, which coincides with the so-called mega-drought affecting large parts of central-southern Chile. The Pearson correlation coefficient shows a positive correlation between the decrease in rainfall and the disappearance of the Aculeo lagoon. The highest positive correlation coefficients are found in 2010, 2015 and 2018, which coincide with significant water surface reductions. Temperature variations do not have a significant impact on the disappearance of the lagoon, although increasing contamination was detected due to a eutrophication process, which can be correlated to the higher average temperature during the study period. The Pearson correlation coefficient for temperature and water surface reduction is negative for all years, meaning that temperature variations did not play a significant role in the disappearance of the Aculeo lagoon.
With respect to land use in the vicinity of the Aculeo lagoon, the results obtained from SEBAL show that it remained quite similar over the years, as agriculture is a very important source of income for the population. Nevertheless, there is evidence that people decided to plant crops that do not require as much water to grow and be harvested, such as wheat, oats, grapes, and some citrus. This is particularly notable as crops with the highest evapotranspiration are decreasing, while crops with lower evapotranspiration became more present. Furthermore, there is land where agriculture disappeared completely as it became less profitable.
Overall, it can be concluded that the Aculeo lagoon dried out due to a significant precipitation deficit lasting almost 10 years, and that the overexploitation of land by agricultural activities made an important contribution, too.
Renewable green energy will be the most important part of energy development in the twenty-first century, with photovoltaics (PV) considered a key technology for this kind of energy supply. Monitoring and evaluating the PV modules of power plants is of great importance to maintain and optimize the efficiency of solar energy systems and reduce production costs for PV power plant operators. Earth Observation (EO) can acquire multitemporal information on these targets of interest with different sensors. Previous studies have shown the ability to detect PV areas from multispectral data (Malof et al., 2016; Yu et al., 2018) or hyperspectral data (Ji et al., 2021). Since the reflectance of a PV panel is strongly related to the solar energy it absorbs, hyperspectral data also have great potential to monitor the soiling status and process.
Airborne HySpex data were collected over Oldenburg, Germany, with two cameras covering the visible/near-infrared (VNIR) and short-wave infrared (SWIR) spectral ranges. The VNIR sensor acquires 160 bands at a spatial resolution of 0.6 m; the SWIR sensor covers its spectral range in 256 channels at a spatial resolution of 1.2 m. Previous studies detected the spectral variation of PV modules for different detection angles with goniometer measurements, specifically 61 measurements covering zenith and azimuth angles of 0° to 75° and 0° to 330°, respectively. The data show that the BRDF effect on PV panel reflectance even affects the value of the Hydrocarbon Index (HI), an important spectral feature of PV modules, and could thus influence the detection accuracy (Ji et al., 2021). For the PV power plant at Oldenburg, Germany, a Digital Elevation Model (DEM) derived from the 3K camera is available, showing the panel elevation changes. From the elevation difference across each line of the PV system, we can derive the orientation angle of the PV modules, which can in turn be used to analyse the relationship between orientation angle and PV module spectra. In this study, we use airborne hyperspectral data in conjunction with a DEM and available ground truth on PV panel locations and areas as a reference, in order to investigate the potential spectral variation of PV modules with different orientation angles.
First, we applied the PV coverage vectors previously derived by Ji et al. (2021) to the HySpex data and collected all spectra. Subsequently, we applied these vectors to the DEM data, calculated the difference between the two sides of a panel, and obtained the average elevation difference. The elevation difference of the PV panels was then used, together with their width, to calculate their orientation angles. Finally, a regression was performed between the spectra and the respective PV orientation angles, and the result was analysed. In order to better study the spectral variability of PV modules, many factors need to be considered; one of them is the different installation angles of PV modules on roofs or in PV systems. The study aims at gaining a better understanding of the spectral variation of PV modules and, in general, at evaluating the potential of hyperspectral data for PV module monitoring. Further research can then be conducted with a broader and deeper knowledge of PV spectral variability.
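The orientation (tilt) angle step can be sketched as follows. The geometry (slant panel width, arcsin relation) and the function name are our simplification of the procedure described above, not the authors' exact formulation:

```python
import numpy as np

def tilt_angle_deg(delta_h_m, panel_width_m):
    """Panel tilt from the DEM elevation difference between the upper
    and lower edge of a module row. If panel_width is the slant width
    of the module, the tilt follows from sin(tilt) = delta_h / width.
    Clipping guards against DEM noise pushing the ratio past 1."""
    ratio = np.clip(delta_h_m / panel_width_m, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))
```

For instance, a 1 m elevation difference across a 2 m slant width corresponds to a 30° tilt.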
References
• Malof, J. M., Bradbury, K., Collins, L. M., & Newell, R. G. (2016). Automatic detection of solar photovoltaic arrays in high resolution aerial imagery. Applied energy, 183, 229-240.
• Yu, J., Wang, Z., Majumdar, A., & Rajagopal, R. (2018). DeepSolar: A machine learning framework to efficiently construct a solar deployment database in the United States. Joule, 2(12), 2605-2617.
• Ji, C., Bachmann, M., Esch, T., Feilhauer, H., Heiden, U., Heldens, W., Hueni, A., Lakes, T., Metz-Marconcini, A., Schroedter-Homscheidt, M. and Weyand, S., 2021. Solar photovoltaic module detection using laboratory and airborne imaging spectroscopy data. Remote Sensing of Environment, 266, p.112692.
The world’s first large-scale offshore wind farm was installed at Horns Rev in the North Sea in 2002 – twenty years ago. Since then, the offshore wind energy sector has grown immensely to become a global business, which plays a major role for the green energy transition. The global offshore wind energy capacity was 35 GW in 2020 and offshore wind is considered to have the biggest growth potential of any renewable energy technology (Global Wind Energy Council, 2021). Wind turbines are getting bigger and bigger in terms of capacity, height, and blade size and new technologies are currently emerging such as floating offshore wind turbines.
Since offshore observations of met-ocean parameters are sparse, the wind energy industry relies largely on atmospheric modeling and short measurement campaigns for the planning of future wind energy projects. The use of EO data sets and derived variables is not yet widespread in this community, and the learning curve for exploiting such data sets remains steep. As part of the H2020 project e-shape (https://e-shape.eu/), researchers from the Technical University of Denmark, Dept. of Wind Energy (DTU Wind Energy) have established co-design cycles with users from the wind energy industry. The objective is to better understand the industry's views on the usefulness and usability of EO-based data sets – primarily wind maps retrieved from SAR and scatterometers, and combinations of the two. We will present the main insights gathered from this work along with our most recent research and development of satellite-based products tailored to wind energy applications.
Thanks to an almost uninterrupted supply of satellite SAR scenes from the European Space Agency – from the ERS-1/2, Envisat, and Sentinel-1 A/B missions – we have explored the potential of EO-based information for wind energy applications for two decades. At the earliest stages, our research was case-oriented, as only a few wind farms existed and the amount of available SAR imagery was small. Nevertheless, it became evident how the impact of large offshore wind farms on the local wind conditions can be observed and quantified from SAR imagery (Christiansen & Hasager, 2003). The spatial extent of wind farm wakes, i.e. regions with reduced wind speed and increased turbulence, can be up to 100 km under ideal atmospheric conditions. Today, we have hundreds of thousands of SAR scenes at our disposal, and wind farm wake analyses are performed in a systematic fashion for many wind farms in sequence. This has led to new insights about the impact of e.g. coastal wind speed gradients, wind farm layouts, and turbine densities within the farms on the wind climate in the vicinity of large wind farms.
The fast-growing archives of satellite SAR scenes also offer an opportunity to perform statistical analyses in order to map the available wind energy potential, or resource, over large offshore areas. Wind resources over the European seas have been mapped in connection with the New European Wind Atlas (Hasager et al. 2020) and annual updates of these maps are foreseen. The wind resource maps represent the outcome of several processing steps, which are performed in an automated fashion: 1) download of SAR scenes and ancillary data sets, 2) inter-calibration of the Normalized Radar Cross Section originating from different SAR sensors, 3) wind speed retrieval using a Geophysical Model Function (Hersbach et al. 2010), 4) re-projection to a uniform lat/lon grid, and 5) wind resource estimation. The procedure can be applied to any location in the world, including the emerging offshore wind energy markets in Asia and the Americas.
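The final step, wind resource estimation, typically amounts to fitting a Weibull distribution to the stack of retrieved wind speeds at each grid cell and deriving the mean power density. A moment-method sketch is shown below; the operational chain uses WAsP-style fitting, and the air density, estimator and function name here are illustrative choices:

```python
import numpy as np
from math import gamma

def wind_resource(u, rho=1.225):
    """Estimate the wind resource at one grid cell from a stack of
    SAR-retrieved wind speeds u (m/s).

    Weibull parameters via the moment method (mean and std), a common
    simplification of the operational fitting. Returns the Weibull
    scale A (m/s), shape k (-), and mean power density (W/m^2):
        power density = 0.5 * rho * A**3 * Gamma(1 + 3/k)
    """
    u = np.asarray(u, dtype=float)
    k = (u.std() / u.mean()) ** -1.086        # empirical moment estimator
    A = u.mean() / gamma(1 + 1 / k)
    power_density = 0.5 * rho * A**3 * gamma(1 + 3 / k)
    return A, k, power_density
```

Applied per pixel over the re-projected wind speed stack, this yields the A, k and power density layers that make up a wind resource map.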
Maps showing instantaneous wind fields retrieved from SAR imagery, as well as the wind resource maps calculated over Europe, are available through the Global Wind Atlas Science Portal at https://science.globalwindatlas.info/ (see also Figure 1). So far, the EO-based data sets are mostly browsed and downloaded by users from academia, including DTU’s own students and staff. In order to make the service more attractive for users in the wind energy industry, we have established co-design cycles with three types of industry users: wind farm developers, offshore wind consultancies, and providers of wind data to the industry. Representatives of these user categories have been interviewed and presented with a prototype of our EO-based service. Feedback gained from the user interviews has been structured, and the following cross-cutting requirements have been identified:
• Easy-to-read documentation of the EO-based data sets is needed, e.g. a blog, explainers, and illustrative examples.
• EO-based parameters should come with quality flags, e.g. indicators of bright targets, bathymetry effects, and atmospheric stability conditions.
• User-defined time series for specific points should be easy to extract in standardized formats, to be used in combination with other wind data sets.
• Co-located wind and wave height information is desired, especially for floating offshore wind energy.
Work is ongoing to improve the EO-based service according to the industry users' inputs (Karagali et al. 2021). In the longer term, it will also be necessary to establish a sustainable business model where the costs associated with the service delivery are covered by the end users. Handling the terabytes of data associated with the processing of SAR data and scatterometer wind products drives an ever-increasing need for high-performance computing and storage capacity. This represents another focus point of the e-shape project as well as of the EuroGEO and Global Earth Observation System of Systems (GEOSS) communities.
Acknowledgements
The project ‘EuroGEO Showcases: Applications Powered by Europe’ (e-shape) has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement 820852. The European Space Agency (ESA) is acknowledged for the satellite SAR scenes; ASCAT wind data were obtained from the Copernicus Marine Service (CMEMS).
References
Christiansen, M. B., & Hasager, C. B. (2005). Wake effects of large offshore wind farms identified from satellite SAR. Remote Sensing of Environment, 98, 251-268. https://doi.org/10.1016/j.rse.2005.07.009.
Global Wind Energy Council (2021). Global Offshore Wind Energy Report 2021. 136 pp. Available online at https://gwec.net/global-offshore-wind-report-2021/.
Hasager, C. B., Hahmann, A. N., Ahsbahs, T. T., Karagali, I., Sile, T., Badger, M., & Mann, J. (2020). Europe's offshore winds assessed with synthetic aperture radar, ASCAT and WRF. Wind Energy Science, 5(1), 375–390. https://doi.org/10.5194/wes-5-375-2020.
Hersbach, H. (2010). Comparison of C-Band scatterometer CMOD5.N equivalent neutral winds with ECMWF. Journal of Atmospheric and Oceanic Technology, 27(4), 721-736. https://doi.org/10.1175/2009JTECHO698.1.
Karagali, I., Badger, M., & Hasager, C. (2021). Spaceborne Earth Observation for Offshore Wind Energy Applications. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 172–175. https://doi.org/10.1109/IGARSS47720.2021.9553100.
Data from the satellite-based Global Precipitation Measurement (GPM) mission and the state-of-the-art combined Integrated Multi-satellitE Retrievals for GPM (IMERG) product are used to estimate the risk of rain erosion at wind turbines. Rain erosion at wind turbines causes a loss in profit at several wind farms. The power production is reduced for turbines operating with eroded blades (Bak et al. 2020). Repair takes place on average every 8 years, and the cost of repair is high, in particular at offshore sites (Mishnaevsky and Thomsen, 2020).
The rain impinging on the leading edge of the blades causes damage. It typically starts near the tip of the blades, where the blade speed is highest and thus the closing velocity, i.e. the impact speed between the drops and the blade, is highest. Over time, erosion will also occur further inwards on the blades. Roughly the outer fifth of the blade may be affected by erosion. The eroded blades cause a loss in power production due to poorer aerodynamic performance. Blade repair can be done to limit the aerodynamic loss.
The current study focuses on a method to predict the risk of rain erosion at wind turbines using GPM and IMERG data as input to a blade damage model. The blade damage model was established through analysis of laboratory experiments with controlled rain fields and blade speeds (Bech et al. 2018). Rain erosion testing of blade damage is an accelerated method: for a specific blade speed and rain rate (or drop size), the damage is observed by inspection of the blade. To calculate the rain erosion risk, the damage increment model sums up the many impact events over time. This is representative of a typical leading-edge protection coating.
Assuming the same type of leading-edge protection coating at the actual wind turbines, the expected damage is estimated using as input the wind speed, a wind turbine power curve (translating wind speed to rotations per minute for the turbine), the rotor size (or blade length) and the rain events.
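The damage summation described above can be sketched as a Miner-style accumulation, where each time step contributes an increment inversely proportional to an assumed time-to-failure law. The power-law coefficient `C` and exponent `m` below are illustrative placeholders, not the fitted coefficients of Bech et al. (2018):

```python
import numpy as np

def tip_speed(wind_speed, rpm_curve, rotor_radius):
    """Blade tip speed (m/s) from wind speed via a turbine rpm curve."""
    rpm = np.interp(wind_speed, rpm_curve[:, 0], rpm_curve[:, 1])
    return 2.0 * np.pi * rotor_radius * rpm / 60.0

def damage_increment(rain_rate, v_tip, dt_hours, C=1e20, m=9.0):
    """Incremental fatigue damage for one time step.

    Assumes time-to-failure T = C / (rain_rate * v_tip**m) hours, a
    power-law form analogous to published erosion models; C and m are
    placeholders, not fitted values. Failure is reached when the
    accumulated damage sums to 1."""
    if rain_rate <= 0 or v_tip <= 0:
        return 0.0
    return dt_hours * rain_rate * v_tip**m / C

# Toy example: constant conditions; rain-hours until damage reaches 1
rpm_curve = np.array([[4.0, 6.0], [12.0, 12.0], [25.0, 12.0]])
v = tip_speed(10.0, rpm_curve, rotor_radius=75.0)
d = damage_increment(rain_rate=2.0, v_tip=v, dt_hours=0.5)
lifetime_hours = 0.5 / d if d > 0 else float("inf")
```

In the study itself, the increments are accumulated over the full 30-minute IMERG (or 10-minute/hourly station and ERA5) time series rather than constant conditions.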
Due to the lack of local rain observations at most wind farms, alternative rain data are used for mapping the rain erosion risk (Hasager et al. 2021). The advantage of GPM satellite data is their global coverage. The data are also homogeneous, standardized products available for several years, both on land and offshore. See https://gpm.nasa.gov/data/directory (level 3 data, final run, 30-minute).
The rain erosion risk analysis is done for the WindFloat Atlantic floating offshore wind farm in the Atlantic Ocean off Portugal. In addition, an analysis is done for a nearby land site in Portugal with available meteorological data on wind speed and rain intensity at 10-minute temporal resolution. Lifetime estimates are also computed using ERA5 (hourly) data as input for both areas.
In summary, the results compare well, with estimated lifetimes around 3 to 6 years at the land site and shorter lifetimes at the offshore site.
Funding support from the ESA project ARIA2 and the Innovation Fund Denmark project EROSION (grant 6154-00018B) is acknowledged. GPM and IMERG data are from NASA. Meteorological data are from IPMA, the Portuguese national meteorological, seismic, sea and atmospheric organization. ERA5 data are from the European Centre for Medium-Range Weather Forecasts (ECMWF).
References:
Bak C, Forsting AM, Sørensen NN. (2020). The influence of leading edge roughness, rotor control and wind climate on the loss in energy production. Journal of Physics: Conference Series. 1618(5). 052050. https://doi.org/10.1088/1742-6596/1618/5/052050
Bech JI, Hasager CB, Bak C. (2018). Extending the life of wind turbine blade leading edges by reducing the tip speed during extreme precipitation events. Wind Energy Science. 3(2). 729-748.
Hasager CB, Vejen F, Skrzypinski WR, Tilg A-M. (2021). Rain Erosion Load and Its Effect on Leading-Edge Lifetime and Potential of Erosion-Safe Mode at Wind Turbines in the North Sea and Baltic Sea. Energies. 14(7). 1959. https://doi.org/10.3390/en14071959
Mishnaevsky L, Thomsen K. (2020). Costs of repair of wind turbine blades: Influence of technology aspects. Wind Energy. 23(12):2247-2255. https://doi.org/10.1002/we.2552
Monitoring of mining impact has become increasingly important as the awareness of safety and environmental protection is rising. For example, two catastrophic dam collapses occurred in Brazil in 2015 and 2019. The tailings outflow caused a tragic loss of human lives (205 deaths and 122 missing combined) and enormous property damage. An appropriate monitoring scheme is required to legally activate, reactivate, and terminate mining operations.
Our project Integrated Mining Impact Monitoring (i2Mon), funded by the European Commission's Research Fund for Coal and Steel, intends to monitor mining-induced impact, in particular ground movement. The monitoring system comprises terrestrial measurement and remote sensing: levelling, GPS, LiDAR scanning, UAV surveys, and SAR interferometry. The aim is to launch an interactive GIS-based platform as an early-warning and decision-making service for the mining industry.
Our presentation focuses on Work Package 2 – Space and Airborne Remote Monitoring. This package develops a SAR-based approach to monitor mining-induced ground movement over extensive areas at the millimetre level. We first illustrate the monitoring scheme and the approaches for estimating ground movement by advanced SAR interferometry. The first test site is a deactivated open-pit mine in Cottbus, Germany, owned by Lausitz Energie Bergbau AG (LEAG). The whole area is being reconstructed into a post-mining lake; monitoring the mining impact is therefore particularly crucial for safety. The second test site is located in Poland, where underground mining operated by POLSKA GRUPA GÓRNICZA (PGG) began in June 2021. The in-situ ground movement must be monitored carefully, as part of the influenced area covers settlements. We have analysed the ground movement across the open-pit and underground mines by applying advanced SAR interferometry. The crucial parameters include stepwise movement series, instantaneous velocities and accelerations, and a significance index. The results will be compared with local measurements such as GPS recordings collected alongside corner reflectors. All data will finally be integrated into DMT’s platform – SAFEGUARD.
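Deriving instantaneous velocities and accelerations from a stepwise movement series amounts to numerical differentiation of the displacement time series; a minimal sketch with synthetic data (the significance criterion below is a placeholder, not the i2Mon index):

```python
import numpy as np

# Illustrative displacement series (mm) for one pixel, sampled at the
# 6-day Sentinel-1 repeat interval; values are synthetic, not i2Mon data.
t_days = np.arange(0, 120, 6, dtype=float)
displacement_mm = -0.002 * t_days**2            # slowly accelerating subsidence

velocity = np.gradient(displacement_mm, t_days)  # instantaneous velocity (mm/day)
acceleration = np.gradient(velocity, t_days)     # acceleration (mm/day^2)

# Placeholder significance check: mean velocity large relative to the
# standard error of the velocity estimates
significant = abs(velocity.mean()) > 2.0 * velocity.std(ddof=1) / np.sqrt(len(velocity))
```

`np.gradient` uses central differences in the interior, so for this quadratic series the interior accelerations are recovered exactly.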
Earth Observation based energy infrastructures to support GIS-like energy system models
S. Weyand 1, M. Schroedter-Homscheidt 1, Th. Krauß 2
1 DLR, Institute of Networked Energy Systems, 26122 Oldenburg, Germany – (Susanne.Weyand@dlr.de, Marion.Schroedter-Homscheidt@dlr.de)
2 DLR, Remote Sensing Technology Institute, 82234 Wessling, Germany – Thomas.krauss@dlr.de
Keywords: photovoltaic (PV) detection, Earth observation (EO), energy system analysis, airborne, satellite, energy infrastructure
Due to increasing urbanization worldwide, the increasing energy demand of urban residents, and the lower prices for photovoltaic (PV) and solar thermal modules, the number of plants in operation has increased significantly in recent years. Authorities and electricity grid operators are supporting the installation of solar power plants in order to achieve the German Federal Government’s goals of reducing CO2 emissions and primary energy consumption by 80% by 2050. For load modelling, planning, and the generation of demand and production statistics, they need up-to-date roof usage and coverage information, as well as location data of the plants. There is also an increase in PV on roofs of residential and commercial buildings. However, many of these systems are not exactly registered, and publicly available databases of solar modules are not up to date.
Monitoring strategies for solar plants are of interest for energy forecasting models in research, urban planning and industry. Currently, energy forecasting models are often based on community-generated OpenStreetMap data (e.g. Alhamwi et al. 2018). However, these data are partly erroneous, insufficiently detailed, or of very uneven regional accuracy. Therefore, we have started to collect energy-specific data with Earth observation techniques. Typical questions of energy system analysis include, for example, the modelling of load profiles in the electricity system.
Our focus is on quantifying energy loads in urban areas, such as buildings, and on detecting renewable energy sources, such as photovoltaic and solar thermal devices, from airborne and satellite data. We look into characteristic detection features of PV and solar thermal systems in airborne remote sensing data. At the institute, we have detailed knowledge of PV module construction from PV module research and contribute solar radiation data from the Copernicus Atmosphere Monitoring Service (CAMS) (Schroedter-Homscheidt et al., 2021). The extracted, characterized and geocoded PV and solar thermal systems are then used, for example, in self-developed energy modeling software.
Solar modules are built from a combination of different materials and minerals. Therefore, ultra-high-resolution airborne optical (Kurz, 2009) and hyperspectral (DLR, 2016) data were collected in 2018 and 2019 over the study regions Oldenburg and Ulm. Both data sets were collected with the DLR OpAiRS system, mounted on a Dornier aircraft, and post-processed by colleagues at the DLR Remote Sensing Technology Institute. Atmospheric correction and georeferencing are performed with the ATCOR-4 processor (Richter et al., 2012).
Deep learning methods, so-called convolutional neural networks (CNNs), are used for optical data analysis to identify energy infrastructures, such as the detection of photovoltaic modules, and to separate them from solar thermal and thin-film modules.
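The building blocks of such a CNN — convolution, ReLU activation, pooling, and a classifier layer — can be illustrated in a few lines of NumPy. This toy forward pass is not the trained detection network, only a sketch of how a learned filter can separate textured module surfaces from featureless roofs:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation (one channel, one filter)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def cnn_patch_score(patch, kernel, w, b):
    """Conv -> ReLU -> global average pooling -> logistic score in [0, 1]."""
    feat = np.maximum(conv2d(patch, kernel), 0.0)    # ReLU feature map
    pooled = feat.mean()                             # global average pooling
    return 1.0 / (1.0 + np.exp(-(w * pooled + b)))   # sigmoid classifier

# Toy usage: an edge-sensitive kernel responds to module grid lines
kernel = np.array([[1.0, -1.0], [1.0, -1.0]])
striped = np.tile([1.0, 0.0], (8, 4))   # 8x8 patch with vertical stripes
flat = np.ones((8, 8))                  # featureless patch
s1 = cnn_patch_score(striped, kernel, w=4.0, b=-2.0)
s0 = cnn_patch_score(flat, kernel, w=4.0, b=-2.0)
```

In a real CNN the kernels and classifier weights are learned from labelled patches instead of being hand-chosen, and many filters and layers are stacked.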
Available laboratory spectra from goniometer measurements of mono-, polycrystalline and thin-film photovoltaic modules (Gutwinski et al., 2018), as well as characteristic peak investigations, such as the normalized hydrocarbon index (nHI) (Clark et al., 2003 and 2009) of the ethylene vinyl acetate (EVA) layer of solar modules (Czirjak, 2017), were used to train a spectral-index algorithm for photovoltaic (PV) module detection. PV modules extracted by the trained index analysis show validation accuracies of up to 90.6%, but the method is restricted to mono- and polycrystalline module detection (Ji et al., 2021).
A definition of characteristic peaks for thin-film module detection is ongoing. Additionally, based on the optical flight data, building heights, the angle and orientation of roof surfaces, and an ultra-high-resolution digital surface model for the region of Oldenburg were generated. Their impact on modeling results, in comparison with OpenStreetMap input data, is investigated.
Results based on high-resolution flight data can be further applied to commercial and free satellite data sets such as WorldView, Sentinel-2 and EnMAP to enable large-scale, national or even European use. The balance between the loss of information due to the change in spatial resolution of the satellite data and the simultaneous gain of information is quantified and evaluated with regard to its relevance for energy system models.
References:
1. Alhamwi, A., Medjroubi, W., Vogt, T., & Agert, C. (2018). Modelling urban energy requirements using open source data and models. Applied Energy, 231, 1100-1108. DOI: 10.1016/j.apenergy.2018.09.164
2. Clark, R. N., G. A. Swayze, K. E. Livo, R. F. Kokaly, S. J. Sutley, J. B.Dalton, R. R. McDougal, and C. A. Gent (2003b), Imaging spectroscopy: Earth and planetary remote sensing with the USGS Tetracorder and expert systems, J. Geophys. Res., 108(E12), 5131, doi:10.1029/2002JE001847
3. Clark R., Curchin J. M., Hoefen T. M., Swayze G. A., 2009: Reflectance spectroscopy of organic compounds: 1. Alkanes, Journal of Geophysical Research E: Planets, Volume 114 (3); doi:10.1029/2008JE003150, http://pubs.er.usgs.gov/publication/70034984
4. D Czirjak, “Detecting photovoltaic solar panels using hyperspectral imagery and estimating solar power production,” J. Appl. Remote Sens. 11(2), 026007 (2017), doi: 10.1117/1.JRS.11.026007.
5. DLR Remote Sensing Technology Institute (IMF). (2016). Airborne Imaging Spectrometer HySpex. Journal of large-scale research facilities, 2, A93. http://dx.doi.org/10.17815/jlsrf-2-151
6. Gutwinski, M., Jürgens, C., & Rienow, A. (2018). Analysis of the spectral variability of urban surface materials based on a comparison of laboratory and hyperspectral image spectra. Unpublished Master's thesis, Ruhr-University Bochum, Geography Department, Geomatics/Remote Sensing Group
7. Ji, C., Bachmann, M., Esch, T., Feilhauer, H., Heiden, U., Heldens, W., Hueni, A., Lakes, T., Metz-Marconcini, A., Schroedter-Homscheidt, M. and Weyand, S., 2021. Solar photovoltaic module detection using laboratory and airborne imaging spectroscopy data. Remote Sensing of Environment, 266, p.112692.
8. Kurz, F. (2009). Accuracy assessment of the DLR 3K camera system. In: DGPF Tagungsband, 18, pp. 1-7. Deutsche Gesellschaft für Photogrammetrie, Fernerkundung und Geoinformation, DGPF Jahrestagung 2009, 2009-03-24 to 2009-03-26, Jena. ISSN 0942-2870.
9. R. Richter and D. Schläpfer, “Atmospheric / Topographic Correction for Airborne Imagery”, (ATCOR-4 User Guide, Version 6.2 BETA, February 2012)
10. Schroedter-Homscheidt, M., Azam, F., Betcke, J., Hoyer-Klick, C., Lefèvre, M., Wald, L., Wey, L., Saboret, L., (2021): CAMS solar radiation service user guide, technical report, DLR-VE, CAMS72_2018SC2_D72.4.3.1_2021_UserGuide_v1.
Tailings dams are generally large-scale geotechnical structures and ensuring their stability is of critical importance for safe and sustainable mine waste management. However, assessing dam stability remains a great challenge, and failures of significant scale keep occurring world-wide.
The following characteristics make tailings dams particularly vulnerable to failure: (a) embankments constructed of locally sourced fills (soils, coarse waste, overburden from mining operations and tailings); (b) multi-stage raising of the dam; (c) the lack of standardized regulations governing design criteria; and (d) high maintenance costs after mine closure. Upstream dams, where dam extensions are supported by the tailings themselves, are especially vulnerable to displacements which can trigger failure. The consequences of a dam failure can be severe, not only in the direct vicinity of the dams themselves, but also far downstream. Therefore, dam stability requires continuous monitoring and control during emplacement, construction, operation and after decommissioning.
Interferometric synthetic aperture radar (InSAR) has been applied to the study of many natural and anthropogenic phenomena. The near-global coverage of SAR data collected with the current generation of satellite constellations has provided an unprecedented amount of data over mining sites, tailings storage facilities, and downstream waterways. Specifically, the European Union's Copernicus Programme maintains a network of satellites, including the Sentinel-1 constellation, which has provided open-access radar data with medium spatial resolution and short repeat-pass intervals since 2014.
We present the applicability of InSAR analyses for monitoring displacements on and around tailings dams for several selected case studies, covering both intact dams with only expected displacements and recently collapsed dams. For the latter, we further investigate the potential existence of precursors and the applicability of the inverse velocity approach for predicting the date of failure. For example, for the Brumadinho dam failure in 2019, time series reaching back to 2015 were analyzed, comparing a number of acceleration periods with the one preceding the failure.
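The inverse velocity approach mentioned above fits a line to the reciprocal of the displacement velocity during an acceleration phase and extrapolates it to zero; a minimal sketch with synthetic accelerating creep:

```python
import numpy as np

def inverse_velocity_failure_date(t, velocity):
    """Fit a line to 1/v versus time and extrapolate to 1/v = 0.

    Classic Fukuzono-style inverse velocity method: during tertiary
    creep, 1/v decreases roughly linearly, and its zero crossing
    estimates the time of failure. t in days, velocity in mm/day."""
    inv_v = 1.0 / np.asarray(velocity, dtype=float)
    slope, intercept = np.polyfit(t, inv_v, 1)
    return -intercept / slope       # time at which 1/v reaches zero

# Synthetic accelerating displacement: v = v0 / (t_f - t), failing at t_f = 100
t = np.arange(60, 95, 5, dtype=float)
v = 50.0 / (100.0 - t)
t_failure = inverse_velocity_failure_date(t, v)
```

Real InSAR velocity series are noisy, so in practice the fit is applied to smoothed velocities and the predicted date carries an uncertainty interval.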
Urbanization and climate change are major challenges for cities today and will become even greater ones in the future, considering ongoing urbanization rates and rising temperatures. In order to prepare cities for future realities, good practice in urban planning demands scenario development with an appropriate database. With this work, we present a methodology to provide planners with residential electricity consumption data at the single-building level, based on building types, and with photovoltaic (PV) energy balances, in order to develop strategies to decarbonize the energy mix. Belmopan, the studied city, and Belize as a whole need to import a large share of their energy demand from neighboring countries. In this context, decentral PV solutions can contribute to reducing energy dependencies on other countries.
Using information from unmanned aerial vehicle (UAV) orthomosaics, eight residential building types, comprising four single-family and four multifamily types, are classified based on a random forest classifier and building-specific parameters, such as building footprint area, building height, roof complexity and building footprint shape indices. A household survey provided statistics on residential electricity consumption in relation to building type. Based on DSM information from UAV imagery processed with a structure-from-motion approach, and on solar radiation data from the National Solar Radiation Database (NSRDB), the PV energy potential was determined for each building. By differencing the PV energy potential and the building-type-related energy consumption, PV energy balances are calculated at the single-building level for the study areas.
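The per-building balance described above can be sketched as the difference between an assumed annual PV yield of the usable roof area and a building-type consumption value. All numbers below are illustrative placeholders, not the survey statistics:

```python
# Per-building PV energy balance: annual PV potential minus the
# building-type electricity consumption. Building types, consumption
# values and the specific yield are illustrative placeholders.

CONSUMPTION_KWH = {"single_family": 3200.0, "multi_family": 9600.0}

def pv_potential_kwh(roof_area_m2, specific_yield_kwh_per_m2=180.0):
    """Annual PV yield for the usable field of roof (assumed yield)."""
    return roof_area_m2 * specific_yield_kwh_per_m2

def energy_balance(building_type, usable_roof_m2):
    """Return (surplus/deficit in kWh, coverage rate in percent)."""
    potential = pv_potential_kwh(usable_roof_m2)
    consumption = CONSUMPTION_KWH[building_type]
    coverage_pct = 100.0 * potential / consumption
    return potential - consumption, coverage_pct

balance, coverage = energy_balance("single_family", usable_roof_m2=30.0)
```

A coverage rate above 100% marks a net PV surplus, as reported for the ideal scenario in the study.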
To prove the capability of applying different framework scenarios within this methodology, we compared the effects of installing two PV panels on the best-suited field of roof (FOR), as a realistic scenario, with fully equipping the best-suited FOR with PV panels, as an ideal scenario. In the realistic scenario, an average of 29.5% of the energy demand in residential buildings can be covered by PV, while the ideal scenario resulted in an average electricity coverage rate of 148%. In the ideal scenario, building types with large and unfragmented fields of roof naturally generate the highest PV energy surpluses, whereas in the realistic scenario, energy consumption determines the PV energy coverage rate. Therefore, socioeconomically weak groups can profit most from this scenario.
The presented methodology demonstrates the ability to test different scenarios and to provide planning-ready data for urban infrastructure planning, and therefore contributes to closing the gap between the data demands of urban planning and the data provided by remote sensing approaches. Furthermore, the results underline the potential of PV power in Belmopan to significantly decarbonize the energy mix.
To achieve the decarbonization goals set by the European Climate Foundation for 2050, it is crucial to meet the demand for Critical Raw Materials (CRM) and other commodities necessary for the production and storage of “green” energy (Blengini et al., 2020). Materials such as high-purity quartz, rare earth elements (REE), lithium (Li), beryllium (Be), cesium (Cs), niobium (Nb) and tantalum (Ta) are commonly found in pegmatite rocks. Therefore, the aim of the GREENPEG project is to develop and test an innovative multi-method exploration toolset applicable to both niobium-yttrium-fluorine (NYF) and lithium-cesium-tantalum (LCT) chemical pegmatite types. The final goal is to find outcropping and buried pegmatite deposits within Europe. The exploration toolset is being developed in three European demonstration sites: (i) Tysfjord (Norway); (ii) Leinster (Ireland); and (iii) Wolfsberg (Austria). Distinct exploration methods are being developed at different scales, namely province, district and prospect scales.
This work focuses on the province-scale methodology through the exploitation of available Sentinel-1 and Sentinel-2 data from the Copernicus program. The objectives of this work were to: (i) use Sentinel-1 synthetic aperture radar (SAR) images to identify province-scale tectonic structures, such as faults, that may have controlled pegmatite melt emplacement; and (ii) use Sentinel-2 images to directly identify the spectral features of the pegmatite bodies.
First, a satellite image database was built with all the image tiles necessary to cover all demonstration sites at the province scale. Several criteria were defined for choosing the images, namely: (i) the cloud cover (less than 10%); (ii) the vegetation coverage (defined through Normalized Difference Vegetation Index - NDVI - computation); (iii) the snow coverage (defined through Normalized Difference Snow Index - NDSI - computation); and (iv) the season of the year at the time of image acquisition. These criteria ensure that the acquired images present the lowest possible cloud, vegetation and snow coverage. Sentinel-1 images were selected due to: (i) acquisition in the C-band (3.75–7.5 cm); and (ii) easy integration with Sentinel-2 products. To choose the best Sentinel-1 images, several criteria were taken into account: (i) the spatial coverage of the study areas; (ii) the adequacy of the product specifications considering the study objectives; and (iii) an acquisition date close to that of the corresponding, already pre-processed Sentinel-2 images. Among the several acquisition modes, the Interferometric Wide (IW) swath with dual polarization (HH+HV or VV+VH, where H: Horizontal and V: Vertical) was selected due to its adequacy for land applications.
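The NDVI/NDSI-based screening criteria can be sketched as follows; the band choices follow the standard Sentinel-2 index definitions, while the thresholds and coverage limits are illustrative assumptions rather than the project's values:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index (Sentinel-2: B8, B4)."""
    return (nir - red) / (nir + red)

def ndsi(green, swir):
    """Normalized Difference Snow Index (Sentinel-2: B3, B11)."""
    return (green - swir) / (green + swir)

def scene_passes(nir, red, green, swir, cloud_fraction,
                 max_cloud=0.10, max_veg=0.5, max_snow=0.4):
    """Apply the image selection criteria; thresholds are illustrative."""
    veg_fraction = float(np.mean(ndvi(nir, red) > 0.4))    # 'vegetated' pixels
    snow_fraction = float(np.mean(ndsi(green, swir) > 0.4))  # 'snowy' pixels
    return (cloud_fraction < max_cloud
            and veg_fraction < max_veg
            and snow_fraction < max_snow)

# Toy reflectance arrays for a sparsely vegetated, snow-free scene
nir = np.array([0.30, 0.32])
red = np.array([0.25, 0.25])
green = np.array([0.20, 0.20])
swir = np.array([0.18, 0.19])
ok = scene_passes(nir, red, green, swir, cloud_fraction=0.05)
```

The same pattern extends to the seasonal criterion by additionally filtering on acquisition month.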
Pre-processing of all Sentinel-1 SAR images was carried out in the Sentinel-1 Toolbox (S1TBX), while Sentinel-2 pre-processing was done using the Semi-Automatic Classification Plugin (SCP) in the QGIS software (Congedo, 2016). After geographic clipping of the Sentinel-1 images to the province scale, several pre-processing steps were followed, namely: (i) orbit correction; (ii) thermal noise removal; (iii) radiometric calibration; (iv) speckle filtering; (v) terrain correction; and (vi) final geographic trimming. In the case of the optical Sentinel-2 images, depending on the size of the study area, mosaics were produced where necessary to cover the entire area. Image pre-processing included masking and mosaic creation, as mentioned before, and atmospheric correction of the images (using the Dark Object Subtraction (DOS) technique) to obtain surface reflectance values.
Next, lineaments were automatically extracted from both VV- and VH-polarised Sentinel-1 images using the LINE algorithm of PCI Geomatica 2018 in a three-stage process: (i) edge detection; (ii) thresholding; and (iii) curve extraction. In each step, several parameters were optimized through a trial-and-error method. After the automatic extraction of the lineaments, a visual inspection was conducted in the QGIS software to manually remove all lineaments related to the coastline and human infrastructure. For the Sentinel-2 data, several traditional image processing techniques were employed, taking into account the algorithms proposed by Cardoso-Fernandes et al. (2019b): (i) RGB combinations; (ii) band ratios; and (iii) Principal Component Analysis (PCA).
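The first two stages of such a lineament extraction — edge detection and thresholding — can be sketched with a Sobel gradient filter; the curve extraction stage of PCI Geomatica's LINE algorithm is omitted here:

```python
import numpy as np

def sobel_edges(img):
    """Stage (i) edge detection: Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    H, W = img.shape
    gx = np.zeros((H - 2, W - 2))
    gy = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            win = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    return np.hypot(gx, gy)

def edge_mask(img, threshold):
    """Stage (ii) thresholding of the edge-strength image."""
    return sobel_edges(img) > threshold

# Toy backscatter image with a vertical step (a linear feature)
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mask = edge_mask(img, threshold=2.0)
```

Stage (iii) would then link the masked edge pixels into polylines, whose azimuths feed the rose diagrams discussed below.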
Once all unwanted lineaments were removed in the visual inspection step, rose diagrams were built with the mean directions of the extracted lineaments. In Tysfjord, the VV-polarization image allowed the identification of more lineaments with a NE-SW trend, while the VH polarization enhanced structures along the ENE-WSW direction. However, most of the extracted lineaments are related to mountain ridges or regional structures (especially where the Caledonian nappes outcrop). This, together with the lineaments previously removed along the land-water transition in the original dataset, indicates that topography had a large effect on lineament extraction. Several difficulties in the successful application of the traditional techniques to the Sentinel-2 images were identified, such as: (i) the snow/vegetation coverage; (ii) the outcrop size versus the spatial resolution of the images; and (iii) the spectral confusion with other within-scene elements (e.g., roads). These are in line with the constraints identified in similar applications in the Iberian Peninsula (Cardoso-Fernandes et al., 2019a). However, these methods also allowed the identification of possible areas of interest for pegmatite exploration. Moreover, the methods previously employed to detect LCT pegmatites also allowed the detection of NYF pegmatites in Tysfjord, although in some cases (RGB combinations) the pegmatites presented only slight color differences.
The results obtained corroborate the potential of Sentinel-1 and Sentinel-2 data for pegmatite exploration at the province scale. Nonetheless, the Copernicus data need to be further exploited in the future. For example, additional pre-processing of the Sentinel-1 data will be performed to decrease the topographic effect. Also, a spectral library of pegmatite samples from the demonstration cases will be constructed, and the reference spectra will be used to further refine the employed image processing methods. In the end, the results will be integrated with supervised classification approaches using machine learning algorithms (Teodoro et al., 2021) and with existing province-scale radiometry, magnetometry, and electromagnetic data to produce target exploration maps for the three demonstration sites.
References
Blengini, G. A., Latunussa, C. E. L., Eynard, U., Torres de Matos C., Wittmer, D., Georgitzikis, K., Pavel, C., Carrara, S., Mancini, L., Unguru, M., Blagoeva, D., Mathieux, F. & Pennington D. (2020). Study on the EU's list of Critical Raw Materials Final Report. European Commission. https://doi.org/10.2873/11619.
Cardoso-Fernandes, J., Lima, A., Roda-Robles, E., & Teodoro, A. C. (2019a). Constraints and potentials of remote sensing data/techniques applied to lithium (Li)-pegmatites. The Canadian Mineralogist, 57(5), 723-725. doi: 10.3749/canmin.AB00004.
Cardoso-Fernandes, J., Teodoro, A. C., & Lima, A. (2019b). Remote sensing data in lithium (Li) exploration: A new approach for the detection of Li-bearing pegmatites. International Journal of Applied Earth Observation and Geoinformation, 76, 10-25. doi: https://doi.org/10.1016/j.jag.2018.11.001.
Congedo, L. (2016). Semi-Automatic Classification Plugin Documentation. DOI: 10.13140/RG.2.2.29474.02242/1.
Teodoro, A. C., Santos, D., Cardoso-Fernandes, J., Lima, A., & Brönner, M. (2021). Identification of pegmatite bodies, at a province scale, using machine learning algorithms: preliminary results. Proc. SPIE 11863, Earth Resources and Environmental Remote Sensing/GIS Applications XII, SPIE Remote Sensing. https://doi.org/10.1117/12.2599600.
Acknowledgements
This study is funded by the European Commission’s Horizon 2020 innovation programme under grant agreement No 869274, project GREENPEG New Exploration Tools for European Pegmatite Green-Tech Resources. The Portuguese partners also acknowledge the support provided by Portuguese National Funds through the FCT – Fundação para a Ciência e a Tecnologia, I.P. (Portugal) projects UIDB/04683/2020 and UIDP/04683/2020 — ICT (Institute of Earth Sciences).
The Copernicus Atmosphere Monitoring Service (CAMS) offers a solar radiation service (CRS) providing information on surface solar irradiance (SSI). The service meets the needs of European and national policy development and the requirements of (partly commercial) downstream services in the solar energy sector, e.g. for planning, monitoring, efficiency improvements, and integration of renewable energies into the energy supply grids.
At present, the service is derived from Meteosat Second Generation (MSG) data. CRS provides clear-sky and all-sky time series, combining satellite data products with numerical model output from CAMS on the optical state of the atmosphere. The clear-sky and all-sky products are available from 2004 up to the previous day through the CAMS Radiation Service portal and the Atmospheric Data Store (ADS) in the Copernicus portal, making use of the SoDa portal capabilities.
The service quality is ensured through regular monitoring and evaluation of input parameters, quarterly benchmarking against ground measurements and automatic consistency checks.
Variability of surface solar irradiance at the 1-minute scale is of special interest for solar energy applications, and a variability-based analysis can help assess the impact of recent improvements in the derivation of all-sky irradiance under different cloud conditions. The variability classes can be defined based on ground-based as well as satellite-based measurements. This study will show the evaluation of the CAMS CRS based on the eight variability classes derived from ground observations of direct normal irradiance (DNI) (Schroedter-Homscheidt et al., 2018).
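As a much-simplified illustration of the idea (not the eight-class scheme of Schroedter-Homscheidt et al., 2018), an hour of 1-minute data can be binned by the standard deviation of its clear-sky index:

```python
import numpy as np

def clear_sky_index(ghi, ghi_clear):
    """Ratio of measured to clear-sky irradiance (dimensionless)."""
    return np.asarray(ghi, dtype=float) / np.asarray(ghi_clear, dtype=float)

def variability_class(kc_1min, bins=(0.02, 0.05, 0.1, 0.2)):
    """Toy proxy: bin the within-hour std of the 1-minute clear-sky
    index into classes 1 (stable) .. len(bins)+1 (highly variable).
    Bin edges are illustrative placeholders."""
    return int(np.digitize(np.std(kc_1min), bins)) + 1

# 60 one-minute values for a stable clear hour and a broken-cloud hour
clear_hour = np.full(60, 0.98)
cloudy_hour = 0.5 + 0.4 * np.random.default_rng(1).random(60)
c1 = variability_class(clear_hour)
c2 = variability_class(cloudy_hour)
```

The published classification additionally uses the temporal structure of the DNI signal, not just its spread, to separate the eight classes.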
The CRS service evolution includes its extension to other parts of the globe. Highlights of the framework development towards operational implementation, with a focus on HIMAWARI-8 operated by the Japan Meteorological Agency (JMA), will be presented.
References:
CAMS Radiation Service (clear sky): http://solar.atmosphere.copernicus.eu/cams-mcclear
CAMS Radiation Service (all-sky): http://solar.atmosphere.copernicus.eu/cams-radiation-service
Copernicus portal: http://atmosphere.copernicus.eu/
Schroedter-Homscheidt, M., S. Jung, M. Kosmale, 2018: Classifying ground-measured 1 minute temporal variability within hourly intervals for direct normal irradiances. – Meteorol. Z. 27, 2, 160–179. DOI:10.1127/metz/2018/0875
The global distribution of cropping intensity (CI) is critical to our understanding of the intensity of arable land use and of management practices on the planet. The widespread availability and open sharing of satellite remote sensing data has revolutionized our ability to monitor large-area cropping intensity efficiently and rapidly. High-accuracy global cropping intensity extraction remains a major challenge due to significant regional differences in cropland fragmentation, diverse utilization patterns, and the influence of clouds and rain. Existing cropping intensity products have low resolution and high uncertainty, which makes it difficult to accurately represent highly heterogeneous and fragmented areas. This study uses massive multi-source remote sensing data for global cropping intensity mapping. All available top-of-atmosphere (TOA) reflectance images from Landsat-7 ETM+, Landsat-8 OLI, Sentinel-2 MSI and MODIS during 2016–2018 were used for cropping intensity mapping via the Google Earth Engine (GEE) platform. To overcome the multi-sensor mismatch issue, an inter-calibration approach was adopted, converting Sentinel-2 MSI and Landsat-8 OLI TOA reflectance data to the Landsat-7 ETM+ standard. The calibrated images were then composited into 16-day TOA reflectance time series using a maximum-value composite method. To ensure data continuity, the MODIS NDVI product was used to fill temporal gaps as follows. First, the 250 m MODIS NDVI product was resampled to 30 m using a bicubic algorithm. The Whittaker smoother was then applied to the gap-filled NDVI time series. We derived two phenology metrics, mid-greenup and mid-greendown, as the day of year (DOY) at the transition points in the greenup and greendown periods where the smoothed NDVI time series crosses 50% of the NDVI amplitude.
An interval from mid-greenup to mid-greendown is defined as a growing phenophase, and an interval from mid-greendown to mid-greenup as a non-growing phenophase (Liu et al., 2020; Zhang et al., 2021). Using this algorithm, with Google Earth Engine as the data processing platform and a 5° grid as the processing unit, cropping intensity was extracted grid by grid, yielding the first global 30 m resolution cropland cropping intensity product (GCI30). The validation results show an overall accuracy of 92.9%, which not only exceeds that of existing cropping intensity products but also significantly improves the characterization of the spatial details of cropping intensity. GCI30 indicates that single cropping is the primary agricultural system on Earth, accounting for 81.57% of the world's cropland extent, whereas multiple-cropping systems are commonly observed in South America and Asia. We found large variations across countries and agroecological zones, reflecting the joint control of natural and anthropogenic drivers on cropping practices.
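The phenophase definition above suggests a simple per-pixel counting rule. The sketch below (illustrative, not the GCI30 production code) counts growing cycles as upward/downward crossings of the 50%-amplitude threshold in one year of smoothed NDVI:

```python
import numpy as np

def cropping_intensity(ndvi, doy):
    """Count growing phenophases in one year of smoothed NDVI.
    An upward crossing of the 50%-amplitude threshold marks
    mid-greenup, a downward crossing mid-greendown; each
    greenup -> greendown pair is one growing cycle."""
    ndvi = np.asarray(ndvi, float)
    thresh = ndvi.min() + 0.5 * (ndvi.max() - ndvi.min())
    above = (ndvi >= thresh).astype(int)
    trans = np.diff(above)               # +1 = mid-greenup, -1 = mid-greendown
    mid_greenup = doy[1:][trans == 1]
    mid_greendown = doy[1:][trans == -1]
    ci = min(len(mid_greenup), len(mid_greendown))
    return ci, mid_greenup, mid_greendown
```

A double-cropping pixel produces two greenup/greendown pairs and hence CI = 2; a single-cropping pixel produces one.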
The GCI30 dataset is freely available on the Harvard Data Commons (https://doi.org/10.7910/DVN/86M4PO), and the data product will provide scientific data to support the assessment of the global potential of cropland replanting, food yield increase, food security prediction and early warning, and the achievement of UN sustainable development goals such as zero hunger.
In the last decade, much attention has been devoted to crop mapping [1] because of the need to better monitor and manage food production [2]. This is particularly true for developing countries, where proper knowledge of the status of agricultural areas is needed to ensure that the agricultural infrastructure develops in accordance with population and economic growth. In this context, this paper presents the activities planned in the framework of the project “Developing a spatially-explicit agricultural database in support of agricultural planning”. The project, supported by the Ministry of Agriculture, Forestry and Fisheries (MAFF) of Japan, will be implemented by the Statistics Division of the Food and Agriculture Organization (FAO) of the United Nations in close collaboration with the Ministries of Agriculture, National Statistical Offices (NSOs), academia and national/regional geoscience institutions in selected countries of the Asian region. The project aims to increase the availability and quality of farmland information to support the definition of effective farming incentive schemes and the formulation of smart agriculture/micro-finance programs, as well as to improve reporting on Sustainable Development Goal (SDG) indicator 2.4.1 for sustainable agriculture. In this framework, the Faculty of Geo-Information Science and Earth Observation (ITC) of the University of Twente will be the main implementing partner of the FAO Statistics Division for the development of a geospatial database of rice farms.
In greater detail, ITC will develop a workflow for mapping rice field boundaries in Cambodia and Viet Nam, where rice paddies occupy a large portion of the agricultural area (e.g., almost 80 percent of the harvested area in Cambodia). Although much effort has been devoted in the literature to crop delineation methods [3], [4], these particular study areas require a tailored workflow to face two main challenges: (1) the cloud coverage, which heavily affects most optical satellite data acquired over the year, and (2) fragmented agricultural areas characterized by very small fields (i.e., less than 1 ha). Under these conditions, boundary delineation methods defined for High Resolution (HR) data such as Sentinel-2 may prove less effective than in areas with large agricultural fields (see Figure 1) [5]. Within the project, we will investigate the use of Very High Resolution (VHR) multispectral imagery such as Planet and WorldView-3 data, which guarantee very high geometrical detail (i.e., from 3 m to 30 cm spatial resolution). However, the main drawback of these data is their cost, which hampers their use from an operational viewpoint. To provide a workflow that can be used to continually update the crop boundary database, one of the goals of the project is to exploit the full, open and free Sentinel-2 satellite data. While the VHR optical images will be used to obtain a clear picture of the rice paddy boundaries within the agricultural year [6], the Sentinel-2 sensor guarantees frequent coverage free of charge that can be employed to continually update the rice paddy map. Both multitemporal and single-image approaches, based on the integration of Sentinel-2 and VHR optical images [7], will be explored.
Finally, we plan to investigate the possibility of leveraging VHR and HR Synthetic Aperture Radar (SAR) images to mitigate the severe cloud coverage problem, which may hamper the use of optical images in some seasons.
The expected outputs of the project consist of: (1) the development of a spatial layer of rice field boundaries in the form of geospatial polygons, and (2) the assessment of the suitability of these layers to support farm-level data collection in the form of spatial, qualitative and quantitative attribute information for each farmland parcel/polygon, including the farm-level data required to report on SDG 2.4.1. Field campaigns will be planned to properly validate and refine the crop delineation results. Technical guidelines will also be developed as part of the project activities to address scalability, maintenance and updating of the spatial farm layers.
[1] G. Weikmann, C. Paris and L. Bruzzone, "TimeSen2Crop: A Million Labeled Samples Dataset of Sentinel 2 Image Time Series for Crop-Type Classification," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 4699-4708, 2021.
[2] B. Mishra, L. Busetto, M. Boschetti, A. Laborte, and A. Nelson, "RICA: A rice crop calendar for Asia based on MODIS multi year data," International Journal of Applied Earth Observation and Geoinformation, vol. 103, 102471, 2021.
[3] Y. T. Solano-Correa, F. Bovolo, L. Bruzzone and D. Fernández-Prieto, "A Method for the Analysis of Small Crop Fields in Sentinel-2 Dense Time Series," in IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 3, pp. 2150-2164, March 2020, doi: 10.1109/TGRS.2019.2953652.
[4] K.M. Masoud, C. Persello, V.A. Tolpekin, “Delineation of Agricultural Field Boundaries from Sentinel-2 Images Using a Novel Super-Resolution Contour Detector Based on Fully Convolutional Networks," in Remote Sens. 2020, 12, 59. https://doi.org/10.3390/rs12010059
[5] M. Wu, W. Huang, Z. Niu, Y. Wang, C. Wang, W. Li, P. Hao, B. Yu, "Fine crop mapping by combining high spectral and high spatial resolution remote sensing data in complex heterogeneous areas," Computers and Electronics in Agriculture, vol. 139, pp. 1-9, 2017.
[6] C. Persello, V.A. Tolpekin, J.R. Bergado, R.A. de By, "Delineation of agricultural fields in smallholder farms from satellite images using fully convolutional networks and combinatorial grouping," Remote Sensing of Environment, vol. 231, 111253, 2019.
[7] Rao, P.; Zhou, W.; Bhattarai, N.; Srivastava, A.K.; Singh, B.; Poonia, S.; Lobell, D.B.; Jain, M. Using Sentinel-1, Sentinel-2, and Planet Imagery to Map Crop Type of Smallholder Farms. Remote Sens. 2021, 13, 1870. https://doi.org/10.3390/rs13101870
This contribution addresses the identification of phenological phases of wheat, sugar beet, and canola using a complementary set of interferometric (InSAR) and polarimetric (PolSAR) time series derived from Sentinel-1 (S-1) Synthetic Aperture Radar (SAR). Breakpoints and extreme values are calculated during the growing season at DEMMIN (Germany), one of the test sites of the international Joint Experiment on Crop Assessment and Monitoring (JECAM). The in situ data used to validate such frameworks are gathered during various campaigns at the DEMMIN site. In first results for the year 2017, a distinction between vegetative and reproductive stages of wheat and canola could be achieved by combining breakpoints and extrema of S-1 features. Certain phenological stages, measured in situ using the BBCH scale, such as leaf development and rosette growth of sugar beet or stem elongation and ripening of wheat, were detectable by a combination of InSAR coherence, polarimetric Alpha and Entropy, and backscatter (VV/VH). The general tracking accuracy, calculated as the temporal difference between in situ observations and breakpoints or extrema, varied from zero to five days. The largest number of breakpoints and extrema was produced by the backscatter time series. Nevertheless, certain micro-stages, such as leaf development at BBCH 10 of sugar beet or flowering at BBCH 69 of wheat, could only be tracked by InSAR coherence and Alpha. In addition, the transitions from early to late leaf development and from early to late rosette development of sugar beet were successfully identified by a combination of InSAR coherence and Kennaugh matrix elements. It is therefore assumed that a complementary database of PolSAR and InSAR features increases the number of detectable phenological stages of wheat, canola and sugar beet.
In regard to ongoing and future research, the challenges of integrating such a tracking framework in an Open Data Cube (ODC) environment to improve its scalability and transferability are discussed as well. Next steps will address the transferability and generalization of observations made during this study, i.e. in the context of a common JECAM experiment that includes crop phenology.
Keywords: PolSAR; InSAR; Kennaugh matrix; time series; Sentinel-1; crop phenology; DEMMIN; ODC
Biophysical and biochemical traits of plants are linked to photosynthesis and nutrition processes during the growth cycle. There has been significant exploration and improved understanding of such traits in recent years, from both physical measurement and remotely sensed estimates. While these analyses have explored both magnitude and correlations, they have not directly explored the detailed temporal dynamics and co-dynamics of traits. The era of big data in Earth Observation (EO), and in particular the Copernicus Sentinel-2 (S2) mission, changes that.
In this study, we estimate and characterise the co-dynamics of the full set of crop canopy traits using the PROSAIL model over multiple years of S2 reflectance data over the US, UK and China. 100,000 random samples are taken from each S2 tile over those regions for different crop types, taken from publicly available crop classification maps. Each sampled S2 spectrum is mapped to each of the PROSAIL canopy parameters using a machine learning approach. As expected, many of the parameters estimated in this way are very noisy, as they are only weakly constrained by the information in a single-date S2 observation. We therefore seek an empirical model of the dynamics of the suite of PROSAIL crop parameters to better estimate these parameters. From this large number of samples of each biophysical parameter, we normalise for phenology variations, then calculate and characterise what we propose as ‘archetype’ temporal patterns of crop traits for a set of crops. We validate these archetypes against publicly available datasets. From the archetypes and the phenology model, we develop an empirical model of the dynamics of crop biophysical parameters. Such a model allows us to simulate a full time series of hyperspectral reflectance for a given crop, using the PROSAIL model and a localised soil constraint. This is the first time that crop hyperspectral reflectance can be simulated with temporal prior information on the biophysical parameters and phenology. Such a model expresses the full information content of the optical EO signal as interpreted by the PROSAIL model. It should allow us to distinguish crop types based on biophysical parameter dynamics, among many other uses, and to assimilate reflectance measurements from any optical sensor.
We demonstrate this empirical model within an ensemble-based variational data assimilation system to provide robust per-pixel estimation of the full suite of PROSAIL plant traits from Earth Observation. Using broad prior information on the phenology and magnitudes of biophysical parameters, we simulate an ensemble of reflectance time series with our proposed model. Time series of satellite observations (S2) are then matched to these to provide a posterior estimate of the biophysical parameters, with uncertainty. We test the retrievals over different crops and validate against ground measurements in the US, Germany and China. The validation shows close agreement between the retrievals and the independent ground measurements. We also examine issues of temporal and spectral sampling to show how such factors affect the uncertainty in the derived biophysical parameters.
Wheat and maize make up two thirds of the world's food energy intake. In China, the dominant winter wheat-summer maize double cropping system of the North China Plain (NCP) produces half of the country's wheat and about a third of its maize. Monitoring this agricultural system, made up of many small farms over a large area, is now possible at field scale by making use of frequent acquisitions from sensors with adequate spatial characteristics such as Sentinel-2 (S2) or Landsat. Mechanistic crop growth models such as WOFOST provide predictions of crop development, functioning and yield in response to meteorological forcing for a given set of model parameters. Here, we develop a variational data assimilation system that constrains that parameter set at the pixel level with S2-derived biophysical parameters, given a wide a priori parameter distribution, and apply it to yield prediction.
The a priori constraints are developed for the NCP using initial parameter distributions from the global literature, calibrated to local conditions by pairing official statistics on county-level yield with final storage organ mass estimates from Monte Carlo sampling of parameters matched to sample S2-derived LAI trajectories for each county of interest. In the same process, we estimate a “harvest index” that maps model-simulated storage organ mass to grain yield (as reported in official statistics) and a clumping index.
The pixel-level variational data assimilation then localises the parameter distribution with 20 m resolution S2-derived LAI trajectories, which also produces localised yield estimates. Given the relatively slow spatial variation of meteorological constraints, an ensemble distribution is efficiently calculated over a coarse-resolution grid (tens of km) to define sample plausible trajectories over the a priori model parameter distribution that pertain to many Sentinel-2 pixels.
For each pixel, the LAI ensemble is compared with S2-derived LAI observations, and a bi-squared metric is used to develop the localised weighting on the parameter set and hence on the associated ensemble grain yields (and their uncertainty). We show the application of this approach to wheat and maize monitoring at multiple scales across the NCP.
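The per-pixel weighting step can be sketched as follows. This is a hedged illustration: the abstract does not give the exact form of the bi-squared metric, so a standard Tukey bisquare weight on the RMS LAI misfit is assumed, and the function names and cutoff c are ours.

```python
import numpy as np

def bisquare(r, c):
    """Tukey bisquare weight: (1 - (r/c)^2)^2 for |r| < c, else 0."""
    u = np.abs(r) / c
    return np.where(u < 1.0, (1.0 - u ** 2) ** 2, 0.0)

def localised_yield(lai_ensemble, lai_obs, yields, c=1.0):
    """Weight each ensemble member by the RMS misfit of its LAI
    trajectory against the S2-derived LAI of this pixel, then
    return the weighted mean yield and its weighted std. dev."""
    rms = np.sqrt(np.mean((lai_ensemble - lai_obs) ** 2, axis=1))
    w = bisquare(rms, c)
    w = w / w.sum()
    mean = np.sum(w * yields)
    std = np.sqrt(np.sum(w * (yields - mean) ** 2))
    return mean, std
```

Members whose LAI trajectory is far from the observations receive zero weight, so the posterior yield distribution is driven by the plausible trajectories only.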
The remote sensing community in general, and the land use/land cover classification community in particular, is currently at a stage where potential applications seem endless, high-resolution satellite imagery is freely available at national scales, and sophisticated classification algorithms perform better with each new generation. This is especially true for agricultural crop classification, where many studies published in the past five years have implemented one of many iterations of deep learning algorithms. Compared to the rapid development of new algorithms, the model input has been rather stable, mostly consisting of multispectral global sensors like Landsat, Synthetic Aperture Radar (SAR) satellites like Sentinel-1 and very high-resolution optical sensors like Gaofen.
Our study is contributing to the current trend by providing an extensible framework which trains classification models on small spatial and temporal subsets, validates these models on the same subsets, as well as on subsets that are spatially independent, temporally independent and both. Given these four validation scopes, a very robust and differentiated perspective on the generalization of a given model is possible.
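The four validation scopes can be expressed as a simple partition over (region, year) keys. A minimal sketch, with field names that are illustrative rather than taken from the study's code:

```python
def validation_scopes(samples, train_region, train_year):
    """Partition samples into the four validation scopes:
    'train'    - same region and year as the training subset,
    'spatial'  - different region, same year,
    'temporal' - same region, different year,
    'both'     - different region and different year."""
    scopes = {"train": [], "spatial": [], "temporal": [], "both": []}
    for s in samples:
        same_r = s["region"] == train_region
        same_y = s["year"] == train_year
        if same_r and same_y:
            key = "train"
        elif same_y:
            key = "spatial"
        elif same_r:
            key = "temporal"
        else:
            key = "both"
        scopes[key].append(s)
    return scopes
```

Evaluating one trained model on all four partitions gives the differentiated view of generalization described above.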
We apply our framework to a low mountainous region in western Germany, where the landscape is mostly defined by forests, pastures and maize fields. We performed a binary classification of maize for four years, 2016 to 2019, using monthly averages of Sentinel-1 backscatter coefficients as input data. We found that classical pixel-based machine learning classifiers such as random forests achieved superior performance within the training scope, whereas modern deep learning algorithms such as UNet performed significantly better on datasets from a different year or a different region. We conclude that convolution-based algorithms generalize much more consistently and show no signs of overfitting to existing geometric field patterns. As such, they show great promise for the development of fully operational crop classification models.
Climate change impacts have accounted for a global decline of 5.5% in wheat yield. The decline is expected to continue by another 1.6% due to trends in temperature, precipitation, and carbon dioxide. This study investigates satellite-based approaches for crop growth monitoring and yield forecasting in two geographically distinct countries, Poland and South Africa. Cereal production in Africa is very low, and wheat accounts for less than 2% of all the wheat grown in developing countries. South Africa and Ethiopia produce about 80% of the wheat on the continent, yet South Africa remains a net importer of wheat. Drought is one of the major natural disasters affecting agricultural production in both countries. Droughts occur almost every year, usually at different times of the growing season, and the resulting yield reduction depends on the phenological stage at which drought occurs.
The joint project between the two countries investigated satellite-based crop growth monitoring approaches using Terra MODIS and Sentinel-2 in conjunction with ground-based meteorological data to determine crop water requirements and irrigation timing, as well as crop yield predictions for winter wheat in both countries. Ground data acquired over the same period were used to develop the model for crop yield estimates and irrigation timing. The MODIS data comprised eleven years of observations within the 2003-2021 period and covered over 100 wheat fields.
Field measurements were conducted at Joint Experiment for Crop Assessment and Monitoring (JECAM) sites. The study areas are JECAM Poland, the Wielkopolska cropland region in western Poland with patched fields of mainly wheat, rape, sugar beet and maize, and JECAM South Africa, the Eastern and Western Free State for winter (wheat) and summer (maize) crops. The Elementary Sampling Unit (ESU) for all measurements was a 30 x 30 m square, for the correct characterization of a 10 m Sentinel-2 pixel. Measurements were taken at the north, south, east, and west corners to capture all variation present within sites. Vegetation sampling was designed in a square, with samples taken at different locations under representative conditions. The in situ measurements include in particular: high-resolution spectral measurements covering the VIS-SWIR spectral range (350-2500 nm), leaf area index, soil moisture, wet and dry biomass, type of vegetation cover and its phenological stage. The measurements were recorded every 3 weeks during the growing season. Meteorological conditions were measured continuously (i.e. air temperature and humidity, wind speed and direction, precipitation and net radiation). Ground observations of the winter wheat fields comprised crop phenology and crop yield data. The air temperature data were incorporated into the crop yield model, the rainfall was used for validation, and the model was subsequently adapted to Sentinel-2 data.
The data were analyzed using accumulated 8-day NDVI (MOD09Q1) and the accumulated 8-day difference between LST (MOD11A2) and air temperature (TA) from meteorological data. A rapid increase in the accumulated NDVI curve occurs at a low accumulated LST-TA difference (∑LST-TA) and results in high yields at the end of the season. During a dry season, however, the accumulated difference between LST and TA increases sharply, resulting in a lower rate of NDVI accumulation. In a good growing season, crop heading occurred earlier, at a lower accumulated temperature difference (∑LST-TA), than in a dry season, with a direct effect on crop yield. Crop water demand at the development stages was extracted from the analysis of crop growth conditions. FPAR was used to determine the different crop phenophases. The results were verified using meteorological data such as rainfall between the different crop phenophases, measured crop yield and ground-truth data.
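The accumulation of the two indicators can be sketched as follows; this is a minimal illustration of the quantities described above, with variable names of our own choosing:

```python
import numpy as np

def accumulated_indicators(ndvi_8day, lst_8day, ta_8day):
    """Running totals over the season of 8-day NDVI (SumNDVI) and of
    the 8-day LST minus air temperature difference (SumLST-TA).
    A steep SumNDVI rise at low SumLST-TA indicates favourable
    growth; a fast-growing SumLST-TA indicates water stress."""
    cum_ndvi = np.cumsum(np.asarray(ndvi_8day, float))
    cum_dlst = np.cumsum(np.asarray(lst_8day, float)
                         - np.asarray(ta_8day, float))
    return cum_ndvi, cum_dlst
```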
The results varied depending on the prevailing meteorological conditions in a given year, the fertilization, and the irrigation methods used. In Poland, during the surveyed period, the highest yield was obtained in 2004 (100.3 dt ha-1); winter wheat had already reached the heading phenological phase at just 145 DOY, with ∑NDVI and ∑LST-TA values of 6 and 45 °C. The averages for the other years, in which low yields were recorded, were 5 and 80 °C respectively at the heading stage. In the South African study area, 2019 brought worse conditions for wheat development and a later increase in NDVI, while 2020 brought better conditions, with the NDVI increase occurring a month earlier. The highest yields (> 100 dt ha-1) were observed for tilled and irrigated fields sown with cultivar Pan 3471, while the lowest yields (< 50 dt ha-1) occurred in rainfed fields with fertilizer and herbicide applied once, sown with cultivar SST 347.
The research work was conducted within the project financed by the National Centre for Research and Development under Contract No. PL-RPA/02/SAPOL4Crop/43/2018, titled "SA Polish collaborative crop growth monitoring and yield assessment system for early warning utilizing new satellite Earth Observations data from Copernicus Programme".
The Common Agricultural Policy (CAP) of the European Union has been implemented continuously for many years with different solutions and approaches, always in full compliance with sustainable development principles. Earth Observation techniques, such as remote detection of crop types, are necessary for the correct implementation of the CAP by the competent paying agencies in individual EU Member States. For these purposes, the most desirable solutions are tailor-made ones that translate innovative image processing methods into operational agricultural monitoring. In many cases, cloud computing platforms have become suitable environments for developing such dedicated applications. Thanks to immediate and direct access to the Copernicus data repository with efficient and dynamically scalable computing power, one of the DIAS platforms, CREODIAS, supports CAP projects and tool implementation, such as Sen4CAP and Agrotech. The Sen4CAP project was funded by ESA and carried out by a consortium led by the Université catholique de Louvain; its outcome, ready-to-use software as a service, is available on the CREODIAS cloud. The solution applies machine learning algorithms to Sentinel-1, Sentinel-2 and Landsat 8 data combined with in-situ information from the Land Parcel Identification System (LPIS) to generate products such as a cultivated crop type map, a grassland mowing product, a vegetation status indicator and an agricultural practices monitoring product. While Sen4CAP is already used successfully and operationally by many European countries, the Agrotech project is still evolving. The aim of the Agrotech project is to develop algorithms for crop type classification, detection of anomalies in crops for early detection of diseases and pests, biomass increase detection and physical damage detection.
The technology will be based on automatic analysis of combinations of different Very High Resolution satellite images, using segmentation methods and machine learning algorithms, in particular deep neural networks, developed in the project.
Early warning systems (EWS) play a fundamental role in food security at the global, regional and national scales. Yet, after more than 45 years of Earth Observation, the use of these data by agencies in charge of global food security remains uneven in its results, and discrepancies in crop condition classification regularly occur (Becker-Reshef et al., 2020). Strengthening the confidence of decision makers and politicians therefore seems more than necessary. Through a survey, Fritz et al. (2019) identified different gaps in methods, highlighting the need to better understand where the input datasets (precipitation and vegetation indices) disagree and the need to develop tools for automated comparison.
This study aims to respond partially to this need by conducting a comparative experiment of a set of vegetation growth anomalies produced by four Early Warning Systems in West Africa for the 2010-2020 period.
We first reviewed the crop monitoring systems of the Early Warning Systems in West Africa (Nakalembe et al., 2021), with a focus on vegetation anomaly indices. Four systems were studied: FEWS-NET (Famine Early Warning Systems Network) developed by USAID (US Agency for International Development), the VAM (Vulnerability Analysis and Mapping) Seasonal Explorer of the WFP (World Food Programme), ASAP (Anomaly Hotspots of Agricultural Production) developed by the JRC (Joint Research Centre) and GIEWS (Global Information and Early Warning System on Food and Agriculture) developed by the FAO (Food and Agriculture Organization of the United Nations). These four systems contribute to the international Crop Monitor for Early Warning (CM4EW), the GEOGLAM component devoted to countries at risk (Becker-Reshef et al., 2020).
Then, a set of NDVI-based vegetation growth anomaly indicators (one per EWS) was selected, harmonized (standardized), classified into nine anomaly classes and compared in time and space. The extreme classes, corresponding to rank percentile values below the 15th and above the 85th percentile over the 2010-2020 period, were labelled “negative alarm” and “positive alarm” respectively; the other classes were grouped under the label “absence of alarm”.
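The classification rule can be sketched per indicator as follows; this is an illustrative reconstruction, since the exact standardization applied to each EWS indicator is not specified in the abstract:

```python
import numpy as np

def standardise(x):
    """Harmonize an indicator to zero mean, unit variance."""
    x = np.asarray(x, float)
    return (x - x.mean()) / x.std()

def alarm_labels(anomaly, low=15, high=85):
    """Label each value by its rank percentile over the record:
    below the 15th percentile -> 'negative' alarm, above the 85th ->
    'positive' alarm, everything else -> 'absence' of alarm."""
    anomaly = np.asarray(anomaly, float)
    lo, hi = np.percentile(anomaly, [low, high])
    return np.where(anomaly < lo, "negative",
                    np.where(anomaly > hi, "positive", "absence"))
```

Agreement between two EWSs can then be measured as the fraction of pixels and dates where their labels coincide.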
This exploratory work revealed that, despite a common satellite image dataset (mainly MODIS NDVI), there are spatio-temporal divergences between the anomaly classes, especially when seasonal variations are considered. Considering the alarm classes (positive, negative, absence), the use of a cropland mask slightly strengthens the annual similarities between the four EWSs and was therefore used in the subsequent comparisons. The pairwise analyses displayed similarities ranging from 52% (FEWS-NET and GIEWS) to 70% (VAM and ASAP), while the four systems together displayed similarities between 24.5% and 33.7%. Over the 2010-2020 period, none of the systems shows a significant trend in the percentage of the negative alarm class, except FEWS-NET (p-value < 0.05).
The spatio-temporal divergences could be explained by the diversity of methods used by the different EWSs for NDVI anomaly calculation (products, smoothing, spatial and temporal resolution). To go further in interpreting these divergences, the next step will be to compare these anomalies with other spatial data sources, such as anomalies of vegetative biomass currently simulated by the AGRHYMET-CIRAD agrometeorological model SARRA-O, or, in the longer term, with textual information extracted from local newspaper articles using natural language processing tools.
To conclude, this exploratory study provides new perspectives on the comparison of EWS anomaly products in West Africa, which remains a challenge in the current environment where more and more products are emerging.
References:
Becker-Reshef, I. et al., "Strengthening agricultural decisions in countries at risk of food insecurity: The GEOGLAM Crop Monitor for Early Warning," Remote Sensing of Environment, vol. 237, 111553, Feb. 2020, doi: 10.1016/j.rse.2019.111553.
Fritz, S. et al., "A comparison of global agricultural monitoring systems and current gaps," Agricultural Systems, vol. 168, pp. 258-272, Jan. 2019, doi: 10.1016/j.agsy.2018.05.010.
Nakalembe, C. et al., "A review of satellite-based global agricultural monitoring systems available for Africa," Global Food Security, vol. 29, 100543, June 2021, doi: 10.1016/j.gfs.2021.100543.
It is well established that, due to a changing climate, global sea level is increasing and that large-scale weather patterns are changing. However, these changes are not geographically uniform and are not steady in time, with short-term variability on a range of time scales (seasonal and inter-annual). It has been shown that, taking into account socio-economic factors, several regions are particularly vulnerable to changes in sea level. At highest risk are coastal zones with dense populations, low elevations, appreciable rates of subsidence and inadequate adaptive capability.
There is a strong imperative to improve awareness of coastal hazards and promote sustainable economic development in marine areas. A key challenge in the implementation of coastal management is the lack of baseline information and the subsequent inability to effectively assess current and future risk.
Access to enhanced regional information on coastal risk factors (sea level, wave and wind extremes) improves planning to protect coastal communities and safeguard economic activity. This information can contribute to increased industrial and commercial competitiveness in the maritime sector, which is heavily dependent on access to accurate, relevant oceanographic information. For port operations, sea level heights and tidal currents are vital for operational efficiency. Wind and wave climatologies are fundamental to infrastructure design and operational planning of offshore activities. Coastal tourism and human settlement are equally affected by these parameters, and therefore sharing skills and enabling access to currently difficult-to-obtain satellite data are significant development steps.
The challenge is to provide access to data on sea level, wind and waves and to support understanding of variation in these key ocean features as they change seasonally, inter-annually and due to climate change. It is important to measure and understand these regional and short-term variations, so that appropriate planning and adaption measures can be implemented. This will enable organisations to better plan operational activities, infrastructure development and the protection of communities, ecosystems and livelihoods.
The Coastal Risk Information Service (C-RISe) project has provided satellite-derived data on sea level, winds, waves and currents to support vulnerable coastal populations in adapting to the consequences of climate variability and change.
The project has enabled institutions in the partner countries of Madagascar, Mozambique and Mauritius to work with the C-RISe products to inform decision-making. It has supported effective uptake of C-RISe data by commercial and operational sectors in the region and contributes to the improved management of coastal regions, enabling these countries to build increased coastal resilience to natural hazards.
• C-RISe has provided data essential for understanding coastal vulnerability to physical oceanographic hazards not otherwise available to partners, due to the lack of tide gauges in the region and the expertise required to process the satellite data.
• Software and training materials enable partners to validate and analyse these data in ways that are relevant to their specific needs and activities.
• Capacity building increases the understanding of the value of these data in addressing coastal risk. It also increases the number of organisations and individuals capable of working with satellite data, and facilitates work towards the application of data within Use Cases.
• The Use Cases have facilitated operational uptake by the partners, integrating C-RISe data into their work streams and providing examples for dissemination and training.
The project offers several opportunities to expand, including increasing the range of data and information provided; increased geographical coverage; and a wider capacity building remit. In building local capacity and focusing on the development of Use Cases in line with our partners’ needs, C-RISe has demonstrated the vast range of issues that these data can be used to understand and address.
This presentation will introduce the project, summarise key findings, and present results from the Use Cases. It will also present recommendations for further capacity building in regions with similar challenges and levels of resource.
C-RISe was funded by the UK Space Agency under the International Partnership Programme, which was designed to partner UK space expertise with overseas governments and organisations. It was funded by the Department for Business, Energy and Industrial Strategy's Global Challenges Research Fund (GCRF).
SOLSTICE (Sustainable Oceans, Livelihoods and Food Security Through Increased Capacity in Ecosystem research in the Western Indian Ocean) is a four-year international collaborative project that aims to strengthen capacity in the Western Indian Ocean (WIO) region to address challenges of food security and sustainable livelihoods for coastal communities, where millions of people are dependent on small-scale (subsistence and artisanal) fisheries. This presentation will introduce two related SOLSTICE studies that are based on satellite observations concerned with identifying upwelling and ocean fronts in the WIO, with the eventual aim of producing potential value-added products.
The study region in the WIO is heavily influenced by the monsoon seasons, with distinct phases from December to February (Northeast Monsoon) and May through September (Southwest Monsoon). These monsoon seasons drive changes in ocean physics, and hence biogeochemistry, through current- and wind-driven mechanisms, resulting in changes in current direction as well as seasonal upwelling.
The first study is concerned with developing a data driven algorithm for identification and classification of the seasonal Somali upwelling. The methodology uses remotely sensed daily chlorophyll-a and sea surface temperature (SST) data sourced from GlobColour (https://hermes.acri.fr/) and OSTIA (https://ghrsst-pp.metoffice.gov.uk/ostia-website/index.html), respectively. To detect upwelling areas, an unsupervised machine learning (K-means clustering) approach is used, which successfully delineates upwelling core, upwelling surrounds, as well as non-upwelling ocean regions. The technique is shown to be robust with accurate classification of unseen data. Once upwelling regions have been identified, classification of extreme upwelling events was performed using confidence intervals derived from historical data. The combination of these two approaches provides the foundation for a near real time upwelling information system.
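As an illustration of the clustering step, here is a minimal K-means sketch on synthetic SST/chlorophyll pixel features. The actual study uses daily GlobColour and OSTIA fields; the features, units, value ranges and choice of k=2 below are assumptions for the toy example:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal Lloyd's K-means: returns cluster labels and centroids."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)]  # init from random pixels
    for _ in range(iters):
        # assign each pixel to its nearest centroid
        lab = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2).argmin(axis=1)
        # update centroids, keeping the old one if a cluster empties
        C = np.array([X[lab == j].mean(axis=0) if np.any(lab == j) else C[j]
                      for j in range(k)])
    return lab, C

# Synthetic pixel features: (SST in degC, log10 chlorophyll-a).
rng = np.random.default_rng(1)
core  = rng.normal([22.0,  0.5], 0.3, (100, 2))   # cold, productive upwelling core
ocean = rng.normal([28.0, -1.0], 0.3, (100, 2))   # warm, oligotrophic open ocean
X = np.vstack([core, ocean])
labels, centroids = kmeans(X, k=2)
```

In practice one would standardize the features first and use more clusters (e.g. core, surrounds, non-upwelling), but the assignment/update loop is the same.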
There is a wide variety of algorithms for ocean front detection based on different ocean variables. Frontal zones are important for many fisheries and can be used to target locations to maximise catch. As part of the SOLSTICE programme, we describe a simple algorithm for detecting regions associated with large ocean fronts from satellite SST (OSTIA) and apply the same approach to outputs from a numerical ocean model (NEMO). This approach has proved capable of readily identifying the main oceanic frontal zones and their location and variability throughout the year.
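A gradient-based front detector of the kind described can be sketched as follows; the threshold value and the synthetic step-like SST field are illustrative assumptions, not the algorithm's actual configuration:

```python
import numpy as np

def front_mask(sst, thresh):
    """Flag pixels whose SST gradient magnitude exceeds thresh (degC per pixel)."""
    gy, gx = np.gradient(sst)          # per-axis finite-difference gradients
    return np.hypot(gx, gy) > thresh   # gradient magnitude threshold

# Synthetic SST field with a sharp zonal front between rows 9 and 10:
# 28 degC in the northern half, 24 degC in the southern half.
sst = np.where(np.arange(20)[:, None] < 10, 28.0, 24.0) * np.ones((20, 20))
mask = front_mask(sst, thresh=0.5)     # True only along the front
```

On real OSTIA fields the same mask, possibly smoothed and with a climatology-based threshold, isolates the persistent large-scale frontal zones.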
Both strands of work have shown promise within their respective regions with the possibility of further application within and beyond the WIO. These identification methods have the potential for aiding fisheries management as well as providing broader scientific insights into WIO physical and biological processes.
The Special Priority Program (SPP-1889) 'Regional Sea Level Change and Society - SeaLevel' (2016-2022), funded by the German Research Foundation (DFG), performs a comprehensive, interdisciplinary analysis to advance our knowledge on regional sea level change (SLC), while accounting for the human-environment interactions and socio-economic developments in the coastal zone. During its second funding phase (2019-2022), SeaLevel consists of 15 projects, bringing together over 65 natural and social scientists from 23 German research institutions and a wide range of disciplines, such as physical oceanography, geophysics, geodesy, hydrology, marine geology, coastal engineering, geography, sociology, economics and environmental management. By combining diverse modern methodologies, observations and models, natural and social scientists jointly aim to create a scientific knowledge base for quantitative, integrated coastal zone management, which is applicable to many endangered places globally and essential for safety, coastal/land use planning, and economic development.
The SeaLevel program focuses on the North and Baltic Seas with potential impacts on Germany, and the South-East Asia/Indonesia region, encompassing coastal megacities, low-lying islands and delta regions, in order to understand how coastal vulnerability, adaptation and response strategies towards SLC vary in distinctly different socio-politico-economic and cultural contexts.
The main research activities of SeaLevel are: a) contributions to global and regional sea level changes, b) regional biophysical and social impacts in Northern Europe and S.E. Asia/Indonesia, and c) adaptation, decision analysis and governance. These research objectives include improving the physical knowledge of SLC and regional-to-local scale projections, investigating which socio-institutional factors enable or hinder coastal societies in coping with SLC, determining the responses of natural and social coastal systems to future SLC, and assessing adaptation and risk governance strategies under given technical, cultural and socio-politico-economic constraints. Such integrated analyses require SLC information (local SL projections, storm surges, waves and extremes) and uncertainty and risk measures to be provided at the coastlines.
In this presentation, we will describe the goals and status of the SeaLevel program, while overviewing particularly recent results from different SeaLevel natural and social science studies in the coastal zone, which benefit from the usage of remote sensing observations and in synergy with models and other observations.
Mangroves are highly productive tropical and subtropical ecosystems at the interface between land and sea. They provide (i) important ecosystem services to coastal communities and (ii) habitat for birds, fish, crustaceans and other invertebrates; their root systems are particularly attractive to juvenile fish seeking shelter from predators. Mangroves also sequester carbon in the soil, reduce coastal erosion and attenuate waves, providing valuable protection against the effects of climate change. However, globally, the extent of mangroves continues to decline, mainly due to human population growth associated with coastal development and global environmental changes.
Since the mid-2000s, there has been an increased awareness of the services rendered by mangroves. In addition to being recognized by multilateral environmental protection agreements (CITES, Ramsar, etc.), mangroves have been the focus of many international research and conservation programs.
Remote sensing has been widely proven to be an essential tool for monitoring and mapping highly threatened, often difficult-to-access mangrove ecosystems. It provides important information for habitat inventories, detection and monitoring of changes, support for ecosystem assessment (biomass and regeneration capacity), monitoring and management of natural resources, as well as ecological and biological functions. The comprehension of mangrove ecosystems benefits greatly from the current context of Earth observation, which offers a multiplication of sensors providing data of different nature (optical and SAR), resolutions (spatial, temporal and spectral), and an open data and open source environment in constant growth.
In this context, we propose a new methodological framework dedicated to the monitoring of mangrove dynamics based on remote sensing. It aims to propose a standardized approach for data processing in a generic framework. Ready-to-use remote sensing products will be provided through future online web services intended for local stakeholders, policy makers and actors involved in mangrove dynamics monitoring and conservation.
This framework integrates multi-sensor remote sensing data, including Landsat, Sentinel, SPOT, Pleiades, PlanetScope, ALOS PALSAR and GEDI, combined with in-situ measurements. IRD's long experience and numerous ongoing research projects also make it possible to provide a wide range of remote sensing products.
The site of Bombetoka bay in Madagascar was chosen to develop this methodological framework, based on multi-sensor remote sensing data and in-situ measurements. We combine unsupervised and supervised algorithms as described in figure 1.
Figure 1: the methodological framework combining unsupervised and supervised processes for the monitoring of mangrove from High (HR) and very high (VHR) resolution remote sensing data.
An unsupervised texture-based approach (FOTOTEX, https://framagit.org/espace-dev/fototex) is used to identify mangrove units using VHR data (SPOT, Pléiades). In these units, we combine VHR and HR data to extract descriptors at the unit scale.
More specifically, at high resolution:
• Landsat time series are used to characterize the distribution and long-term evolution of mangroves in relation to sedimentary dynamics over time and space;
• NDVI time series from Sentinel-2 provide up-to-date mangrove distribution maps and indicators of the evolution of mangrove cover due to natural and anthropic processes, in order to assess mangrove gain/loss over defined areas;
• Sentinel-1 time series provide insights into the hydrodynamics of the bay (maximum water extent, water permanency), which is strongly related to mangrove evolution;
and, at very high resolution, Pleiades and SPOT 6/7 images allow the extraction of mangrove features (texture, density, fragmentation…). The latter are combined with other variables, such as AGB and canopy height (from GEDI), to characterize mangrove types (as landscape units) at fine scale.
Then, in IOTA 2 (https://www.theia-land.fr/product/iota-2/), these descriptors are used as training datasets in a random forest algorithm dedicated to the mapping of mangrove units from Sentinel-1 and Sentinel-2 time series.
A field campaign will be used to determine which variables and features are the most effective for discriminating mangrove units in Bombetoka bay, in order to (i) assess the accuracy and reliability of the method and (ii) adapt the methodological framework if necessary.
This approach is designed for reproducibility and genericity in order to favour (i) the updates of maps and indicators, (ii) the deployment of the method on other mangrove sites, (iii) the availability of standardized products based on remote sensing for mangrove description, monitoring and conservation. A specific on-going effort will result in a web interface that will offer in the near future innovative services for the community of users involved in mangrove monitoring.
Themes: Mangrove forest monitoring – remote sensing – Coastal areas - conservation
The ongoing climate change and the resulting pressures have created the need to assess the degree of sustainability of the coastal zone, especially in populated areas. City beaches, beyond their remarkable importance for citizens' well-being, constitute a large economic sector, which is particularly evident in small-scale studies. At the same time, the beach forms a natural barrier between the sea and the mainland, absorbing high hydrodynamic pressures and directly protecting the urban coastal zone from sea flooding. By articulating the anthropogenic and environmental pressures that urban beaches receive, while assessing the economic impacts on the local community, one understands the delicate balance in which these dynamic systems exist. Coastal towns with sandy "pocket beaches" are very popular tourist destinations worldwide, especially in Greece.
With the rapid urban sprawl of coastal cities and the underrated dangers posed by sea level rise (SLR) due to climate change, the need to study the sustainability of urban beaches has arisen. Taking Santorini Island as a pilot area, an attempt was made to compile a protocol for assessing the sustainability of beaches, with an emphasis on urban beaches. By collecting physical and socioeconomic data from the entire coastal zone of Santorini, using remote sensing data and field measurements, two vulnerability indicators (physical and social) were applied in order to visualize the vulnerability of all beaches of Santorini. Based on each beach's vulnerability rating, possible flooding zones of urban areas are studied under three climate change scenarios (RCP4.5, RCP6.0, RCP8.5). Moreover, by adding the facilities and infrastructure in the flood risk zone assessment, the potential economic loss is calculated.
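The abstract does not specify how the physical vulnerability indicator is computed; one widely used formulation for such composites is the USGS-style Coastal Vulnerability Index, the square root of the product of the ranked variables divided by their number. A sketch under that assumption (the variable list and ranks below are hypothetical):

```python
import numpy as np

def cvi(ranks):
    """Coastal Vulnerability Index (Gornitz/USGS-style): sqrt of the
    product of the ranked variables divided by their number."""
    r = np.asarray(ranks, dtype=float)
    return np.sqrt(r.prod() / r.size)

# Hypothetical beach ranked 1 (low) to 5 (high) on six variables, e.g.
# geomorphology, shoreline change, slope, SLR rate, wave height, tide range.
print(round(cvi([4, 3, 5, 4, 2, 1]), 2))  # → 8.94
```

Beaches can then be binned into vulnerability classes by quartiles of the index across the study area.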
To expand the use of the evaluation protocol, an open-access web service has been created. Through existing data and the implementation of hydromorphological models in the pilot area, the user can visualize the spatial zone of flood risk due to sea surges and climate change and estimate the resulting economic damage. The protocol includes indices for estimating the current sustainability state of the beach, using the tourism carrying capacity index as well as a fused physical and socio-economic carrying capacity index. Furthermore, users will be given the option to calculate the tourism carrying capacity of any beach of their choice, entering their own data and exporting the results through the platform.
The platform aims to inform the public about the pressures on the coastal zone and the potential risks that the inhabitants of coastal areas may face. The introduction of social vulnerability will highlight the anthropogenic pressures on the coastal zone due to the beach's exploitation. The service aims to be used as a supplementary tool in coastal zone management, as well as to inform and raise public awareness of the dangers posed by the exploitation of urban beaches and the effects of climate change.
The lack of monitoring and study of anthropogenic pressures in the coastal environment can create the impression that these dynamic systems are endangered only by the existing pressures of climate change. Through this service, users will be informed about all the risks of an urban coastal environment, studied through the three aspects of sustainability: economic, environmental and social. The aim of this platform is to raise public awareness of climate change and sustainable exploitation through user-friendly tools. The study of the vulnerability of Santorini's coastline can be an auxiliary tool for decision-making by experts and management authorities.
Vietnam is one of the countries most affected by climate change. The study region in the Central Highlands is already vulnerable to extreme weather events such as those caused by the El Niño climate pattern. El Niño events typically occur every two to seven years and often result in severe droughts during the dry season in Vietnam's Central Highlands, a region with a tropical savannah climate. The droughts have significant impacts on agricultural production, the economic and socioeconomic sector, and the environment. The last severe event took place in 2015/2016. The effects of anthropogenic climate change could exacerbate this situation. The Central Highlands is one of the most important agricultural regions in Vietnam, growing coffee, rubber, pepper, cashew nuts, vegetables, and fruits, all of which are in high demand worldwide and have enormous export value. In this study, Earth observation time series are analyzed to examine the condition and development of vegetation in Dak Lak and Dak Nong provinces. Current land use is mapped with regional adjustment using Sentinel-1/2 time series to obtain details not available in existing information products, e.g., the separation of cash crops such as coffee and rubber, which is of interest for subsequent land use type-specific evaluation. Further analyses focus on the spatial and temporal development of vegetation in the context of the recent 2015/2016 El Niño event and its possible consequences. Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI) time series are investigated for the annual dry season, December through March, from 2000 to 2020. The EVI is analyzed to determine monthly vegetation condition and its deviation from the 20-year mean, in order to provide long-term insights into the effects of drought on vegetation over two decades, as well as to distinguish normal from dry years and to identify other extreme events.
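The deviation-from-the-long-term-mean analysis can be sketched as follows, on synthetic dry-season EVI values. The real study uses MODIS EVI for December-March 2000-2020; the array shapes, noise levels and the z-score threshold for flagging dry years are assumptions of this toy example:

```python
import numpy as np

# evi[year, month]: mean EVI for each dry-season month (Dec-Mar), synthetic.
rng = np.random.default_rng(42)
evi = 0.45 + 0.02 * rng.standard_normal((20, 4))
evi[15] -= 0.10  # an El Nino-like drought year depresses vegetation

clim = evi.mean(axis=0)        # 20-year monthly mean (climatology)
anom = evi - clim              # deviation from the long-term mean
z = anom / evi.std(axis=0)     # standardized anomaly per month
# flag years whose dry-season mean anomaly is strongly negative
dry_years = np.where(z.mean(axis=1) < -1.0)[0]
```

Applied per pixel, the same anomaly cube separates normal from dry years and localizes where drought impacts on vegetation were strongest.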
Since 1984, recurrent sandstorms have affected the southern and southwestern provinces of Iran. They reached their peak in 2008 and 2009, creating many problems for the people of the southern, southwestern and western provinces and reaching as far as Tehran. Dust concentrations in Khuzestan province have been reported at up to 3200 micrograms per cubic meter, more than 14 times the permissible limit.
Studies identify three main dust source areas: the Mesopotamian region, comprising 13 source zones in Iraq, southern Iran, southern Syria and northern Saudi Arabia; the southwest Asia region, comprising 10 source zones on the central plateau of Iran; and the Red Sea region, comprising 13 source zones in Egypt, northeastern Sudan and northwestern Saudi Arabia.
Studies show that the main cause of dust in these areas is climate change, with several contributing factors: prolonged droughts, whose greatest effect is to reduce rainfall; global warming resulting from greenhouse gases and fossil fuels; and improper use of natural resources, which has destroyed vegetation and created sources of sand and dust that should be addressed through a management plan. Step-by-step methods as well as tracking methods were examined and, according to environmental conditions, control methods were proposed to prevent the formation of sandstorms and dust.
Since Spain's accession to the European Union in 1986, major land use transformations have been observed, which were often driven by European, national and sub-national policies. At the same time, large areas of Spain are part of the dryland ecoregion, which are particularly sensitive to ecosystem degradation and affected by climate variability and long-term changes. The good availability of data as well as past and ongoing research makes Spain an interesting case study not only to observe land transformations in the context of political, socioeconomic, and climatic conditions, but also to understand their influence on spatial patterns of change.
Hill et al. (2008) and Stellmes et al. (2013) analyzed dominant land cover and land use transformations in Spain between 1989 and 2005 using NOAA AVHRR time series (MEDOKADS; Koslowsky, 1996). For this purpose, several simple land surface phenology metrics, i.e., mean NDVI of the growing season, annual amplitude, and timing of maximum NDVI, were derived and linear trends were calculated. In addition, land use information (Corine Land Cover), precipitation time series, and population trends were used to consider the drivers of land cover change. Key observed processes included land abandonment in rural marginal areas and concurrent urbanization trends, as well as an increase in land use intensity associated with the exploitation of water resources.
With the studies ending in 2005, there is now an opportunity to extend the research over a longer period and analyze which processes have been dominant over the past 15 years. The spatial resolution of 1 km × 1 km constrained the previous studies because changes within a pixel are only detectable when the magnitude and proportion of specific land use changes are large enough to alter the NDVI signal significantly. Mediterranean ecosystems in particular often have fine-scale heterogeneity that cannot be resolved with such coarse resolution data (Stellmes et al., 2010). However, these changes may be highly relevant, particularly for local/regional land management and for understanding individual actors' decisions (Lambin and Geist, 2006). MODIS data can only provide limited improvement in this regard, as its 250 m × 250 m resolution is still quite coarse.
Due to advances in the free availability and pre-processing of Landsat imagery, it is now possible to generate higher temporal resolution time series for many areas of the globe, allowing us to draw conclusions about land surface phenology metrics and their changes even at finer spatial resolution. The objective of this study is to investigate the extent to which Landsat time series can be used to characterize land use change similarly to the MEDOKADS study. For this purpose, we used the entire available Landsat data archive since 1986. The data were preprocessed using FORCE (Frantz, 2019) to ensure data consistency in space and time. Based on the resulting time series, we derived land surface phenology metrics and their trends. As a first step, we compared the Landsat-based trend analyses with the MEDOKADS data for the period 1989 to 2005. For the mean NDVI as well as the amplitude, the Landsat data generally show comparable trends, but with a much finer spatial structure. The quality of the timing of the maximum NDVI, on the other hand, is strongly dependent on the temporal density of Landsat images within each year and varies significantly. Therefore, a more robust measure is needed, e.g. the season in which the maximum NDVI occurs. In a further step, we will analyze the entire Landsat time series from 1986 to 2021 and present first results.
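The per-pixel linear trend calculation behind such analyses can be sketched as a vectorized least-squares slope over the time axis; the synthetic NDVI cube below stands in for the FORCE-preprocessed Landsat metrics, and the trend magnitudes are illustrative:

```python
import numpy as np

years = np.arange(1986, 2022)
# ndvi[time, y, x]: synthetic mean growing-season NDVI, with a greening
# trend in the left half of the scene and no trend in the right half.
rng = np.random.default_rng(7)
ndvi = 0.4 + 0.01 * rng.standard_normal((years.size, 10, 10))
ndvi[:, :, :5] += 0.004 * (years - years[0])[:, None, None]

# Ordinary least-squares slope per pixel (NDVI units per year), vectorized:
t = years - years.mean()
slope = (t[:, None, None] * (ndvi - ndvi.mean(axis=0))).sum(axis=0) / (t ** 2).sum()
```

The same one-pass formula applies to the other phenology metrics (amplitude, timing), and a significance mask can be added from the residual variance.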
References:
Frantz, D. (2019): FORCE – Landsat + Sentinel-2 Analysis Ready Data and beyond: Remote Sensing 11, 1124. http://doi.org/10.3390/rs11091124.
Hill, J., Stellmes, M., Udelhoven, T., Röder, A., & Sommer, S. (2008): Mediterranean desertification and land degradation: Mapping related land use change syndromes based on satellite observations. Global and Planetary Change, 64, 146-157.
Koslowsky, D. (1996): Mehrjährige validierte und homogenisierte Reihen des Reflexionsgrades und des Vegetationsindexes von Landoberflächen aus täglichen AVHRR-Daten hoher Auflösung. Institute for Meteorology, Freie Universität Berlin, Berlin.
Lambin, E.F., Geist, H.J., (2006): Land Use and Land Cover Change. Local Processes and Global Impacts. Springer Verlag, Berlin, Heidelberg, New York.
Stellmes, M., Udelhoven, T., Röder, A., Sonnenschein, R. and Hill, J. (2010): Dryland observation at local and regional scale - comparison of Landsat TM/ETM+ and NOAA AVHRR time series. Remote Sensing of Environment, 114 (10), 2111–2125, doi:10.1016/j.rse.2010.04.016.
Stellmes, M., Röder, A., Udelhoven, T. & Hill, J. (2013): Mapping syndromes of land change in Spain with remote sensing time series, demographic and climatic data. Land Use Policy, 30, 685-702.
During the last week of October 2021, an intense Mediterranean hurricane (medicane), named Apollo by the Eumetnet storm naming project, affected many countries on the Mediterranean coasts. The death toll reached seven, due to flooding from the cyclone in Tunisia, Algeria, Malta, and Italy.
The Apollo medicane persisted over such areas for about one week (24 October – 1 November 2021) and produced very intense rainfall phenomena and widespread flash-flood and flood episodes especially over eastern Sicily on 25-26 October 2021.
CIMA Foundation hydro-meteorological forecasting chain, including the cloud-resolving WRF model assimilating radar data and in situ weather stations (WRF-3DVAR), the fully distributed hydrological model Continuum, the automatic system for water detection (AUTOWADE), and the hydraulic model TELEMAC-2D, has been operated in real-time to predict the weather evolution and the corresponding hydrological and hydraulic impacts of the Medicane Apollo, in support of the Italian Civil Protection Department early warning activities and in the framework of the H2020 LEXIS and E-SHAPE projects.
This work critically reviews the forecasting performances of each model involved in the CIMA hydro-meteorological chain, with special focus on temporal scales ranging from very short-range (up to 6 hours ahead) to short-range forecasts (up to 48 hours ahead).
The WRF-3DVAR model showed very good predictive capability concerning the timing and location of the most intense rainfall over the Catania and Siracusa provinces in Sicily, thus also enabling very accurate predictions of discharge peaks and their timing for the creek hydrological network typical of eastern Sicily. Based on the WRF-3DVAR predictions, the daily run of the AUTOWADE tool, using Sentinel-1 (S1) data, was brought forward with respect to its schedule in order to quickly produce a flood map (S1 acquisition performed on 25 October 2021 at 5:00 UTC, flood map produced on the same day at 13:00 UTC). Moreover, since no S1 images of eastern Sicily were available during the period 26-30 October 2021, ad hoc tasking of the COSMO-SkyMed satellite constellation was performed, again based on the WRF-3DVAR predictions, to overcome the S1 data latency. The resulting automated operational mapping of floods and inland waters was integrated with the subsequent execution of the hydraulic model TELEMAC. The Medicane Apollo case study paves the way for future similar applications in Mediterranean areas, where intense rainfall processes are expected to become more frequent in light of ongoing climate change.
Climate change is intensifying the water cycle, bringing more intense precipitation and flooding in some regions, as well as longer and stronger droughts in others. The number of short-term and highly localized phenomena, such as thunderstorms, hailstorms, wind gusts or tornadoes, is expected to grow further in the coming years, with important repercussions in air traffic management activities (ATM). One of the challenges for meteorologists is to improve the location and timing of such events that develop on small spatial and temporal scales. In this regard, the H2020 Satellite-borne and IN-situ Observations to Predict The Initiation of Convection for ATM (SINOPTICA) project aims to demonstrate that numerical weather forecasts with high spatial and temporal resolution, benefiting from the assimilation of radar data, in situ weather stations, GNSS and lightning data, could improve the prediction of severe weather events for the benefit of air traffic control (ATC) and air traffic management (ATM).
As part of the project, three severe weather events were identified on Italian territory that resulted in airport closures, with heavy delays on arrivals and departures as well as numerous diversions. The data from the numerical simulations, carried out with the Weather Research and Forecasting (WRF) model and the 3D-VAR assimilation technique, will be integrated into the Arrival Manager, an air traffic control and management system. The Arrival Manager generates and optimizes 4D trajectories avoiding areas affected by adverse phenomena, with the objectives of increasing flight safety and predictability and reducing controllers' and pilots' workload. In addition to the numerical simulations, a nowcasting technique called PHAse-diffusion model for STochastic nowcasting (PhaSt) has been investigated to further improve ATC support systems for highly localized convective events. This work presents the results of the WRF and PhaSt experiments for the Milan Malpensa case study of 11 May 2019, demonstrating that it is possible to improve the prediction of such events in line with expectations and ATM needs.
Funded by the European Commission, the H2020 EuroSea project has the objective of improving the European ocean observing system as an integrated entity within a global context, delivering ocean observations and forecasts to advance scientific knowledge about the ocean climate, marine ecosystems, and their vulnerability to human impacts, and to demonstrate the importance of the ocean to an economically viable and healthy society. In the framework of this project, our goal is to improve the design of multi-platform in situ experiments for validation of high-resolution SWOT observations, with the aim of optimizing the utility of these observing platforms. To achieve this goal, a set of Observing System Simulation Experiments (OSSEs) is developed to evaluate different sampling strategies and their impact on the reconstruction of fine-scale sea surface height fields and currents. Observations from CTD, ADCP, gliders, and altimetry are simulated from three nature-run models to study the sensitivity of the results to the model used. Different sampling strategies are evaluated to analyse the impact of the spatial and temporal resolution of the observations, the depth of the measurements, the season of the multi-platform experiment, as well as the impact of replacing rosette CTD casts with a continuous underway CTD or gliders. The reconstructed fields are obtained by applying the classic optimal interpolation algorithm to the different configurations of the simulated observations. In addition, other reconstruction methods based on (i) machine-learning techniques, (ii) data assimilation into models and (iii) the MIOST tool are tested. The analysis focuses on the western Mediterranean Sea, in a region located within a SWOT swath during the fast-sampling phase.
The MedRIN (Mediterranean Regional Information Network), established in 2018, is a network to share developments and further Earth Observation (EO) scientific collaboration amongst European, North African, Levant, and American colleagues. The MedRIN operates within the framework of the Global Observation of Forest and Land Cover Dynamics programme (GOFC-GOLD; https://start.org/programs/gofc-gold/) and serves as a liaison between land-cover/land-use change remote sensing scientists and stakeholders in the Mediterranean region. MedRIN keeps its members well informed of the latest advancements in Earth Observation applications based on NASA and ESA satellite data and data products. Furthermore, MedRIN aims to support tackling regional and local challenges, as described by the United Nations Sustainable Development Goals (SDGs). The objectives of the MedRIN network are based on the priority topics of the Mediterranean region and the neighboring countries: 1) urban and built-up areas (the wildland-urban interface, population dynamics and how they affect the landscape); 2) rural areas / agriculture, forestry and wildlands (monitoring dynamic landscape changes); 3) hazards (fires, including agricultural fires, earthquakes, floods, etc.); 4) soil and water resources management (irrigation/hydrology, soil degradation, desertification); 5) climate change; 6) education/training as a major component of all proposed priorities (the TAT NASA-ESA model) and state-of-the-art techniques (artificial intelligence).
In accordance with the GOFC-GOLD family of networks the following MedRIN objectives have been established: a) Better coordination and linkage of monitoring systems and databases across the Mediterranean community member countries; b) Strengthening and upgrading regional/national EO networks; c) Alignment of multi-modal and multi-source data compliant to international norms; d) Utilization of Copernicus and relevant freely distributed services in the region by end users; e) Contribution to free publicly-available data through interoperable databases and services.
The additional benefit for the Mediterranean region will be the synergies emerging from these collaborative efforts. The MedRIN is accessible to any entity and individual in the region for peaceful purposes and is expected to produce results and services for the well-being of citizens and the sustainable use of resources throughout the region. Existing networks and collaborations are leveraged, while cooperation across disciplines and across levels of decision-making and implementation throughout the stakeholder spectrum is supported. The network will also help support joint participation in projects and proposals and strive to develop collaborative structures to enable them. The MedRIN hosts annual meetings and workshops, welcoming Mediterranean researchers to share their scientific developments, mature relations with colleagues in the region, and provide training to young scientists and community members on various EO topical areas, particularly land cover change dynamics. The MedRIN also participates in joint meetings and workshops with other regional networks, such as the South Central European Regional Information Network (SCERIN), where common themes and issues are discussed and collaborations established. MedRIN aims to keep its members abreast of the latest advancements in Earth observation applications based on NASA and ESA satellite data and data products, and it includes training and capacity building as major components of all its activities. The MedRIN coordinators are collaborating to develop a regional “inter-institutional” programme which would enable Master and PhD students working on MedRIN issues to transfer between different institutions in the network. The network has also facilitated the participation of young scientists from the MedRIN region in a previous NASA solicitation for collaboration on land-use/land-cover change issues.
This presentation will further describe the MedRIN network, its outreach capabilities and priorities and future plans for network functions and planned events in 2022 and beyond.
The EXCELSIOR project (www.excelsior2020.eu) has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No 857510 and from the Government of the Republic of Cyprus through the Directorate General for the European Programmes, Coordination and Development. The EXCELSIOR project aims to upgrade the existing Remote Sensing and Geo-Environment Lab established within the Cyprus University of Technology into a sustainable, viable and autonomous ERATOSTHENES Centre of Excellence (ECoE) for Earth Surveillance and Space-Based Monitoring of the Environment. The ECoE will provide the highest quality of related services at the national, European and international levels through the ‘EXCELSIOR’ project under H2020 WIDESPREAD TEAMING. The vision of the ECoE is to become a world-class Digital Innovation Hub (DIH) for Earth observation and geospatial information, becoming the reference centre in the Eastern Mediterranean, Middle East and North Africa (EMMENA) within the next 7 years. There are distinct needs and opportunities that motivate the establishment of an Earth Observation Centre of Excellence in Cyprus. These are primarily related to the geostrategic location of Cyprus for solving complex scientific problems and addressing concrete user needs in the EMMENA region as well as South-East Europe. The ECoE has the potential to become a catalyst for facilitating and enabling international cooperation in EMMENA. The starting point for fostering regional scientific collaboration and exploiting untapped market opportunities in EMMENA is a set of EO and RS networks, including GEO, MedRIN, NASA, NEREUS and GEO-CRADLE, as well as the network chains of the ECoE partners TROPOS, NOA and DLR.
The ECoE will have the following flagship equipment:
• Satellite data direct receiving station: The ECoE, in cooperation with DLR, will establish an EO Satellite Data Acquisition Station (DAS) able to directly receive data from EO satellite missions, allowing Near Real Time (NRT) monitoring and thereby providing time-critical information for science and products within the receiving cone of the station, namely over the EMMENA region. The ECoE will acquire and process data in direct pass-through mode over the Eastern Mediterranean area. Cyprus offers a unique location for this antenna, as it will be the farthest south-eastern location within the European Union, thus providing extended coverage compared to other European antenna locations, including a wide range of data from Eastern Europe, Northern Africa and the Middle East. This includes critical real-time maritime surveillance areas such as the Eastern Mediterranean, the Black Sea, the Caspian Sea, the Persian Gulf and the Red Sea.
• Ground-based atmospheric remote sensing station (supersite for aerosol and cloud monitoring): The ECoE, in cooperation with TROPOS, will establish a ground-based atmospheric remote sensing station (GBS) by consolidating all necessary infrastructure to set up a supersite for calibration/ validation, aerosol and cloud monitoring. The instruments to be installed within the GBS include a Cloudnet Station with SLDR-Cloud Radar, Microwave Radiometer, Doppler lidar and Laser Disdrometer, a PollyXT lidar provided by TROPOS, as well as auxiliary instruments for the Cloudnet station, including Ceilometer, SAEMS/DOAS and Cloud scanner.
The ERATOSTHENES Centre of Excellence stakeholders hub can be used to engage more collaborations in the EMMENA region for the benefit of the citizens and the region. Indeed, this presentation shows how the EXCELSIOR TEAMING project and the ECoE strategy can strengthen cooperation on Earth observation in the EMMENA region in areas such as climate change, disaster risk reduction, water resources management, data analytics, and energy.
Blue Economy encompasses the sectors and activities related to oceans, seas and coasts, such as fisheries, energy, aquaculture, natural resources, logistics, safety and security, transport, port activities, tourism, and shipbuilding and repairs. Europe is one of the leading maritime powers in the world: in 2018, the EU Blue Economy generated €750 billion in turnover and €218 billion in gross value added, and directly employed about 5 million people. Satellite applications bring added value in creating innovative and sustainable growth paths for many industries, including the maritime domain. In this context, satellite technology provides marine operators with reliable real-time information while ensuring coverage of vast and otherwise unreachable areas. At the regional level, growing attention is dedicated to the Mediterranean area, currently threatened by multiple challenges: biodiversity protection, increased human activity due to overtourism, and disaster management.
Satellite data provide a plethora of reliable and easy-to-use solutions for aquaculture, fisheries, algal bloom monitoring, safety and security, and coastal development, to name a few. Nevertheless, the uptake of satellite-based solutions in the region is far from complete. Scepticism persists on the end-users’ side due to a series of factors: a lack of clear communication with service providers; a poor understanding of the benefits of integrating satellite-based solutions into their workflows; financial constraints; and a lack of knowledge and competencies for implementing and using satellite-based services efficiently.
Recently, Eurisy launched the Space4Maritime initiative. Its objective is to identify and understand the needs of European maritime end-user communities, facilitating dialogue with the space industry and the uptake of satellite services. In this frame, Eurisy started a series of interviews with end-users mostly located in the Mediterranean region. The overall objective is to identify the existing operational solutions applicable in the area through examples of practical uses of EO, as well as the bottlenecks that hold back the potential of satellite applications for the sustainable growth of the Blue Economy. The paper will mainly address public authorities, providing them with a set of recommendations on how to foster cooperation with maritime operators. Lastly, the paper also targets potential new end-users interested in integrating satellite solutions into their workflows.
The project
Soil sealing – also called imperviousness – is defined as a change in the nature of the soil leading to its impermeability. Soil sealing has several impacts on the environment, especially in urban areas, where it affects the local climate by influencing heat exchange and soil permeability. Monitoring soil sealing is crucial for the Mediterranean coastal areas, where soil degradation combined with drought and fires contributes to desertification.
Some artificial features, such as buildings, paved roads, paved parking lots, and other artifacts, can be considered long-lasting. These land cover types are generally referred to as permanent soil sealing, because the probability of returning to natural use is low. Other land cover features included in the definition of soil sealing can be considered reversible: for them, the probability of returning to natural use is higher. The land cover classes included in reversible soil sealing have been defined with the users of the project and include solar panels, construction sites in an early stage, mines and quarries, and long-term plastic-covered soil in agricultural areas (e.g., non-paved greenhouses).
The Mediterranean Soil Sealing project, promoted by the European Space Agency (ESA) in the frame of the EO Science for Society – Mediterranean Regional Initiative, aims to provide specific products on soil sealing, its degree, and reversible soil sealing over the Mediterranean coastal areas, by exploiting EO data with an innovative methodology capable of optimising and scaling up their use together with other, non-EO data. These products are designed to allow – with respect to current practices and existing services – a better characterisation, quantification and monitoring over time of soil sealing in the Mediterranean basin, supporting users and stakeholders involved in monitoring and preventing land degradation. The project started in March 2021, will produce its first results in March 2022 and its final products in March 2023.
The targeted products are high-resolution maps of the degree of soil sealing and the reversible soil sealing over the Mediterranean coastal areas (within 20km from the coast) for the 2015-2020 time period, at yearly temporal resolution with a targeted spatial resolution of 10m.
Stakeholders, product exploitation and geoanalytics indicators
The involvement of stakeholders and end-users is an essential element of the project, as stated by ESA in the call for proposals. Since the early stage of the proposal, efforts have been made to reach a diversity of users and stakeholders; the presence of ISPRA in the consortium is a plus for the project in this sense.
We group the users into classes: municipalities; sub-national agencies or local governmental institutions; national institutions and research centres; regional institutions (e.g., the EEA); and international institutions (e.g., the UN). Users are kept updated and engaged in project activities by providing them with concrete elements on which to give direct feedback. A questionnaire was shared with the stakeholders, and its results were discussed in a dedicated workshop held on 28 May 2021, attended by about 20 people from 13 different institutions. The users are also involved in defining a new way of delivering the project results to them: instead of just a set of maps, the team is developing an extensive collection of indicators and analytics that will be integrated into an interactive dashboard, allowing users to access the information they need quickly and easily.
The team
The project team is led by Planetek Italia and composed of ISPRA and CLS.
Planetek Italia is in charge of the development of the infrastructure, the engineering of the algorithms, and the communication activities. CLS is in charge of the soil sealing mask and of the experimental reversible soil sealing processing algorithms; ISPRA is in charge of the soil sealing degree processing algorithms. The interaction with the users is led by ISPRA, which is institutionally involved in the land degradation theme within international and regional organisations and is the national body responsible for the theme in Italy.
Methodology
Introduction
The project uses Sentinel-2 Level-1C as the optical/multispectral data source and Sentinel-1 SLC as the radar source. Different in-situ data are prepared for the machine learning steps depending on the target. The developed methodology aims to be applicable across the Mediterranean coasts through automatic processing of satellite data.
Considering the heterogeneity of landscapes and the extent of the Mediterranean area, three alternative approaches for Sentinel-2 calibration have been developed to cope with the varying availability of training data: NDVI calibration, linear regression, and an artificial neural network. All three methods share a common preprocessing of the data.
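As an illustration of the second approach, a linear-regression calibration can map a vegetation index to a degree of imperviousness. The sketch below uses synthetic training pairs and plain numpy; the band names, training values and the NDVI-only predictor are illustrative assumptions, not the project's operational algorithm.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI from Sentinel-2 band 8 (NIR) and band 4 (red) reflectances."""
    return (nir - red) / (nir + red + 1e-9)

def fit_sealing_regression(ndvi_train, sealing_train):
    """Fit degree of sealing (%) as a linear function of NDVI."""
    slope, intercept = np.polyfit(ndvi_train, sealing_train, 1)
    return slope, intercept

def predict_sealing(ndvi_img, slope, intercept):
    """Apply the calibration and clip to the valid 0-100 % range."""
    return np.clip(slope * ndvi_img + intercept, 0.0, 100.0)

# Synthetic training set: sealed pixels show low NDVI, vegetated ones high
ndvi_train = np.array([0.05, 0.10, 0.20, 0.40, 0.60, 0.80])
sealing_train = np.array([95.0, 85.0, 65.0, 35.0, 10.0, 0.0])
slope, intercept = fit_sealing_regression(ndvi_train, sealing_train)

# Calibrate one pixel from its reflectances
pixel_ndvi = ndvi(nir=np.array([0.30]), red=np.array([0.25]))
sealing = predict_sealing(pixel_ndvi, slope, intercept)
```

In practice such a regression would be trained per scene or per region against reference imperviousness layers, which is where the availability of training data mentioned above comes into play.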
The mission of the International Charter Space and Major Disasters is to facilitate the acquisition and the delivery of EO data at no cost to support disaster management and humanitarian relief operations in areas of the world affected by natural or man-made disasters.
In this framework, the EO data received from the Charter members is provided to the end-users via the Charter Operational System (COS-2), managed by the European Space Agency. Since 2017, ESA has been working to augment COS-2 with an on-line processing environment that facilitates the access to and processing of the large volumes of EO data (often hundreds of images) provided by the Charter members within an activation. After prototyping, development and transfer to operations, this platform, named the ESA Charter Mapper, was officially opened to support Charter operations in September 2021.
The main objective of the ESA Charter Mapper is to support the Charter Project Manager (PM) and the Value Adder (VA) during a Charter activation by providing a suite of on-line EO-based services with co-located multi-sensor EO data collections ingested from COS-2.
This platform is the first massive cloud processing platform handling a constellation of 42 satellites (33 EO missions) from 15 agencies, using state-of-the-art technologies such as Kubernetes, TiTiler, the SpatioTemporal Asset Catalog (STAC) and the COG format. STAC Assets of EO data are catalogued in the ESA Charter Mapper using Common Band Name (CBN) classes that refer to common band ranges in the electromagnetic spectrum, allowing a one-to-one mapping of multi-mission and multi-sensor bands.
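The CBN idea can be illustrated with a minimal lookup: each mission-specific band is resolved to the common band name class whose wavelength range contains its centre wavelength. The class names, band IDs and ranges below are illustrative approximations, not the platform's actual tables.

```python
# Common band name classes with approximate wavelength ranges (µm)
COMMON_BANDS = {
    "red":    (0.60, 0.70),
    "nir":    (0.75, 1.00),
    "swir16": (1.55, 1.75),
}

# Mission-specific bands: (common band name, centre wavelength in µm)
MISSION_BANDS = {
    ("sentinel-2", "B04"): ("red", 0.665),
    ("sentinel-2", "B08"): ("nir", 0.842),
    ("landsat-8", "B4"):   ("red", 0.655),
    ("landsat-8", "B5"):   ("nir", 0.865),
}

def common_band_name(mission, band_id):
    """Resolve a mission band to its common band name, checking that its
    centre wavelength falls inside the declared common band range."""
    cbn, centre = MISSION_BANDS[(mission, band_id)]
    lo, hi = COMMON_BANDS[cbn]
    if not (lo <= centre <= hi):
        raise ValueError(f"{mission}/{band_id} outside {cbn} range")
    return cbn
```

With such a table, a cross-mission request like "the red and NIR bands of every acquisition" resolves to the right sensor-specific bands without per-mission code.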
The ESA Charter Mapper lets PM/VA access multi-sensor EO data and metadata, perform visual analysis, and perform EO-based processing to extract geo-information from imagery.
The current service portfolio includes Pre-Processing, Advanced, and Specialised processors for specific hazard types. Two main types of Assets can be derived from both systematic and on-demand processing services: Visual Products (multiple-band Assets Overview images as grayscale or false color RGB composites) and Physical meaning Products (single-band Assets for TOA reflectance, Brightness Temperature, Sigma Nought in dB, spectral indexes, burned areas, surface displacements, flood and hotspot bitmasks).
Concerning visualization of EO data, multiple pre-defined RGB band composites can be viewed directly by the PM/VA in the map at full resolution after the systematic calibration of ingested optical and radar EO data. Furthermore, users can combine single-band Assets of Calibrated Datasets to create custom intra-sensor RGB band composites on the fly. Users can also visually compare pre- and post-event images directly in the map using a vertical slider bar and apply GIS functionalities to get pixel values, visualize changes in the imagery on the fly by stretching the histogram, and crop images. These visual change detection tools are quite versatile: they can be applied to many different natural disasters and are effective for a fast overview of the most affected areas. The comparison of thematic maps is also possible, allowing the evolution of catastrophic events to be depicted.
In terms of EO data exploitation, thanks to the automatic generation of Assets, as the EO data is received by ESA Charter Mapper, each processing service is able to generate geo-information products systematically or on-demand within very short times (e.g. spectral indexes, pan-sharpened images, binary map from change detection algorithm, combination of SAR intensity and multitemporal InSAR coherence). Use cases and processing results from selected activations will be presented in this work.
FLEX Instrument Flight Model Subsystems Features and Performance: Focal Plane System, Low Resolution Spectrometer, and Double Slit Assembly
H. Bittner1, Q. Mühlbauer1, M. Ibrügger1, C. Küchel1, A. Altbauer1, A. Serdyuchenko1, P. Sandri1, M. Kroneberger1, M. Erhard1, G. Huber1, A. Althammer1, Y. Gerome1, R. Wheeler2, T. Phillips2, Z. Locke2, P. Trinder2, S. Betts2, C. Greenaway2,3, Alejandro Fernández4, Alberto Antón4, Matthias Mohaupt5, Falk Kemper5, Uwe Zeitner5, Uwe Hübner6, Alexander Kalies7, Matthias Zilk7, Matthias Burkhardt7, Michael Helgert7
1) OHB System AG, 82234 Wessling, Germany
2) Teledyne e2v, Chelmsford, Essex, UK
3) Physics Department, Imperial College London, UK
4) Airbus CRISA, 28760 Tres Cantos, Spain
5) Fraunhofer Institute for Applied Optics and Precision Engineering (IOF), 07745 Jena, Germany
6) Leibniz Institute of Photonic Technology, 07745 Jena, Germany
7) Carl Zeiss Jena GmbH, 07745 Jena, Germany
The FLuorescence EXplorer (FLEX) constitutes ESA’s eighth Earth Explorer mission (EE8); the corresponding space-borne FLEX instrument is the FLuORescence Imaging Spectrometer (FLORIS), operating in the 500–780 nm spectral band. FLEX will provide information on the vegetation fluorescence signal, essential for a quantitative evaluation of the health status of vegetation.
The FLORIS instrument incorporates a High (HRSPE) and a Low Resolution Spectrometer (LRSPE), fed by a double-slit beamsplitting assembly (two slits of 84 µm × 44.1 mm each), itself illuminated through a nadir-looking telescope followed by a polarization scrambler. The operative spectral regions are 500–758 nm for the LRSPE and 677–780 nm for the HRSPE. The two imaging spectrometers are operated in push-broom mode. The Focal Plane of the LRSPE has a single detector unit with a spectral sampling of down to 0.6 nm, while the Focal Plane of the HRSPE has two co-aligned detector units to cover the spectral band with a sampling of down to 0.1 nm. The on-ground sampling distance is 293 m. The three Detector Units (Teledyne e2v CCD325-80) are high-dynamic, low-noise, back-illuminated frame-transfer CCDs with 450 (spectral dim.) × 1060 (spatial dim.) pixels of 28 µm (spectral dim.) × 42 µm (spatial dim.).
OHB System AG is responsible for the development of the HR and LR Focal Plane Systems (HR FPS and LR FPS), the Low Resolution Spectrometer, and the Double Slit Assembly (SLITA). Contributions to the development have been as follows: Focal Plane Detector Units were developed by Teledyne e2v. Airbus CRISA developed the Front-End Electronic units for the HR and LR Focal Plane System. These units provide supplies and clocks for the detectors, and adapt and filter the video signal before performing the analog-to-digital conversion with a resolution of 16 bits at a sampling frequency of 1.7 MHz. Fraunhofer IOF (Jena) provided the sophisticated slit devices and components, as well as the primary mirror for the LRSPE. Carl Zeiss (Jena) provided lens, mirror, and the highly efficient, low-straylight grating for the LRSPE.
This paper presents the specific features of these three subsystems as well as their performance characteristics from the running flight-model test campaigns.
The project is funded by ESA under Leonardo S.p.A. Contract No. 4000118350/FLEX B2-CD/OHB.
The main payload of Sentinel-6 Michael Freilich is a dual-band (Ku and C) pulse-width-limited radar altimeter, called Poseidon-4, that transmits pulses at a high pulse repetition frequency, making the received echoes phase coherent and suitable for azimuth processing. Among the unique characteristics of Poseidon-4, it is worth recalling that digital pulse range compression is performed on board to transform the received chirp using a matched filter. Accordingly, a proper calibration approach has been developed, including both internal and external calibration.
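The matched-filter range compression performed on board can be illustrated with a small numerical sketch: the echo spectrum is multiplied by the conjugate of the chirp replica spectrum, which compresses the long transmitted pulse into a sharp peak at the echo delay. The bandwidth and sampling numbers below are deliberately tiny toy values, not Poseidon-4's actual parameters (its Ku-band chirp bandwidth is on the order of hundreds of MHz).

```python
import numpy as np

def chirp(t, bandwidth, duration):
    """Baseband linear FM chirp of the given bandwidth and duration."""
    k = bandwidth / duration                    # chirp rate (Hz/s)
    return np.exp(1j * np.pi * k * t ** 2)

fs, B, T = 4e6, 1e6, 100e-6                     # sample rate, bandwidth, duration
t = np.arange(-T / 2, T / 2, 1 / fs)
tx = chirp(t, B, T)                             # transmitted pulse (replica)

# Echo: the transmitted chirp delayed by 25 µs
delay = int(25e-6 * fs)
rx = np.concatenate([np.zeros(delay, complex), tx])

# Matched filtering in the frequency domain: multiply the echo spectrum
# by the conjugate of the replica spectrum, then transform back
n = len(rx)
compressed = np.fft.ifft(np.fft.fft(rx, n) * np.conj(np.fft.fft(tx, n)))
peak = int(np.argmax(np.abs(compressed)))       # peak index = echo delay
```

This is why an accurate on-ground knowledge of the replica matters: any distortion of the replica spectrum used in the multiplication degrades the compressed impulse response, which is exactly what the CAL1 INSTR monitoring described below tracks.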
In particular, this abstract presents the long-term monitoring of the internal calibration data for the chirp replica and the attenuator, which are processed on ground by ad-hoc tools provided by Aresys to ESA:
• CAL1 INSTR: This mode measures the internal instrument transfer function in the Ku and C bands. The results of these measurements can be taken into account at the digital compression level, in the frequency-domain chirp replica, to optimize the impulse response of the instrument.
• CAL ATT: Since knowledge of the amplification gain control directly impacts the σ0 measurements, an attenuation calibration is included in the design. This mode measures the peak of the range impulse response across the full attenuation dynamic range, which is then matched to a corresponding value on ground.
The performance of the Poseidon-4 altimeter is presented here through analysis of the long-term monitoring of the on-ground processed data from the CAL1 INSTR and CAL ATT calibration sequences commanded on board. The analysis of these calibration data makes it possible to verify that the instrument has met its requirements and is maintaining its key performance over its lifetime. Moreover, in-depth analysis of the calibration data revealed how the instrument depends on its temperature and on the orbit of the satellite.